IT Adding Value to Business

I have been working on a nightmare (yes, it is reporting – isn’t that always a nightmare? ;-0). One of the biggest issues, however, is that the processes in place do not accurately reflect the business process. This got me thinking about how IT can add value to business by creating proper processes. That is the purpose of this blog entry. To make it more real world, I am going to show the concepts in the form of case studies.

Case Studies

Case Study #1: Programming GPS Units

A GPS tracking company had a need to program units. Initially, this was done by using a terminal program and a script. One of the engineers (equipment engineer, not software engineer) created a program to more quickly push the script to multiple units. The IT department inherited this program and was asked to make it more flexible.

The task at hand was to program the unit and track which script the unit was programmed with. In the process of programming, IT noticed two things:

  1. The unit scripts were essentially the same, with a few differences for each back end.
  2. The unit returned a lot of metadata when it was programmed that was not being stored anywhere.

Lesson 1: Storing Data and Metadata

When the system was envisioned, the unit scripts were “tokenized” and the metadata for each unit was stored in a database. While this was not seen as important at the time, it became invaluable in troubleshooting units that were sent out. In some cases, a unit was programmed weeks before being shipped out and had an older script. In the interim between programming and shipping, a bug fix was introduced.

Prior to instituting this system, units would have to be pulled from vehicles and shipped back, creating an expense for the company. With the script being tracked, along with the metadata, engineering could send out an over-the-air command to update the “firmware” to the latest specs. Another benefit of storing this metadata was being able to proactively set up a fix on units shipped out in bulk that had bugs. Since the unit numbers could be easily identified, the system could be set up with a “when this unit first reports in, send out a fix” type of command.
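To make the idea concrete, here is a minimal sketch of the “when this unit first reports in, send out a fix” command. This is Python purely for illustration; the names (PENDING_FIXES, handle_report) and the AT command value are all hypothetical, not the actual system.

```python
# Units identified (via the stored metadata) as programmed with a buggy
# script, mapped to the over-the-air command that corrects them.
PENDING_FIXES = {
    "unit-1001": "AT$UDPAPI=12874",
    "unit-1002": "AT$UDPAPI=12874",
}

sent_commands = []  # stands in for the over-the-air transport


def handle_report(unit_id):
    """When a unit first reports in, send any queued fix exactly once."""
    fix = PENDING_FIXES.pop(unit_id, None)
    if fix is not None:
        sent_commands.append((unit_id, fix))


handle_report("unit-1001")  # fix goes out on first contact
handle_report("unit-1001")  # second report: nothing queued, nothing sent
```

The point is not the mechanics but the prerequisite: without the stored metadata, there is no way to build the PENDING_FIXES list in the first place.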

What IT was doing, in this case, was refactoring out duplication. The main difference from normal refactoring is that the duplications here were not in the code. It is a fairly common practice, in good IT shops, to refactor out duplicates in code. For example, if you find two routines that contain the same data access code, you create a new routine that both of the original routines call. One quick example is when you edit data on a web form. The original bits will end up looking like this:

protected void Page_Load(object sender, EventArgs e)
{
    //Work to bind here
}

protected void EditButton_Click(object sender, EventArgs e)
{
    //Do processing of edit here
    //Do work to bind here
}

The fix is moving out the binding bits, more like this:

protected void Page_Load(object sender, EventArgs e)
{
    BindPage();
}

protected void EditButton_Click(object sender, EventArgs e)
{
    //Do processing of edit here
    BindPage();
}

protected void BindPage()
{
    //Do work to bind here
}

The scripts were a bit different in that they were just AT commands sent to a unit, but the original might look like this:

AT$Friend =
AT$UDPAPI = 12873

Refactored it would look like this:

AT$Friend = {IP_Address}

The process is the same as refactoring code, but the metadata is then stored as a set in a database and the unit number is tagged with that set.
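A rough sketch of how the tokenized script plus stored set might work (Python here purely for illustration; MASTER_SCRIPT, TOKEN_SETS, and the token names are made up, not the company's actual schema):

```python
# One master script with {tokens} instead of hard-coded values.
MASTER_SCRIPT = [
    "AT$Friend = {IP_Address}",
    "AT$UDPAPI = {UDP_Port}",
]

# One token set per back end; in practice these live in a database table.
TOKEN_SETS = {
    "backend-a": {"IP_Address": "10.0.0.5", "UDP_Port": "12873"},
}

unit_script_log = {}  # unit number -> token set used (the stored metadata)


def render_script(unit_id, set_name):
    """Fill in the tokens and record which set the unit was programmed with."""
    tokens = TOKEN_SETS[set_name]
    unit_script_log[unit_id] = set_name
    return [line.format(**tokens) for line in MASTER_SCRIPT]


script = render_script("unit-1001", "backend-a")
```

The payoff is the log: weeks later, you can look up exactly which script version any given unit shipped with.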

Summary of lesson: Refactoring is as important for business processes and data as it is for code.

NOTE: I will have to cover IT that does not refactor code on another day. 😉

Lesson 2: Store Information Returned From Other Systems

When a unit was programmed, it returned certain data, such as the manufacturer’s revision number, manufacturer’s firmware set, etc. Prior to IT working on the programmer, none of this data was stored. And, it was not considered important, for some reason. After the programmer application was coded to store the data, the manufacturer discovered a problem with their own firmware that had to be fixed. Units on the shelf could then be identified as needing the fix and upgraded selectively, rather than attempting to upgrade firmware on all units. It also allowed the company to selectively recall units based on firmware revision, rather than recall all units of that type purchased within a certain time period (the only other way of determining faulty units).
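Here is a small sketch of why storing that metadata pays off (illustrative Python; field names like fw_revision are assumptions, not the actual schema):

```python
# The metadata each unit returned at programming time, now in a table.
units = [
    {"unit_id": "unit-1001", "fw_revision": "2.1", "shipped": True},
    {"unit_id": "unit-1002", "fw_revision": "2.2", "shipped": False},
    {"unit_id": "unit-1003", "fw_revision": "2.1", "shipped": False},
]

BAD_REVISIONS = {"2.1"}  # revisions the manufacturer flagged as faulty

# Units still on the shelf: upgrade selectively before shipping.
to_upgrade = [u["unit_id"] for u in units
              if u["fw_revision"] in BAD_REVISIONS and not u["shipped"]]

# Units already in the field: targeted recall instead of a blanket one.
to_recall = [u["unit_id"] for u in units
             if u["fw_revision"] in BAD_REVISIONS and u["shipped"]]
```

Without the stored revision numbers, both lists degrade to “every unit of that type purchased within a certain time period.”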

Summary of lesson: One thing I have learned in my 15 years in IT is that business often does not realize what it is getting from other systems. Data is often flagged as unimportant without fully ascertaining its business value. I am not suggesting you go around business and store data they have demanded you not store, but rather that you do the following:

  • Document any data that comes back from a system and make a case for storing any data that might be important (now or in the future). When doing initial design, store the data. You can always drop it out if it is determined to be unimportant.
  • Keep a document trail on any data business flags as trash. This is a cover your butt maneuver, of course, but the blame game is very common when data is pitched and IT often gets the blame.

Case Study #2: Financial Reporting

A credit card processing company needed some financial data to help market their portfolio. The data came from end of month processes. In the process of reporting, the numbers were found to be off in many cases. Looking further, it was discovered that there was no firm way to track what had been reported, making it nearly impossible to get a completely accurate set of numbers.

Lesson #1: Flag Data Used in Reporting

Transactional data and reporting data are two different things. An individual transaction is stored to make sure every process related to the transaction can be repeated. But when one reports from transactional data, there is the potential of being unable to repeat a report. In financial reporting, one such issue is inherent in the process. NOTE: I probably have the names of the bits wrong here, as I am looking at this as a non-financial expert.

  • Merchant swipes card
  • Record sent from terminal to company responsible for processing the terminal (terminal processor)
  • Record is sent from terminal processor to authorization processor (may be same company)
  • Record is sent to processor for credit card company (example: MasterCard or Visa)
  • Credit limit is checked from card originator
  • Authorization number issued sent back to authorization processor
  • Authorization number is sent from authorization processor back to terminal processor
  • Authorization number sent back to terminal
  • At end of day, all records are settled from the terminal
  • Batch settlement sent to terminal processor
  • Receipt sent back to terminal
  • Batch settlement is sent to settlement processor (may be same as authorization processor)
  • At a later time, the settlement processor combines all settlement batches
  • Settlement batch sent to ACH processor
  • Money transferred from credit account to merchant account (and vice versa – in case of chargebacks, which is generally in the form of disputed transactions or returned merchandise)

This is a bit of an oversimplification of the process.

Now, at the end of the month, all of the processing companies send out bills. The terminal processor has to bill the merchant (which is done via ACH from the merchant account, but that is not really important). What is important is that the monthly bill be sent out to the merchant.

What is normally done is that records between dates X and Y are processed to make up the bills for the merchants. The problem with this approach is that a merchant can forget to settle a batch at night. If the batch was created on 6/30 and the merchant comes in on the morning of 7/1, the monthly billing process may have already been run before those records entered the system. If you rerun the report, however, it will include these records.

The solution to this problem is to add some type of flag to the system to indicate a record has been included in a particular month’s month-end process. This makes for a repeatable process, as you report off the record flags, not the date range. On the first reporting cycle, you have a pre-process that flags the records to be included. If business asks for a re-run to completely recalculate the numbers, you reflag the records to include any the merchant has processed late. But if they just want the numbers rerun for other reports, you respect the original flags. You now have a repeatable process.
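The flagging idea can be sketched like this (Python stand-ins for the database tables; flag_cycle and report are hypothetical names, not a real billing system):

```python
# Each settled record carries a cycle flag, initially unset.
records = [
    {"id": 1, "settled": "2024-06-28", "cycle": None},
    {"id": 2, "settled": "2024-06-30", "cycle": None},
]


def flag_cycle(records, cycle):
    """Pre-process: stamp every unflagged record with this billing cycle."""
    for r in records:
        if r["cycle"] is None:
            r["cycle"] = cycle


def report(records, cycle):
    """Report off the flag, not the dates -- repeatable by construction."""
    return [r["id"] for r in records if r["cycle"] == cycle]


flag_cycle(records, "2024-06")
first_run = report(records, "2024-06")

# A late batch arrives after the month-end run...
records.append({"id": 3, "settled": "2024-06-30", "cycle": None})

# ...but a plain re-run still returns the original numbers.
rerun = report(records, "2024-06")

# A full recalculation reflags first, pulling the late record in.
flag_cycle(records, "2024-06")
full_rerun = report(records, "2024-06")
```

Reporting off a date range instead of the flag would silently produce different numbers on every rerun.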

Summary of Lesson: It is up to IT to include flags in processes that have consolidated reports. This ensures the same transactions can be used to repeat the reporting process, as well as report other numbers back to business. Once again, if business states this is not important, make sure you document the statements, as this one will come back to bite you some day.

Create Reset Scripts

I have a particular case in mind for this, but I am not sure setting this one up as a case study makes sense. Here is the background:

I am currently setting up some financial reports based on month end data (consolidated data, not the original transactions). There are some numbers that are run from other data points. In particular, I have to determine the expenses related to transactions. To make things easier, I have pulled the data from a variety of Access databases (don’t ask) into a single SQL Server table. I then added some columns to store the calculated data.

Example: As numerous scripts are run to get the totals, I have set up the following script so I can start back at square one when the calculations are revised. The X, Y and Z are obfuscations of the actual column names, as is EndOfMonthTable (no need giving away any secret sauce ;-0):

-- Drop the calculated columns entirely
ALTER TABLE EndOfMonthTable
DROP COLUMN TrueXCount, TrueYCount, XAuthExpense, YAuthExpense
     , ZAuthExpense, TotalAuthExpense

-- Rebuild them with defaults
ALTER TABLE EndOfMonthTable
ADD TrueXCount    int default 0 not null
    , TrueYCount    int default 0 not null
    , XAuthExpense    money    default 0 not null
    , YAuthExpense    money    default 0 not null
    , ZAuthExpense    money    default 0 not null
    , TotalAuthExpense    money    default 0 not null

-- Or simply zero the columns out
update EndOfMonthTable
set     TrueXCount = 0
    , TrueYCount    = 0
    , XAuthExpense = 0
    , YAuthExpense = 0
    , ZAuthExpense = 0
    , TotalAuthExpense = 0

In this case, I have multiple ways of resetting. I can completely delete the columns and rebuild them, or I can simply zero them out. The point here is I have multiple ways to go back in time, depending on my needs. The actual names are obfuscated here, but the process is what is important, not how my employer does business.

This is very similar to the rule “you must use source control” and in the case of the actual code I have it stored in SourceSafe (not my choice, I might add, but it is better than no source control).


Here is a quick summary of the lessons learned from the case studies:

  1. Refactor business processes for commonalities. This is useful for programming, as you reduce errors. It is also useful for business, as one can determine the state the system was in when the data was stored. In the case of actual hardware, it allows you to troubleshoot problems and deploy fixes more efficiently.
  2. Store information returned from other systems. This is a very common issue in businesses I have consulted for. The information is not considered important today, but becomes critical in times of crisis. Oftentimes, getting this data at a later date is either very difficult (deployed hardware, as in the case study) or costly (company B charges to get the data back at a later date). In some cases, the other company may not store the data at all, so it may even be impossible to get the data after the initial process is complete.
  3. Flag data used in processes. This is basic auditing in action. The case in point was a month end report, but any time you consolidate information, you should flag the records used for the consolidation. While not mentioned in the case study, one possible need for the flag is if the process is determined to be flawed and needs to be rerun. If you have not flagged the data used, you will potentially make some assumptions that return different numbers.
  4. When a process changes data, you should set up a way to get back to square one. This is extremely critical as you develop the process, especially if you are testing on a large set of data.

Hope this helps!

Peace and Grace,

Twitter: @gbworld

