Monthly Archives: December 2012

Test Drivers

My first job, in the late 1980s and early 1990s, was at a small start-up company, as part of a three-person development team. The product we were building was a Point-of-Sale (POS) scanning system targeted at large, high-volume grocery store chains. I was responsible for the software that ran on the “front-end” register used by the cashier, and I developed this application in Turbo Pascal. The other two members of the team were responsible for the “back-end” store system, which ran UNIX and was written in C.

At this time our company was breaking new ground; there were practically no PC-based grocery scanning systems in existence, and companies like IBM and National Cash Register (NCR) were just developing POS hardware that could run Microsoft’s DOS and software written in high-level languages like Pascal and C. We had to design and build practically all of the software components ourselves – the only commercial library we used was for our database. We also had to develop our own polling protocol for the RS-485 network that connected the registers to the UNIX server. RS-485 is a serial hardware standard similar to RS-232, but it allows “multi-drop” operation, i.e. multiple devices connected to the same wire at the same time. Each register was assigned a unique number; the back-end server broadcast the registers’ numbers in turn over the network, and a register responded with its request when its number was called.
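The core of that polling scheme can be sketched in a few lines of C. This is only an illustration with hypothetical names – the real protocol’s RS-485 framing, checksums, and timeouts are not described in this post and are omitted here:

```c
#include <assert.h>

/* Sketch of the round-robin poll cycle: the server broadcasts register
 * numbers 1..NUM_REGISTERS in turn, and a register answers only when
 * its own number is called and it has a pending request. */

#define NUM_REGISTERS 25  /* a large store had upwards of 25 registers */

/* Does this register reply to the current broadcast poll? */
int register_responds(int register_id, int polled_id, int has_request)
{
    return register_id == polled_id && has_request;
}

/* Advance the server's round-robin poll to the next register number. */
int next_poll_id(int current_id)
{
    return (current_id % NUM_REGISTERS) + 1;
}
```

Because every register shares the same wire, this take-turns discipline is what keeps two registers from transmitting at once.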

The supermarket chain we were targeting as our first major customer had large stores with upwards of twenty-five registers. It was critical that our network, back-end server, and front-end registers could cope with the transaction volume responsively and reliably. In a supermarket, the register is usually the only place a customer interacts with the staff, so the check-out process has to be smooth and painless. We had signed an agreement to pilot our software and IBM’s hardware at a local store, and part of the plan was to send a busload of cashiers and a pallet of food to our office warehouse to test the store system we had set up on folding tables. The system worked for the most part, but we did have issues, and a second round of manual testing was required. Our first store installation was a success, and our small company won a $15M contract to install our POS software and IBM hardware in the rest of the supermarket chain.

Our existing testing process for a new release of software (and we did release often in the early days) was to have our customer support representatives and installers – at most five people – perform “ad-hoc” testing for a couple of hours. Our customer support team leader had worked in the supermarket industry before and would perform more systematic testing, but we were not testing the load on the server and we had no repeatability in our test environment.

Several months later, after installing a handful of stores, we had reports of performance issues at one of the new sites. Our performance goal was to poll each register and, if the cashier had scanned an item, return the item price and description, all within 0.3 seconds. My boss was considering setting up another store system in our warehouse, which would have required IBM to ship us loaner equipment, or packing up our development systems so we could camp out at the store and try to measure or recreate the performance issue.

At this time I carried a pager on weeknights and weekends, and was getting “feedback” on the system issues our customers were experiencing. I had looked at the code and, through inspection, found some possible defects, but what concerned me was that these defects were slipping through our testing. We had no way of testing the performance of the system at a large scale, and our customer, Save-On-Foods, was also very concerned that our small company would not be able to fix the issues that were affecting its customers.

My solution to our testing gap was to create a “register simulator” that provided automation: I modified the register software so that it could respond to polls on the network for either itself or a group of registers, taking its input from a text file. I also added a “key-stroke logger” so that our support people could record their testing sessions as they rang up sales on the register; the resulting text file could then be copied, edited, and expanded. Now one register could impersonate a fleet of registers, and we had the beginnings of a repeatable test process.
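The two ingredients of the simulator – answering polls on behalf of a whole range of register numbers, and replaying requests from a recorded script file – can be sketched like this. The function names and the one-request-per-line script format are my illustration, not the original code (which, on the register side, was actually Turbo Pascal):

```c
#include <stdio.h>
#include <string.h>

/* Does the simulator impersonate this register number?  One machine
 * configured with a range, e.g. 2..25, answers polls for all of them. */
int simulates(int polled_id, int first_id, int last_id)
{
    return polled_id >= first_id && polled_id <= last_id;
}

/* Replay the next recorded request from the key-stroke script file
 * (one request per line).  Returns 0 when the script is exhausted. */
int next_scripted_request(FILE *script, char *buf, size_t len)
{
    if (fgets(buf, (int)len, script) == NULL)
        return 0;
    buf[strcspn(buf, "\n")] = '\0';  /* strip the trailing newline */
    return 1;
}
```

Because the script is plain text, a support person could record one realistic cashier session, then copy and edit it into dozens of variations – which is exactly what made the test process repeatable.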

With the new register simulator we were able to instrument our back-end server under load and solve the performance issue at the newly-installed store. The simulator also allowed us to tune our protocol for better responsiveness. We had been polling each register in the store in round-robin fashion, but depending on how busy the store was, many of the registers would sit idle for long periods. One improvement was to create a “fast poll list” for active registers and a “slow poll list” for idle registers, and to poll the fast list more frequently than the slow list. We were also able to simulate the effects of new message types on the network. For example, when I interfaced to the debit card networks, I found that sending all the data required by the bank in one packet took too much time and starved the other registers of polls for their requests. I subsequently sent the debit card request in several smaller packets.
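One way to picture the fast/slow split: every cycle polls all of the recently-active registers, plus just one register from the idle list, rotating through the idle list over successive cycles. This is my reconstruction of the idea, not the original scheduler, and the exact ratio of fast to slow polls is an assumption:

```c
/* Build one poll cycle: poll every register on the "fast" (active)
 * list, then exactly one register from the "slow" (idle) list, chosen
 * round-robin by cycle number.  Writes register ids into out[] and
 * returns how many polls the cycle contains. */
int build_poll_cycle(const int *fast, int n_fast,
                     const int *slow, int n_slow,
                     int cycle, int *out)
{
    int n = 0;
    for (int i = 0; i < n_fast; i++)
        out[n++] = fast[i];            /* active registers: every cycle */
    if (n_slow > 0)
        out[n++] = slow[cycle % n_slow]; /* idle registers: take turns */
    return n;
}
```

With three active registers and twenty idle ones, an active register now waits four polls between turns instead of twenty-three, while an idle register is still polled often enough to be promoted back to the fast list when a cashier signs on. The debit card fix was the same insight in reverse: keep any one transmission short so no register hogs the shared wire.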

While automated testing and simulation are not radical concepts, at the time there was no World Wide Web or Google to search for solutions. If the register was slow, it was my responsibility to lead the investigation. My innovation had a low implementation cost, suitable for a start-up company that needed to be agile, and helped to rapidly mature our ad-hoc testing process. With it we were also able to solve some of the difficult performance issues that were hurting customers’ experience in the store, and to regain our credibility with our own customer.