Tuesday, April 27, 2010

Security Screening: Bottleneck Analysis

Earlier Dawen wrote an article about her recent experience in security screening at Gatwick Airport. I thought this was an opportunity to demonstrate a simple process analysis tool which could be considered a part of Operations Research: Bottleneck Analysis.

At the airport, servers in the two-step security check process were un-pooled and thus dedicated to one another. By this, I mean that a security system with four staff checking boarding passes (step 1) and six teams at x-ray machines (step 2) was actually functioning as four separate units rather than as a single team. Each unit had one boarding pass checker; two of the units had a single x-ray machine, and the other two had two x-ray machines each. The consequence was that the boarding pass checkers in the one-to-one units overwhelmed their x-ray teams and were forced to stop checking and stand idle, while the one-to-two units were starved of passengers because a single checker could not keep up with two x-ray machines, leaving those machines idle.

We know that this configuration is costing them capacity. A very interesting question is: How much?

A Bottleneck Analysis is a simple tool for determining a system's maximum potential throughput. It says nothing about total processing time or the number of passengers waiting in the system, but it does determine the rate at which screenings can be completed. Think of emptying a bottle turned upside down: whether it's a half-full bottle of molasses or a full bottle of wine, the maximum rate of flow is determined by the width of the neck (the bottleneck!). The maximum throughput rate of a system is equal to the throughput rate of its bottleneck.

The throughput of the current system is limited by the bottleneck within each unit, or sub-system. In the case of the one-to-one units we know this is the x-ray machine, as it cannot keep up with the supply from upstream and thus limits throughput. In the case of the one-to-two units we know it is the boarding pass checker, as the x-ray machines are waiting idly for new passengers and are thus limited by step 1. It follows that the maximum throughput of the combined system is two times the throughput of a single boarding pass checker plus two times the throughput of a single x-ray machine.

The natural reconfiguration that Dawen alludes to in her article is one where the resources are pooled and the queues are merged. Rather than having two x-ray machines dedicated to a single boarding pass checker, passengers completing step 1 are directed to the x-ray machine with the shortest queue. In this way an x-ray machine is only idle if all four boarding pass checkers are incapable of supplying it with a passenger, and a boarding pass checker is only idle if all six x-ray machines are overwhelmed.

What is the throughput of this reconfigured system? The throughput is again equal to that of the system's bottleneck. This is either the four boarding pass checkers as a group, if they are incapable of keeping the x-ray machines busy, or the six x-ray machines as a group, if they are unable to keep up with the checkers. The bottleneck, and thus the maximum throughput, is either four times the throughput of a boarding pass checker (step 1) or six times the throughput of an x-ray machine (step 2), whichever is smaller.

Returning to the exam question: how much capacity is this misconfiguration costing them? At this point we must resort to some mathematical notation or else words will get the better of us.

Readers uninterested in the mathematics may want to skip to the conclusion.

Let x be the throughput rate of an x-ray machine.
Let b be the throughput rate of a boarding pass checker.

The maximum throughput of the as-is system is thus 2x + 2b (see earlier).
If step 1 is the bottleneck in the reconfigured system then the max throughput is 4b.
If step 2 is the bottleneck of the reconfigured system then the max throughput is 6x.
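To make these expressions concrete, here is a minimal Python sketch. The rates below are purely hypothetical, chosen only for illustration, since we don't know the real values of b and x:

```python
# A minimal sketch of the two throughput calculations above.
# The rates are hypothetical; the real values of b and x are unknown to us.
b = 2.5  # boarding-pass checks per minute, per checker (assumed)
x = 2.0  # screenings per minute, per x-ray team (assumed)

# As-is: four dedicated units. The one-to-one units are capped by their
# single x-ray machine, the one-to-two units by their single checker.
as_is_throughput = 2 * x + 2 * b

# Reconfigured: pooled queues. The system bottleneck is whichever step
# has the smaller total capacity.
pooled_throughput = min(4 * b, 6 * x)

print(f"as-is maximum throughput:  {as_is_throughput:.1f} passengers/min")
print(f"pooled maximum throughput: {pooled_throughput:.1f} passengers/min")
print(f"fraction realised:         {as_is_throughput / pooled_throughput:.1%}")
```

With these made-up numbers the as-is layout delivers 9 passengers per minute against a pooled potential of 10, i.e. 90% of capacity; the general answer follows below.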

If 4b < 6x then step 1 is the bottleneck; if 4b > 6x then step 2 is the bottleneck.

If we were managers working for the British Airports Authority (BAA) at Gatwick Airport, our work would essentially be done. We could simply drop in our known values for b and x and reach our conclusion. For this article, though, we don't have the luxury of access to that information.

Returning to the exam question again, how can we determine what the cost of this misconfiguration is without knowing b or x?

We will employ a typical academic strategy:
Let b = αx or equivalently b/x = α.

If 4b < 6x, that is α < 1.5, then the throughput of the new system is 4b = 4αx.
If 4b > 6x, that is α > 1.5, then the throughput of the new system is 6x.

The throughput of the as-is system is 2b + 2x = 2αx + 2x.

The fraction of realized potential capacity in the as-is system is the throughput of the as-is system divided by the potential throughput of the reconfigured system.

If α < x =" 1/2"> 1.5 then it is (2 α x + 2 x) / 6x = 1/3 + α/3

What are the possible values of α? We know α is at least 1 (b ≥ x), because otherwise the boarding pass checkers would not have overwhelmed the x-ray machines in the one-to-one units. We know α is at most 2 (b ≤ 2x), or else the x-ray machines in the one-to-two units would not have been idle.

We now have a mathematical expression for the efficiency of the current system:

f(α) = 1/2 + 1/(2α) where 1 <= α <= 1.5
f(α) = 1/3 + α/3 where 1.5 <= α <= 2

But what does this look like?

Depending on the relative effectiveness of boarding pass checking and the x-ray machines, the current efficiency is as follows:

[Graph: efficiency f(α) of the as-is configuration, for α between 1 and 2]

If α is 1 or 2, then the as-is system is at peak efficiency. If α is 1.5 we are at our worst case scenario and efficiency is 83.3% of optimal.
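For readers who want to reproduce the values behind the graph, here is a small Python sketch of the piecewise function f(α) derived above; the sample values of α are arbitrary:

```python
# Sketch: the piecewise efficiency function f(alpha) derived above,
# evaluated at a few arbitrary sample values of alpha.
def efficiency(alpha: float) -> float:
    """Fraction of the pooled system's capacity that the as-is setup achieves."""
    if not 1.0 <= alpha <= 2.0:
        raise ValueError("alpha is argued above to lie between 1 and 2")
    if alpha <= 1.5:
        # Step 1 (boarding pass checkers) is the pooled bottleneck: divide by 4b.
        return 0.5 + 1.0 / (2.0 * alpha)
    # Step 2 (x-ray machines) is the pooled bottleneck: divide by 6x.
    return 1.0 / 3.0 + alpha / 3.0

for alpha in (1.0, 1.25, 1.5, 1.75, 2.0):
    print(f"alpha = {alpha:.2f} -> efficiency = {efficiency(alpha):.1%}")
```

Running this prints 100%, 90%, 83.3%, 91.7% and 100%, confirming the U-shaped curve with its minimum at α = 1.5.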

Conclusion

Based on the graph above, depending on the relative effectiveness of the boarding pass checkers and the x-ray machines (unknown), the system is running at between 83.3% and 100% efficiency. The most likely value is somewhere in the middle, so there is a very good chance that the configuration of the security system is costing them about 10% of possible capacity. To rephrase that, a reconfiguration could increase capacity by as much as 20%, but probably around 11%. In the worst case, a reconfiguration could allow for the reallocation of an entire x-ray team, yielding significant savings.

As stated previously, a bottleneck analysis will determine the maximum throughput rate, but it says nothing about the time to process a passenger or the number of passengers in the system at any one time. We now know that this misconfiguration is costing them about 10% of capacity, but there are other costs currently hidden to us. What is the customer experience currently like and how could it improve? Is the current system causing unnecessarily long waiting times for some unlucky passengers? Definitely. More advanced methods like Queueing Theory and Simulation will be necessary to answer those questions, both tools firmly in the toolbox of Operations Research practitioners.




Related articles:
OR not at work: Gatwick Airport security screening (an observation and process map of the inefficiency)
Security Screening: Discrete Event Simulation with Arena (a quantification of the inefficiency through simulation)

Wednesday, April 21, 2010

OR not at work: Gatwick Airport security screening

I fly through London Gatwick airport, whose operation is managed by BAA (British Airports Authority), quite a bit. Usually, I'm quite pleased with my experience through the security screening. However, for my last flight on April 1st from Gatwick to Milano, I was quite intrigued by how poorly it was run. I didn't think it was an April Fool's joke. :) So, after I went through the lines, I sat down, observed, and took some notes.

This was how it was set up (click to enlarge).

To start with, Queue1a & Queue1b were quite long and slow moving. Basic queueing theory and resource pooling principles tell us that 1 queue for multiple servers is almost always better than separate queues for individual servers. Therefore, I was surprised to see 2 queues. Roughly 100+ people were waiting in these 2 queues combined. I waited for at least 15-20 minutes to get to the CheckBoardingPass server.
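As a rough illustration of why pooling helps (not a model of Gatwick itself), here is a small Python sketch using the standard M/M/c Erlang C formula. The arrival and service rates are made-up, and Poisson arrivals with exponential service times are a simplification of a real security line, but comparing two dedicated single-server queues against one shared two-server queue shows the typical effect on waiting time:

```python
from math import factorial

def erlang_c_wait(lam: float, mu: float, c: int) -> float:
    """Mean time in queue (Wq) for an M/M/c system via the Erlang C formula."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c                       # per-server utilisation; must be < 1
    numerator = a**c / (factorial(c) * (1 - rho))
    denominator = sum(a**k / factorial(k) for k in range(c)) + numerator
    p_wait = numerator / denominator  # probability an arrival has to queue
    return p_wait / (c * mu - lam)    # mean queueing delay

# Hypothetical rates, for illustration only.
lam_total = 1.5   # passengers arriving per minute in total
mu = 1.0          # passengers processed per minute by each server

two_separate = erlang_c_wait(lam_total / 2, mu, c=1)  # two dedicated M/M/1 queues
one_pooled = erlang_c_wait(lam_total, mu, c=2)        # one shared M/M/2 queue

print(f"mean wait, two separate queues: {two_separate:.2f} min")
print(f"mean wait, one pooled queue:    {one_pooled:.2f} min")
```

With these assumed rates the average wait drops from 3.0 minutes to about 1.3 minutes simply by sharing the queue, with no extra staff.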

I wasn't bored though, because the second thing that surprised me was that within the same queue, one CheckBoardingPass server was processing passengers, while the other had to halt from time to time. This was because Queue2a was backed up to the server, while Queue2b&c were almost empty. After I saw how the x-rays were set up, it was easy to see that the unbalanced system was due to the 6 x-rays not being pooled together.

The effect was a long wait for everyone to start with in Queue1a&b; then some waited not at all (i.e. me) in Queue2b/c/d/e, while others waited in a lineup of 5-15 people in Queue2a/f. Of the 4 CheckBoardingPass ladies, 2 were busier than the others, but all could feel the pressure and frustration from the passengers in Queue1a&b. For the staff manning the x-rays, this meant some were very busy processing passengers, while others were waiting for people to show up.

Also worth mentioning was that each x-ray was staffed by 5 people: 1 before it to move the baskets and luggage towards the x-ray, 1 at it to operate the x-ray, 1 after it to move the luggage and baskets away from the x-ray, and 2 (1 male and 1 female) to search the passengers going through the gate if they triggered the bleep. It seems very labour intensive. If they studied the arrival pattern of passengers needing to be searched, I wonder if they could save some personnel by pooling at least the searchers for a couple of x-rays (if unions permit!).

We've had this type of problem cracked for some time now, so it is surprising to still see major problems like this. Gatwick Airport / BAA was obviously doing quite well all the other times I've gone through; it is remarkable how easily a good organisation can perform poorly just by ignoring a few simple queue setup rules. For example, in 2001, my master's program, run by the Centre for Operations Excellence at the University of British Columbia in lovely Vancouver, Canada, did a very good project with the local Vancouver International Airport (YVR) on just that. The project used simulation to come up with easy-to-follow shift rules for the security line-ups so that 90% of the passengers would wait less than 10 minutes to go through. In fact, the project even caught the attention of the media and was broadcast on the Discovery Channel (how cool is that, and how fitting for OR work). Watch it here. Now come on, BAA, you can do better than this.


Related articles:
Security Screening: Bottleneck Analysis (a mathematical quantification of the inefficiency)
Security Screening: Discrete Event Simulation with Arena (a quantification of the inefficiency through simulation)

Update (9 Oct 2010):
In this article, we erroneously stated that the airport operator was BAA (British Airports Authority). In fact, BAA was forced to sell Gatwick to please regulators seeking to break its monopoly on the UK's airports. Our apologies to BAA. The current owner is Global Infrastructure Partners, which also owns 75% of London City Airport.

Saturday, April 17, 2010

Hollywood stock exchange to become reality?

A year and a half ago, we wrote an article, Forecasting Hollywood movie box office revenue with HSX trading history, based on a talk by Natasha Foutz at the 2008 INFORMS Conference in Washington, DC.

Today I see in the news (Movie futures market approved) that trading of futures related to movies' box office success is about to become a reality. There may be some legal and political obstacles left to surmount, but there may yet be more data to work with in this line of research.

Curiously, the article focuses on the financial aspects of the new instruments rather than the consequences for Operations Research. Market liquidity and hedging for large and independent film financiers are laudable goals, but think of the statistics!

I would be interested to know what sort of use movie theatres/cinemas could make of these predictions when making operational and strategic decisions regarding film selection and scheduling.