Saturday, December 11, 2010

Excellent Data Visualisation - Mortality Statistics Meets Modern Video Technology

Exciting statistics on visual display on BBC4: a visually pleasing and thoroughly modern video. The presenter, Hans Rosling, a statistician and a guru of data animation, makes numbers look matrix-cool! I thought I was watching a 4-minute magic show. Savour the power of great data visualisation: watch the life expectancy and wealth of 200 countries progress through 200 years of history in 4 minutes.

Monday, November 8, 2010

Smart Systems and Competent Systems

It amazes me how companies won't do the most basic things with their data. About once a quarter the company that rents us our flat solicits us by mail to sell the place. I just recycled a letter from our current broadband provider encouraging us to switch to them as they have better reliability and lower rates than the competition.

Surely there should be a database out there where a simple join between a residential addresses table and a current customers table would result in a mailing list that does not include me. I'm not sure what offends me more: the excess waste this represents, not just in felled trees but in the entire supply chain that delivers me this mail, or the simple incompetence it demonstrates.
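That join really is simple. Here is a minimal sketch in Python with pandas; the table contents and the "address" key column are invented for illustration:

```python
# A toy anti-join: mail every address that is NOT already a customer.
# Table contents and the "address" key column are assumed for illustration.
import pandas as pd

addresses = pd.DataFrame({"address": ["1 Elm St", "2 Oak Ave", "3 Pine Rd"]})
customers = pd.DataFrame({"address": ["2 Oak Ave"]})

merged = addresses.merge(customers, on="address", how="left", indicator=True)
mailing_list = merged.loc[merged["_merge"] == "left_only", ["address"]]
print(mailing_list)  # 1 Elm St and 3 Pine Rd only - current customers excluded
```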

The Economist has an interesting special report this week on Smart Systems. It portrays a future where rapidly progressing sensor, wireless communication and power/battery technologies converge to deliver endless data, enabling us to analyse and optimise everything. Power grids, water works, and even cows are candidates for this new age of analytics. These could be exciting times for Operations Research practitioners. Early benefits will probably come from simple applications and may resemble the traditional benefits of IT and access to information. At a second level, Operations Research will be able to do more sophisticated things with the data, but when I see examples like the ones above, it is easy to lose faith.

Saturday, October 30, 2010

Young OR Conference April 2011 - Consultancy Stream

The OR Society is hosting the biennial Young OR conference at the University of Nottingham, United Kingdom, on 5-7 April 2011. I am organising the Consultancy stream, and I am looking for speakers, presenters and of course an audience. If you are disregarding the conference because of the word 'young', think again: 'young' in this context means <= 10 years in the field of OR. You can find more information for presenters here. Essentially, this is what you need:
  • a 200 word abstract for the conference programme
  • a presentation of max 20 minutes


I described the stream as follows:

The consultancy stream aims to attract speakers and audience interested in sharing their experiences in the practical application of Operational Research in a client-consultant setting. The consultant can be internal or external to an organisation. The problem at hand can be simple or complex, technically or organisationally.

The challenges we face as OR consultants are very similar no matter the industry, the organisation or the problem at hand. There are definite gaps between practical application and academic research in OR, but it is still one of the most rewarding jobs. The recommended format would be a case study presentation covering the entire cycle of the project where possible, but presentation creativity is absolutely encouraged.

  • How did the problem find you or how did you find the problem? i.e. How was it sold?
  • Steps taken to establish your course of action
  • OR and non-OR techniques and methodologies used to structure and solve the problem
  • How were your findings and recommendations communicated to the stakeholders and decision makers in an effective way?
  • How did the client take your recommendations? Did they implement?
  • Finally, what do you enjoy most about your job?

Most of all, have fun and meet some fellow Operational Research practitioners.


Please pass on the message. Better yet, please drop me a line to present! As you can see, the stream description is very broad, encompassing all real-life applications of OR. You don't have to have 'consultant' in your title, and neither does your company or organisation. Come and share your experience and the fun (or pain?) of applying Operational Research to anything from ordinary day-to-day life to extraordinary situations of, for instance, life and death and taxes. How have you helped with better and more informed decision making?

Wednesday, October 13, 2010

Oyster Card Optimisation

Transportation is an industry where a lot of Operations Research is practiced. In the following article I would like to share an example of optimisation that I have noticed in the fare pricing system on the London Underground.

Public transportation in London, England has a convenient and efficient means of collecting fares from travellers. Introduced back in 2003, the Oyster Card is the size of a credit card and is pre-loaded with money by the traveller. On each trip they take, the traveller touches the oyster card to a reader, registering their journey with the system which deducts payment from their balance. Each single journey is charged at a different rate depending on the origin zone, destination zone, and time of day.

A daily capping system is in place such that you will never pay more in a day than the price of a day-pass covering all of your journeys. For example, on a day when you travel only in Zone 1 off-peak, your journeys will cost £1.80, £1.80, £1.80, £0.20, £0, £0, and every journey after that is free: once your charges reach the daily cap of £1.80 × 3 + £0.20 = £5.60, you essentially have a day-pass on your card.

A Canadian friend of mine, currently residing in Australia, visited me here in London the other weekend. Knowing the ease, convenience, and price-capping guarantee, I recommended that he get an Oyster Card. He loaded it up with £10 at Heathrow and came into town to drop his bags at my place. After a short jet-lag nap he headed out into the core to see the tourist sights, travelling frequently on the underground. At the end of the day he reported that his Oyster Card credit had run out and that he had needed to top up the balance. This surprised me, so we worked out his journeys and payments:
  • Zone 6 (Heathrow) to Zone 1 at Peak - £4.20
  • 6 x Zone 1 Off-Peak - £1.80 each

Because he travelled from Zone 6 to Zone 1 at peak, his cap for the day was £14.80, even though had he bought a Zone 1 day-pass at Heathrow he would have paid only £5.60 + £4.20 = £9.80. So the Oyster Card is convenient and comes with a price capping system, but there are holes in that system. In this case the hole cost him £5.00, about an hour's work at minimum wage in the UK, so not trivial.

Any individual travelling on a public transportation network wants to perform an optimisation: minimise total cost by selecting the most efficient combination of fares to cover all of their journeys. This is a classic optimisation problem: subject to constraints, such as the requirement that every journey be covered by a ticket, minimise total cost, a function of the ticket-buying decisions. A problem like this can be formulated mathematically and solved by computers using a discipline called integer programming, one of the tools in the Operations Research practitioner's toolbox.
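To make that concrete, here is a toy integer-programming formulation of my friend's day, sketched with the open-source PuLP library. The fares, pass prices and coverage sets are illustrative assumptions, not the actual fare tables:

```python
# A toy set-covering integer program: pay each journey as a single fare
# or cover it with a day pass, minimising total spend.
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum, value

journeys = {"heathrow_peak": 4.20, "z1_1": 1.80, "z1_2": 1.80, "z1_3": 1.80,
            "z1_4": 1.80, "z1_5": 1.80, "z1_6": 1.80}          # single fares
passes = {"z1_daypass": 5.60, "z1_to_z6_daypass": 14.80}       # day-pass prices
covers = {"z1_daypass": [j for j in journeys if j.startswith("z1")],
          "z1_to_z6_daypass": list(journeys)}

prob = LpProblem("oyster_fares", LpMinimize)
single = {j: LpVariable(f"single_{j}", cat=LpBinary) for j in journeys}
buy = {p: LpVariable(f"buy_{p}", cat=LpBinary) for p in passes}

# Objective: total spend on single fares plus day passes.
prob += lpSum(journeys[j] * single[j] for j in journeys) \
      + lpSum(passes[p] * buy[p] for p in passes)

# Every journey must be paid for, either as a single or by a covering pass.
for j in journeys:
    prob += single[j] + lpSum(buy[p] for p in passes if j in covers[p]) >= 1

prob.solve()
print(f"optimal cost: {value(prob.objective):.2f}")  # 9.80, not the 14.80 cap
```

The solver finds the £5.60 Zone 1 day-pass plus the £4.20 single, the £9.80 my friend should have paid.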

If this problem can be solved by computers, why doesn't the Oyster Card system provide a lowest price guarantee rather than the evidently imperfect price-capping system? Consider for a moment the requirements of the system:
  • Daily ridership of around 3 million
  • At the end of their journey, users must be told almost instantaneously what the cost was and what their remaining balance is

Optimisation problems of this nature are not always fast, easy, or even possible to solve optimally. The computers of today are fast, but plenty of problems are still beyond them. The tube system isn't even using the latest technology; I've been told that some Underground components still run on punch cards! This optimisation would have to be calculated every time a customer completes a journey, roughly 3 million times a day, and that is unfortunately too much.

When an optimisation problem is too big or too complex to solve directly and perfectly, analysts use heuristics to come up with near-optimal solutions. There are commonly used methods, but depending on the problem, customised heuristics can be developed that exploit the unique structure of the problem in question to produce a near-optimal result. That is exactly what the price capping system is: a heuristic that makes a good approximation of the lowest price.

There are effectively only two types of tickets in the system, single tickets and day passes; day passes are the only way to save money; and it is rarely worthwhile buying two separate day passes. It follows naturally that a simple rule of thumb for cost optimisation is to compare your daily total of single trips to the price of a day pass covering all those journeys and choose the lower option. The conditions listed at the start of this paragraph are essential consequences of the structure of the problem, and we can exploit them to arrive at our simple heuristic, the same one that the Oyster cards use, sketched below.
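A minimal sketch of that heuristic as the card applies it in real time: charge each single fare until the running total hits the applicable day-pass price, then charge nothing. The fares here are the ones from the worked example earlier:

```python
# Price-capping heuristic: never let the day's total exceed the day-pass price.
def charge_journeys(fares, day_pass_price):
    total, charges = 0.0, []
    for fare in fares:
        charge = min(fare, day_pass_price - total)   # never charge past the cap
        total += charge
        charges.append(charge)
    return charges

print(charge_journeys([1.80] * 6, day_pass_price=5.60))
# [1.8, 1.8, 1.8, 0.2, 0.0, 0.0] - matching the worked example above
```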

In a future article I hope to look into formulating the optimisation problem of the London Underground and consider alternative heuristics.

Saturday, October 9, 2010

Expedia Revenue Management at Check-out or Rule Compliance

We have all shopped online for something only to be told, after making the purchase decision, that it is no longer available, or no longer available at that price. This often happens when buying flights: prices can change minute to minute, and you can be left facing a much higher ticket price that makes you abandon your purchase. Disappointment all around.

However, the opposite happens from time to time as well! When I found it, the price of a London to Seattle flight was £649.07 (including all fees). I clicked to start jumping through all the purchase hoops, but a couple of steps into the check-out process, it flagged up, rather alarmingly, as £616.07. That's a 5% decrease in price. (See, I'm not making it up!)





I was pleasantly surprised, of course. But why would they do that?

I've got 2 suspicions.

1. Revenue Management / Yield Management / Consumer Psychology
In the weeks prior to this screen capture, I had been to the site a few times already looking for the exact same flight. Even though I wasn't logged in, I'd venture to guess that the site looked up my cookies and knew that I'd been looking for these flights. It should therefore know that I was a likely buyer rather than a window shopper (PC pun intended). I'd been at the check-out stage before, but had eventually abandoned the shopping cart. It would be quite logical for the site to entice me with a lower price as a 'pleasant surprise' to finally get me to spill my moola. Not to mention the positive impression it leaves with the shopper (look what I'm doing now - free advertising!).

However, is it worth the 5% price drop? How does Expedia decide 5% is the right balance of customer incentive and revenue loss? I was already a willing customer, ready to bite; isn't it just giving the 5% away for free? In my case, it's difficult to say whether the move gained my loyalty to Expedia, because I was already a frequent visitor and buyer there. It may have reinforced my loyalty though. It would be very interesting to analyse a few years' purchase and cart-abandonment data for customers to whom this has happened, versus a control group. Would we observe a higher purchase completion rate, driving higher lifetime revenue per customer?

2. Airline price adjustment rule compliance
There could be a regulatory rule in the online airline pricing world, designed to protect consumers, requiring the vendor to notify the buyer of last-minute price changes before the final purchase is completed. I don't know whether such a rule exists, but it is possible. It does, however, sound extremely difficult for regulators to enforce and monitor.

I personally think it's more the former than the latter. One way to test the real reason behind the price drop would be to see whether it's always a 5% decrease. Time to do some more window shopping for flights!



P.S. In a previous article, where we observed operational inefficiencies at London's Gatwick Airport, we erroneously stated that the airport operator was BAA (British Airports Authority). In fact, BAA was forced to sell Gatwick to please regulators seeking to break a monopoly on the UK's airports. Our apologies to BAA. The current owner is Global Infrastructure Partners, which also owns 75% of London City Airport.



Update:
Responding to two unconstructive comments, one of which was downright rude and was deleted, we thought we would add to this article.

The commenters suggest that Expedia is not a price setter but just a re-seller, making possibility one above unlikely. That said, the question still stands: "What's going on here?" If the prices that Expedia shows you when you search are cached rather than live, that seems to me to be a surprising shortcoming. If they are live, why offer a lower price to someone who appears to have already made the decision to purchase?

There are probably a number of factors at play that someone from the online travel community could answer.

If I were reselling through Expedia, I would want the price-updating algorithm to quote the higher of the two prices at the point of payment, i.e. more profit. Both Expedia and the vendor are motivated to collect the higher price, and therefore a higher commission, since commission is a percentage of the selling price.

The commenters may be quite correct in saying that Expedia doesn't set the price, but merely re-sells at whatever price the vendor names. That's why we said there were two possibilities, the second involving no revenue management. However, if Expedia is not practicing revenue management in this way, they probably should at least experiment with it. Their commission represents headroom within which they can optimise, and the goal, after all, is not to make the greatest profit on each sale, but the greatest profit across all possible sales.

Wednesday, September 15, 2010

Restaurant Systems Dynamics - Influence Diagrams

Systems Dynamics is a discipline that floats about in the management science/management consulting ecosystem. It is closely related to Systems Thinking, which covers much more ground but includes no aspect of simulation. The two most important aspects of Systems Dynamics are influence (causal) diagrams and continuous simulation. Today I would like to outline an example of using influence diagrams to study a simple system, gain strategic insight, and form the basis of a stock-and-flow continuous simulation.

I was in Paris the other weekend, looking for a restaurant for Sunday lunch. Finding a good restaurant as a tourist is always difficult because tourist restaurants just aren't very good. The restaurants in my neighbourhood in London rely a lot on repeat business and referrals from friends and engage in a repeated interaction with their customers. The restaurants in touristy areas, on the other hand, get the majority of their business based on location. My local restaurant wants to deliver value for money so that I or my friends will come again. The restaurant in Venice never expects to see me again and is motivated to give me the lowest value for money to maximise profit. We have here an example of repeated and non-repeated games, but this is not an article about game theory.

As regular travellers, we have a strategy for finding the right place. There are a number of aspects to that strategy, but the one I want to highlight today is: Find busy restaurants. We are by no means the only people employing this strategy, as it is clear that busyness should be an indication of quality.

Where is this all going? I'm telling this story because I want to use an influence diagram to study restaurants in general and touristy restaurants in particular, and to gain strategic insight from that. Influence diagrams are used to study the interactions in a system, particularly those between key strategic resources. In the case of our restaurants these will be:
  • Customers occupying tables
  • Customers queuing for tables
  • Perceived restaurant quality
  • Available customers


Figure 1. Simple Tourist Restaurant Influence Diagram

The make-up of an influence diagram is relatively simple:
  • Strategic resources, flows or other system variables
  • Arrows indicating one influencing another
  • An indication of a positive influence or negative influence
  • Optionally, indications of reinforcing and balancing loops

Consider Figure 1 above, the influences shown are as follows:
  • As the number of "New Customers Arriving" increases, the number of "Customers Occupying Tables" increases
  • As the number of "Customers Occupying Tables" increases, the "Perceived Restaurant Quality" increases
  • As the "Perceived Restaurant Quality" increases, the "New Customers Arriving" increases
  • As the number of "Customers Occupying Tables" increases, the "Length of Queue for Seating" increases
  • As the "Length of Queue for Seating" increases people will be discouraged and it will reduce the number of "New Customers Arriving"
  • As the number of "New Customers Arriving" increases, the number of "Available Customers" decreases
  • As the number of "Available Customers" decreases, the number of "New Customers Arriving" decreases

Reinforcing loops can be exploited to achieve exponential growth and profit, but can also cause exponential collapse and bankruptcy. Balancing loops are often related to limited resources: they limit what we can achieve, but also serve to mitigate damage.

Loop B1 is a balancing loop: As more customers choose to enter our restaurant, the total number of potential customers is diminished, thus reducing the flow of new customers. This puts a natural limit on our business, the number of potential customers.

Loop B2 is a balancing loop: As more customers arrive, our tables experience a higher and higher occupancy and customers must wait in a queue either for other customers to leave or for dirty tables to be turned over. Here is another resource constraint on our system: capacity.

Loop R1 is a reinforcing loop: more customers lead to an increased perception of quality, which then leads to more customers. This is the key reinforcing loop that we should study further.

The key strategic conclusion that can be drawn from studying this influence diagram comes out of loop R1, the reinforcing loop. The consequence of this loop is that full restaurants tend to stay full and empty restaurants tend to stay empty. Given that each restaurant starts the day empty, the key challenge appears to be in first becoming not-empty. Easier said than done.

Restaurants and bars have a number of ways of achieving this. The first, but least interesting, is simply good quality. A regular customer base or recommendations in guide books will provide the seed customers from which a full house can grow. Alternatively, we need some other means of getting people in the door. This makes me think of my time in Turkey on the Mediterranean coast. Walking along the waterfront in a tourist town, a restaurant owner offered me a half-priced beer as long as I would sit along the front edge of his balcony. If this makes you think of happy hour there's probably a good reason.

I will admit that the "strategic insights" discussed above with respect to the restaurant industry are not earth moving, profound, or even unexpected. However, this article provides a simple real-world example of a dynamic system, and demonstrates the concept nicely. Had we not already known that full restaurants stay full and empty restaurants stay empty, going through this exercise could have revealed that to us.

The next step would be to design a simulation based on the influence diagram, something that I will endeavour to do in a future article.
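In the meantime, here is a minimal stock-and-flow sketch of Figure 1 in Python, with all parameters invented for illustration. Fuller tables raise perceived quality (R1), the queue discourages arrivals (B2), and the pool of available customers depletes (B1):

```python
# A toy Euler-stepped stock-and-flow model of the restaurant diagram.
DT = 0.1                      # timestep, hours
CAPACITY = 40.0               # tables
MEAL_HOURS = 1.0              # average meal duration

def run_day(occupying, available=500.0, queue=0.0, hours=12):
    for _ in range(int(hours / DT)):
        quality = occupying / CAPACITY                    # perceived quality (R1)
        arrivals = DT * available * 0.5 * quality ** 2 \
            * max(0.0, 1 - queue / 10)                    # B2: queue puts people off
        available -= arrivals                             # B1: pool depletes
        queue += arrivals
        occupying -= DT * occupying / MEAL_HOURS          # diners finish and leave
        seated = min(queue, CAPACITY - occupying)
        queue -= seated
        occupying += seated
    return occupying

print(run_day(occupying=1.0))   # starts nearly empty: R1 never takes hold
print(run_day(occupying=10.0))  # seeded with diners: fills up and stays busy
```

With these made-up numbers there is a critical occupancy below which the reinforcing loop never takes hold, which is exactly the full-stays-full, empty-stays-empty behaviour described above.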

Wednesday, September 1, 2010

What motivates us the most

First let me make clear that I am talking about motivation in the workplace. In personal life it's easy: in the first half of our lives it's sex, in the second half it's comfort. (Said with tongue in cheek.)

But workplace motivation is more intriguing, and it is an area that every OR specialist should keep at the forefront of their mind: the questions and aspects of human motivation. Here's an excellent animated video derived from a talk by Dan Pink at the RSA. It seems Mr. Pink also excels in self-motivation, since this lecture is a small masterpiece.
True, these research findings have been popping up here and there for at least the last two decades, and lots of companies are adopting some of these principles, but this short video sums it all up in an excellent, concise way. Enjoy!



However, I personally think that all these findings are missing some essential qualifications. I think they reflect the motivation of people in developed countries, where there is no hunger and war is something nobody really remembers.
To echo the words of Mika Waltari in his book The Egyptian, where he describes one lucky country Sinuhe travels through: "...and the people, who knew neither hunger nor war, were already in middle age...".
I wonder how the same research would turn out in war-torn Angola or Iraq.
I suspect that this type of "make the world a better place" altruism grows best in an economically nutritious Petri dish: a relatively wealthy society. But what do I know about poorer countries? Maybe they would surprise us the most. The world is changing, after all. It's the Internet age now.

One more observation, about the phenomenon of people working in their free time for free (Linux developers, etc.). At first I would liken it to simple hobbyism, and I think it does indeed have its roots in hobbies: everybody at some point in their life likes to build a "model airplane" and see it fly. But, and here comes my observation, they would like even more to see it soar, not just fly. In other words, people don't mind working for free on somebody else's project (i.e. Linux), but they prefer to jump on a winning bandwagon. The likelihood of overall impact (let's even say worldwide impact) is a specific motivation of its own.


Sunday, August 1, 2010

Want to be creative? Don't brainstorm.

I'm sure many of us have had the thought before, "Oh, I wish I were more creative". I have. I'm also sure many, many of us have led or participated in a brainstorming session before. I have too. Apparently, both are counterproductive to being more creative, according to this article. To top it off, research as far back as 1958 has apparently shown that brainstorming doesn't work. I never knew about this. Did you?



In businesses, one of the common outcomes of operational research work is the improvement of a particular process. We often move from understanding the problem, to mapping the process, to building a model that reflects the current process. Eventually, to add value to the bottom line, the model hopefully reveals some insights and becomes the tool for testing ideas to support process changes. Personally, it is often a pleasure to be involved from the beginning of problem understanding through to the end, managing the recommended process changes, because as an OR consultant you get to see your work come to fruition.

Of course, to succeed in change management, the ideas should come from the stakeholders who live and breathe the process in question, and eventually own the solutions to be implemented. To get ideas from stakeholders for process improvement, the common technique is to gather stakeholders in a room and brainstorm on possible solutions.

Given the information presented in the article, my takeaway is that, to prepare for the brainstorming session, we should:
  • present the problem to the group before the brainstorming session,
  • ask them to prepare by thinking up possible solutions that their colleagues or friends wouldn't think of,
  • get them back in a room to discuss each other's ideas and prioritise the ones whose feasibility and impact are worth investigating,
  • but before they start discussions, get them to do some aerobics for 30 minutes if they are somewhat fit (half serious, but wouldn't that be fun?),
  • culture them with a youtube video about the weird and cool stuff in other countries (half serious, but wouldn't that lighten up the mood?),
  • facilitate the session with careful language to not instruct people to be creative
  • facilitate the session so the group moves back and forth between a couple topics to be able to take a break from focussing on just one solution
  • and perhaps not name it a brainstorming session, because it may be the forum that people associate with "get creative...now", which is counter productive, as per the article
Read it if you've got 3 minutes. Let me know what you take away from it that I've missed. (See the instant application?) The main points in the article for helping someone be more creative are:
  • Don't tell them to be creative
  • Get moving
  • Take a break
  • Reduce screen time
  • Explore other cultures
  • Follow a passion
  • Ditch the suggestion box

Tuesday, July 13, 2010

What qualifies as a Simulation Model?

A theme that has run through my career since my Master's project is the question of measuring complexity in modelling and simulation. When can one proclaim to have built a simulation model, and when is one glorifying a simple analysis?

In the Operations Research ecosystem the tendency is certainly to inflate. Salesmen, curriculum vitae authors, recruiters and consultancies across the spectrum are all motivated to embellish the work that they do and the work that is done. Like any scientific individual, I seek to slice through the static, inform myself as to who is doing extraordinary work, and build myself a framework from which I can safely criticise the inflations of others.

I have been working on a set of rules for separating "models" into models, calculations and simulations. I feel like there is a gaping opportunity here for contribution from complexity, chaos, and other disciplines in Computer Science and Mathematics, but here's what I've put together thus far:

Simulations are models, but not all models are simulations. Calculations are not models.

Models
  1. A model is a simplified representation of a system.
  2. All models are wrong, but some models are useful
Calculations
  1. The result of a calculation can be expressed in a single equation using relatively basic mathematical notation.
  2. Where calculations contain a time element, values at different times can be determined in any order, without reference to previous values.
Simulations
  1. A simulation is a calculation in which one parameter is the simulation clock that increments regularly or irregularly.
  2. The outcome of a simulation could not have been determined without the use of the clock.
  3. While an initial state is typically defined, an intermediate state at a given time should be difficult or impossible to determine without having run the simulation to that point.
  4. Almost any model that involves repeated samples of random numbers should be classified as a simulation.
Consider the following progression of "models" that output an expected total savings:
  1. Inputs: Expected total savings.
  2. Inputs: Annual savings by year, time-frame of analysis.
  3. Inputs: Annual savings per truck per year, number of trucks by year, time-frame of analysis.
  4. Inputs: Annual savings per truck per year, current number of customers, number of trucks per customer, annual increase in customers, time-frame of analysis
  5. Inputs: Annual savings per truck per year, current number of customers by geographical location, annual increase in customers by geographical location, routing algorithm to determine necessary trucks, time-frame of analysis.
  6. Inputs: Annual savings per truck per year, current number of customers by geographical location, distribution of possible growth in customers by geographical location, routing algorithm to determine necessary trucks, time-frame of analysis.
As you can see, complexity builds and eventually passes a threshold beyond which we would accept it as a model. "Model" 4 is still little more than a back-of-the-envelope calculation, but Model 5 takes a quantum leap in complexity with the introduction of the routing algorithm. Model 5, however, I would still not classify as a simulation, because any year can be calculated without having calculated the others. Finally, Model 6 introduces a stochastic variable (randomness) that compounds from one year to the next and brings us to a proper simulation.
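To make the distinction concrete, here is a sketch contrasting "Model" 4 with Model 6. All figures are invented, and Model 5's routing algorithm is omitted for brevity:

```python
# "Model" 4 is a calculation: any year is computable directly.
# Model 6 is a simulation: each year depends on the last via a random draw.
import random

SAVINGS_PER_TRUCK = 1000.0    # annual savings per truck (illustrative)
TRUCKS_PER_CUSTOMER = 0.1
YEARS = 10

def model4_savings(year, customers0=50, growth_per_year=5):
    # Calculation: year t is a closed-form function of the inputs,
    # computable in any order without a clock.
    customers = customers0 + growth_per_year * year
    return SAVINGS_PER_TRUCK * TRUCKS_PER_CUSTOMER * customers

def model6_total(customers0=50):
    # Simulation: growth is random and compounds, so year t cannot be
    # known without stepping through years 1..t-1 on a clock.
    customers, total = float(customers0), 0.0
    for _ in range(YEARS):
        customers *= 1 + random.gauss(0.05, 0.03)   # stochastic growth
        total += SAVINGS_PER_TRUCK * TRUCKS_PER_CUSTOMER * customers
    return total

print(model4_savings(year=7))   # directly, no clock needed
print(model6_total())           # different on every run
```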

I've seen calculations masquerading as simulation models at a Fortune 500 company, both internally and externally produced. The result is the same, outcomes determined from data with validity asserted by the author, but I know that Operational Research practitioners reading this will appreciate my desire to classify. At the very least it will help us separate what the MBAs do with spreadsheets from our own work.

I welcome input from others on this topic, as I am only just developing my own theories.

Sunday, June 27, 2010

Travel, being an OR consultant, and another blog

Activity on the ThinkOR blog has been a bit thin in the last month or so. Summer has arrived and we have been busy enjoying it as much as we can from London. So far it's been a great half year: Exeter UK, Istanbul, Bursa, Ayvalık, Bergama (Pergamon) TR, Riga LV, Berlin DE, Milan, Venice, Padua, Verona IT, the Algarve PT, New Delhi, Agra, Udaipur IN, Bahrain BH, Malaga ES, Reykjavik IS, and of course Canada and the US. Not bad, eh?

Travel:

To travel this much for leisure (18 countries last year) and to cover as many interesting cities as possible across the continents (the objectives), while not breaking the bank, using as few vacation days as possible (we've used only 9 so far), avoiding anticipated bad weather, not leaving work too early for flights, and not overdoing it and tiring ourselves out (the constraints), means that we need an optimised strategy. We travel on weekends and use bank holidays as much as possible. We travel on a budget with lean (polite for 'cheap') airlines like EasyJet and Ryanair, flying out after 5pm on a Friday, trading off more time in the destination against the cost of one extra night of hotel and the peak rate for flights after 5pm. We weigh the central location of hotels against the higher cost usually associated with it. We also research the temperature and the likelihood of rain for the cities on our list and line the cities up with the weekends we would like to travel, though our list is often dictated and changed by the airlines' destinations and the routes on sale. It's practically a part-time job as a travel agent, because it is quite time consuming. However, we usually plan a couple of months in a batch process and don't need to think about it again once it's in the diaries. It's kind of fun planning it, and more fun zipping away every second or third weekend.

Being an OR consultant

I just started a new job on Capgemini Consulting's operational research team, and have already done one project with a major consumer product manufacturing and distribution company. It was a very interesting project, in which I enjoyed modelling their supply chain and cash-to-cash cycle, and the impact of one seemingly simple decision on the bottom line. This is exactly what OR is for: helping businesses make more informed decisions. The project was quite short and intense. One of the most important attributes OR people bring to the table in situations like this is knowing what you can average and when, which assumptions are OK, and which would come back and bite you in the butt. Perfection mostly takes a back seat to delivery deadlines. It reminded me of what an advisor told me at uni: "What you learn at school will get applied very little in real life, because businesses never have the time to let an OR guy properly figure out the problems and solutions. They want quick answers and they want them now."

Another OR blog

Capgemini has a very cool group of OR people, and they have an OR blog too: Figure it Out. Check it out. Interesting articles on the real-life applications of operational research, particularly relevant to UK topics. Of course, I will be writing for them too, as soon as I acclimatise a little.

P.S. We at ThinkOR are very honoured to be named as one of the favourites in the OR blog world by Maximize Productivity with IE & OR Tools. Thank you very much. It is a real honour. Please let us know any topics you'd like to read about more, and we will try our best to research and write about them.

Thursday, May 13, 2010

Security Screening: Discrete Event Simulation with Arena

Simulation is a powerful tool in the hands of Operations Research practitioners. In this article I intend to demonstrate the use of discrete event process simulation, extending the bottleneck analysis I wrote about previously.

A few days ago I wrote an article demonstrating how you could use bottleneck analysis to compare two different configurations of the security screening process at London Gatwick Airport. Bottleneck analysis is a simple process analysis tool from the toolbox of Operations Research practitioners. I showed that a resource-pooled, queue-merged process might screen as many as 20% more passengers per hour, and that the poor as-is configuration was probably costing the system something like 10% of its potential capacity.

The previous article is worth reading before continuing, but to summarise briefly: security screening happens in two steps, a check of the passenger's boarding pass followed by the x-ray machines. Four people checking boarding passes and six teams working x-ray machines were organised into four sub-systems, each with one checker and one or two x-ray teams. The imbalance within each sub-system forced a resource to be under-utilised, and Dawen quite rightly pointed out that by joining the system together as a whole, such that all six x-ray machines effectively served a queue fed by all four checkers, a more efficient result could be achieved. We will look at these two key scenarios, comparing the As-Is system with the What-If system.

The bottleneck analysis was able to quantify the capacity that is being lost due to this inefficiency, but as I alluded, this was not the entire story. Another big impact of this is on passenger experience. That is, time spent waiting in queues in the system. In order to study queuing times, we turn to another Operations Research tool: Simulation, specifically Process-Driven Discrete Event Simulation. Note: There may be an opportunity to apply Queuing Theory, another Operations Research discipline, but we won't be doing that here today.

Discrete Event Simulation

Discrete Event Simulation is a computer simulation paradigm in which a model is made of the real-world process, with the key focus on the entities (passengers) and resources (boarding pass checkers and x-ray teams) in the system. The focus is on discrete, indivisible things like people and machines. "Event" because the driving mechanism of the model is a list of events processed in chronological order, with events typically spawning new events to be scheduled. (The alternative driving mechanism is fixed timesteps, as in system dynamics and continuous simulation.) A DES model allows you to go beyond the simple mathematics of bottleneck analysis: by explicitly tracking individual passengers as they move through the process, important statistics can be collected, like utilisation rates and waiting times.
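To make the event-list mechanism concrete, here is a minimal single-server sketch in Python. It is not the Arena model described below, and the rates are illustrative:

```python
# A tiny event-driven queue: a heap is the event calendar, and each
# event processed may schedule future events (the defining DES mechanism).
import heapq
import random

random.seed(42)
ARRIVAL_RATE, SERVICE_RATE = 5.0, 6.0   # passengers per minute (illustrative)

events = [(random.expovariate(ARRIVAL_RATE), "arrival")]
fifo, server_busy, waits = [], False, []

while len(waits) < 10_000:
    t, kind = heapq.heappop(events)              # next event in time order
    if kind == "arrival":
        # Schedule the following arrival, then join the queue or start service.
        heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE), "arrival"))
        if server_busy:
            fifo.append(t)                       # remember when we joined
        else:
            server_busy = True
            waits.append(0.0)
            heapq.heappush(events, (t + random.expovariate(SERVICE_RATE), "departure"))
    else:                                        # a departure frees the server
        if fifo:
            waits.append(t - fifo.pop(0))        # waiting time of next passenger
            heapq.heappush(events, (t + random.expovariate(SERVICE_RATE), "departure"))
        else:
            server_busy = False

print(f"mean wait: {sum(waits) / len(waits):.2f} minutes")
```

Tools like Arena wrap exactly this mechanism, plus the statistics collection, behind a graphical front end.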

During my master's degree, the simulation tool at the heart of our simulation courses was Arena from Rockwell Automation, so I tend to reach for it without even thinking. I have previously used Arena in my work for Vancouver Coastal Health, simulating ultrasound departments, and there are plenty of others associated with the Sauder School of Business using Arena. Example. Example. Arena is an excellent tool and I've used it here for this article. I hope to test other products on this same problem in the future and publish a comparison.

In the Arena GUI you put logical blocks together to build the simulation in the same way that you might build a process map. Intuitively, at the high level, an Arena simulation reads like a process map when in actuality the blocks are building SIMAN code that does the heavy lifting for you.

The Simulation

Here's a snapshot of the as-is model of the Gatwick screening process that I built for this article:


Passengers decide to go through screening on the left, select the boarding pass checker with the shortest queue, are checked, proceed to the dedicated x-ray team(s) and eventually all end up in the departures hall.

An x-ray team is assumed to take a minute on average to screen each passenger. This is very different from taking exactly a minute to screen each passenger: stochastic (random) processing times are an important source of dynamic complexity in queuing systems, and without modelling that randomness you can draw totally wrong conclusions. For our purposes we have assumed an exponentially distributed processing time with a mean of 1 minute. In practice we would grab our stop-watches and collect the data, though as outsiders we would probably get arrested for doing that. Suffice it to say that this is a very reasonable assumption; exponential distributions are often used to express service times.

As in the previous article, we were uncertain as to the relationship between throughput of boarding pass checkers and throughput of x-ray teams. We will consider three possibilities where processing time for the boarding pass checker is exponentially distributed with an average of: 60 seconds (S-slow), 40 seconds (M-medium), 30 seconds (F-fast) (These are alpha = 1, 1.5 and 2 from the previous article). In the fast F scenario, our bottleneck analysis says there should be no increased throughput What-If vs. As-Is because all x-ray machines are fully utilised in the As-Is system. In the slow S scenario there would similarly be no throughput benefit because all boarding pass checkers would be fully utilised in the As-Is system. Thus the medium M scenario is our focus, but our analysis may reveal some interesting results for F and S.

We're focused here on system resources and configuration and how they determine throughput, but we can't forget about passenger arrivals: the number of passengers actually requiring screening is the most significant limitation on the throughput of the system. I fed the system six passengers per minute, the capacity of the x-ray teams. This ensured both that the x-ray teams had the potential to be 100% utilised and that they were never overwhelmed, keeping x-ray queuing times comparable.

I ran 28 replications of the simulation (four weeks) and let each replication run for 16 hours (a working day). We need to run the simulation many times because of the stochastic element: since the events are random, a different set of random outcomes will lead to a different result, so we must run many replications to study the range of possible results.

Also note that I implemented a rule in the as-is system: if more than 10 passengers were waiting for an x-ray team, the team's boarding pass checker would stop processing passengers.

Results

Scenario M - Throughput Statistics


First let's look at throughput. On average, over 16 hours the what-if system screened 18.9% more passengers than as-is. The statistics in the table are important: stochastic simulations don't give a single, simple answer, but rather a range of possibilities described statistically. The table gives the average over 4 weeks, but we can't be certain that would be the average over an entire year. The half-width tells us our 90% confidence range: the actual average is probably between one half-width below the reported average and one above.
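For readers wondering where a half-width comes from, it is the usual t-based confidence interval over the replication averages. A sketch with made-up replication numbers:

```python
# A 90% confidence-interval half-width over independent replication
# averages. The throughput figures here are invented for illustration.
import math
import statistics

reps = [4701.0, 4755.0, 4690.0, 4812.0, 4733.0]   # per-replication throughput
mean = statistics.mean(reps)
t_crit = 2.132                                    # t(0.95, df=4) for a 90% CI
half_width = t_crit * statistics.stdev(reps) / math.sqrt(len(reps))
print(f"average {mean:.1f} +/- half-width {half_width:.1f}")
```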

Note: this is almost exactly the result predicted analytically by the bottleneck analysis. We predicted that in this case the as-is system was running at 83.3% of capacity, and here we show As-Is throughput is 4728.43/5621.57 = 84.1% of What-If throughput. The small discrepancy is probably due to random variation and the warm-up period at the simulation start.

But what has happened to waiting times?


The above graph is a cumulative frequency graph. It reads as follows: The what-if value for 2 minutes is 0.29. This means that 29% of passengers wait less than 2 minutes. The as-is value for 5 minutes is 0.65. This means that 65% of passengers wait less than 5 minutes.

Comparing the two lines we can see that, while we have achieved higher throughput, customers will now experience longer waiting times. Management would have to consider this when making the change. Note that waiting time increased because the load on the system also increased. What happens if we hold the load on the system constant? I adjusted the supply of passengers so that the throughput in both scenarios was the same, and re-ran the simulation:


Now we can see a huge difference! Not only does the new configuration outperform the old in terms of throughput, it is significantly better for customer waiting times.

What about our slow and fast scenarios? We know from our bottle-neck analysis that throughput will not increase, but what will happen to waiting times?


Above is a comparison between as-is and what-if for the fast scenario. The boarding pass checkers are fast compared to the x-ray machines, so in both cases the x-ray machines are nearly overwhelmed and waiting times are long. Why do the curves cross? The passengers fortunate enough to pick a checker with two x-ray machines behind them experience shorter waits thanks to pooling; the others experience worse.

This is a bit subtle, but an interesting result. In this scenario there is no throughput benefit from changing and no average waiting time benefit, but waiting times are less variable.


Finally, we can take a quick glance at our slow S scenario. We know again from our bottleneck analysis that there is no benefit to be had in terms of throughput, but what about waiting times? Clearly a huge difference. The slow checkers are able to keep the single x-ray teams supplied with customers, but are unable to keep the double teams busy. If you're unlucky you end up in a queue for a single x-ray machine, but if you're lucky you are served immediately by one of the double teams.

Summary

To an Operations Research practitioner with experience in discrete event simulation, this example will seem a bit Mickey Mouse. However, it's an excellent and easily accessible demonstration of the benefits one can realise with this tool. A manager whose bottleneck analysis determined that no large throughput increase could be achieved with a reconfiguration might change their mind after seeing this analysis: the second-order benefits, improved customer waiting times, are substantial.

To build the model in this article in a professional setting, you would probably require Arena Basic Edition Plus, as I used the advanced output-to-file feature that is not available in Basic. Arena Basic goes for $1,895 USD. You could accomplish what we have done today with much cheaper products, but it is not simple examples like this that demonstrate the power of products like Arena.



Related articles:
OR not at work: Gatwick Airport security screening (an observation and process map of the inefficiency)
Security Screening: Bottleneck Analysis (a mathematical quantification of the inefficiency)

Tuesday, April 27, 2010

Security Screening: Bottleneck Analysis

Earlier Dawen wrote an article about her recent experience in security screening at Gatwick Airport. I thought this was an opportunity to demonstrate a simple process analysis tool which could be considered a part of Operations Research: Bottleneck Analysis.

At the airport, servers in the two-step security check process were un-pooled and thus dedicated to one another. By this I mean that a security system with four staff checking boarding passes (step 1) and six teams at x-ray machines (step 2) was actually functioning as four separate units rather than as a team. Each unit had a boarding pass checker; two of the units had a single x-ray machine and the other two had two x-ray machines each. The consequence was that the one-to-one units overwhelmed their x-ray teams, forcing the checkers to stop checking boarding passes and sit idle, while the one-to-two units were starved of passengers, as boarding pass checking could not keep up, leaving x-ray machines idle.

We know that this configuration is costing them capacity. A very interesting question is: How much?

A bottleneck analysis is a simple tool for determining a system's maximum potential throughput. It says nothing about total processing time or the number of passengers waiting in the system, but it does determine the rate at which screenings can be completed. Think of emptying a wine bottle upside down: whether it's a half-full bottle of molasses or a full bottle of wine, the maximum rate of flow is determined by the width of the neck (the bottleneck!). The maximum throughput rate of a system is equal to the throughput rate of its bottleneck.

The throughput of the current system is limited by the bottleneck in each unit, each sub-system. In the one-to-one units we know this is the x-ray machine, as it is unable to keep up with supply from upstream and thus limits throughput. In the one-to-two units we know it is the boarding pass checker, as the x-ray machines wait idly for new passengers and are thus limited. It follows that the maximum throughput for the combined system is two times the throughput of a single boarding pass checker plus two times the throughput of a single x-ray machine.

The natural reconfiguration that Dawen alludes to in her article is one where the resources are pooled and the queues are merged. Rather than having x-ray machines dedicated to a single boarding pass checker, passengers completing step 1 are directed to the x-ray machine with the shortest queue. In this way an x-ray machine is only idle if all four boarding pass checkers are incapable of supplying it a passenger, and a boarding pass checker is only idle if all six x-ray machines are overwhelmed.

What is the throughput of this reconfigured system? The throughput is equal to the bottleneck of the system. This is either the four boarding pass checkers as a team if they are incapable of keeping the x-rays busy or the x-ray machines as a group because they are unable to keep up with the checkers. The bottleneck and thus maximum throughput is either equal to four times the throughput of a boarding pass checker (step 1) or six times the throughput of an x-ray machine (step 2), whichever is smaller.

Returning to the exam question, how much capacity is this misconfiguration costing them? At this point we must resort to some mathematical notation, or else words will get the better of us.

Readers uninterested in the mathematics may want to skip to the conclusion.

Let x be the throughput rate of an x-ray machine.
Let b be the throughput rate of a boarding pass checker.

The maximum throughput of the as-is system is thus 2x + 2b (see earlier).
If step 1 is the bottleneck in the reconfigured system then the max throughput is 4b.
If step 2 is the bottleneck of the reconfigured system then the max throughput is 6x.

If 4b < 6x then step 1 is the bottleneck; if 4b > 6x then step 2 is the bottleneck.

If we were managers working for the British Airport Authority (BAA) at Gatwick Airport our work would essentially be done. We could simply drop in our known values for b and x and reach our conclusion. For this article, though, we don't have the luxury of access to that information.

Returning to the exam question again: how can we determine the cost of this misconfiguration without knowing b or x?

We will employ a typical academic strategy:
Let b = αx or equivalently b/x = α.

If 4b < 6x, i.e. α < 1.5, then the throughput of the new system is 4b. If 4b > 6x, i.e. α > 1.5, then the throughput of the new system is 6x.

The throughput of the as-is system is 2b + 2x = 2 α x + 2x.

The fraction of realized potential capacity in the as-is system is the throughput of the as-is system divided by the potential throughput of the reconfigured system.

If α < 1.5 then it is (2αx + 2x) / 4b = (2αx + 2x) / (4αx) = 1/2 + 1/(2α). If α > 1.5 then it is (2αx + 2x) / 6x = 1/3 + α/3.

What are the possible values of α? We know α is at least 1, because otherwise the x-ray machines in the one-to-one units would not be overwhelmed by their boarding pass checker. We know α is less than 2, or else the x-ray machines in the one-to-two units would not have been idle.

We now have a mathematical expression for the efficiency of the current system:

f(α) = 1/2 + 1/(2α) where 1 <= α <= 1.5
f(α) = 1/3 + α/3 where 1.5 <= α <= 2

But what does this look like?

Depending on the relative effectiveness of boarding pass checking and the x-ray machines, the current efficiency is as follows:


If α is 1 or 2, the as-is system is at peak efficiency. If α is 1.5, we are at the worst case and efficiency is 83.3% of optimal.

Conclusion

Based on the graph above, and depending on the relative effectiveness of the boarding pass checkers and the x-ray machines (unknown), the system is running at between 83.3% and 100% efficiency. The most likely value is somewhere in the middle, so there is a very good chance that the configuration of the security system is costing around 10% of possible capacity. To rephrase: a reconfiguration could increase capacity by as much as 20%, but probably by around 11%. In the worst case, a reconfiguration could allow the reallocation of an entire x-ray team, yielding significant savings.

As stated previously, a bottleneck analysis determines the maximum throughput rate, but says nothing about the time to process a passenger or the number of passengers in the system at any one time. We now know that this misconfiguration is costing about 10% of capacity, but there are other costs currently hidden from us. What is the customer experience like, and how could it improve? Is the current system causing unnecessarily long waits for some unlucky customers? Definitely. More advanced methods like queuing theory and simulation, both tools firmly in the Operations Research practitioner's toolbox, will be necessary to answer those questions.




Related articles:
OR not at work: Gatwick Airport security screening (an observation and process map of the inefficiency)
Security Screening: Discrete Event Simulation with Arena (a quantification of the inefficiency through simulation)

Wednesday, April 21, 2010

OR not at work: Gatwick Airport security screening

I fly through London Gatwick airport, whose operation is managed by BAA (British Airport Authority), quite a bit. Usually, I'm quite pleased with my experience of the security screening. However, for my last flight on April 1st from Gatwick to Milan, I was quite intrigued by how poorly it was run. I didn't think it was an April Fool's joke. :) So, after I went through the lines, I sat down, observed, and took some notes.

This was how it was set up (click to enlarge).

To start with, Queue1a & Queue1b were quite long and slow moving. Basic queueing theory and resource pooling principles tell us that one queue feeding multiple servers is almost always better than separate queues for individual servers. I was therefore surprised to see two queues. Roughly 100+ people were waiting in the two queues combined, and I waited at least 15-20 minutes to get to the CheckBoardingPass server.

I wasn't bored though, because the second thing that surprised me was that, within the same queue, one CheckBoardingPass server was processing passengers while the other had to halt from time to time. This was because Queue2a was backed up to the server, while Queue2b&c were almost empty. After I saw how the x-rays were set up, it was easy to see that the unbalanced system was due to the 6 x-rays not being pooled together.

The effect was a long wait for everyone to start with in Queue1a&b; then some waited not at all (like me) in Queue2b/c/d/e, while others waited in a line of 5-15 people in Queue2a/f. Of the 4 CheckBoardingPass ladies, 2 were busier than the others, but all could feel the pressure and frustration of the passengers in Queue1a&b. For the staff manning the x-rays, this meant some were very busy processing passengers, while others were waiting for people to show up.

Also worth mentioning: each x-ray was staffed by 5 people: 1 before it to move the baskets and luggage towards the x-ray, 1 at it to operate the x-ray, 1 after it to move the luggage and baskets away, and 2 (1 male and 1 female) to search passengers who trigger the bleep going through the gate. That seems very labour intensive. If they studied the arrival pattern of passengers needing to be searched, I wonder if they'd save some personnel by pooling at least the searchers across a couple of x-rays (if unions permit!).

We've had this type of problem cracked for some time now, and it is surprising still to see major failures. Gatwick Airport / BAA was obviously doing quite well all the other times I've gone through; it is remarkable how easily a good organisation can perform poorly just by ignoring a few simple queue setup rules. For example, in 2001 my master's program, run by the Centre for Operations Excellence at the University of British Columbia in lovely Vancouver, Canada, did a very good project with the local Vancouver International Airport (YVR) on just that. The project used simulation to come up with easy-to-follow shift rules for the security line-ups so that 90% of passengers would wait less than 10 minutes to go through. In fact, the project caught the attention of the media and was broadcast on the Discovery Channel (how cool is that, and how fitting for OR work). Watch it here. Now come on, BAA, you can do better than this.


Related articles:
Security Screening: Bottleneck Analysis (a mathematical quantification of the inefficiency)
Security Screening: Discrete Event Simulation with Arena (a quantification of the inefficiency through simulation)

Update (9 Oct 2010):
in this article, we erroneously stated that the airport operator was BAA (British Airports Authority). In fact, BAA was forced to sell Gatwick to please regulators seeking to break a monopoly on UK's airports. Our apologies to BAA. The current owners are Global Infrastructure Partners, who also owns 75% of the London City Airport.

Saturday, April 17, 2010

Hollywood stock exchange to become reality?

A year and a half ago, we wrote an article, Forecasting Hollywood movie box office revenue with HSX trading history, based on a talk by Natasha Foutz at the 2008 INFORMS Conference in Washington, DC.

Today I see in the news (Movie futures market approved) that trading of futures related to movies' box office success is about to become a reality. There may be some legal and political obstacles left to surmount, but there may yet be more data to work with in this line of research.

Curiously, the article focuses on the financial aspects of the new instruments rather than the consequences for Operations Research. Market liquidity and hedging for large and independent film financiers is a laudable goal, but think of the statistics!

I would be interested to know what sort of use movie theatres/cinemas could make of these predictions when making operational and strategic decisions regarding film selection and scheduling.

Sunday, March 28, 2010

The 5 acts of the financial crisis - review of The Power of Yes

Ever wanted answers to some of the many questions in your head about the current financial crisis? Want to know how the story started? David Hare's play, The Power of Yes, at the National Theatre in London kept me on the edge of my seat, feverishly taking notes in the dark and hanging onto every word of the 1hr45min stage play. If you have the chance, see it. For someone like me, who hasn't had much to do with finance but would like to understand, this is Investment 101 with interesting, non-monotone lecturers. (Actually, I did take Investment 101 as an MBA module during my master of management in operations research program at the Sauder School of Business in Vancouver, Canada, and the prof was quite fun.)



The story reveals to the audience the complexity of the crisis's origins, pointing fingers mainly at the bankers, the governments and the mathematical models that claim to predict the future. Together they upset the balance of greed and fear on which the financial markets and capitalism survive. The story tells of the current (2007-present) financial crisis in 5 acts: SLUMP.

  1. Sub-prime
  2. Liquidation
  3. Unravelling
  4. Meltdown
  5. Pumping

1. Sub-prime loans (this is the longest part as much history is involved)
Hare starts the storytelling with a mathematical formula (which perked me up right away): the Black-Scholes formula for option pricing. Wikipedia says, "Trillions of dollars of options trades are executed each year using this model and derivations thereof". That's why Hare went straight to it, and throughout the play he refers to the model and its derivations as "claiming to predict the future". Also mentioned was Monte Carlo modelling of the probability of defaulting.
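
For the curious, here is the formula itself, in its standard form for a European call option (included for reference; the play does not, of course, write it out):

$$C = S_0\,N(d_1) - K e^{-rT}\,N(d_2), \qquad
d_1 = \frac{\ln(S_0/K) + (r + \sigma^2/2)\,T}{\sigma\sqrt{T}}, \qquad
d_2 = d_1 - \sigma\sqrt{T}$$

where S_0 is the current share price, K the strike price, r the risk-free interest rate, sigma the volatility, T the time to expiry, and N the standard normal cumulative distribution function. The normality and constant-volatility assumptions baked into N and sigma are exactly the ones discussed below.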

Perhaps I'm biased, as this is operations research in finance, but I would disagree with Hare's statement that the models claim to predict the future. All models are approximations of the real world, not the real world itself, so they always have inherent flaws and limitations. Understanding those limitations is the key to applying model results in the real world; ignoring them is foolish and risky. If you read through the Wikipedia article on the Black-Scholes formula, you will see that it tries to get the same point across. Assumptions such as a 'rational market and behaviour' and 'normality' go out of the window in a financial crisis like a stock market crash, and the model breaks down.
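
To make the "Monte Carlo model of the probability of defaulting" concrete, here is a toy one-factor default simulation. All parameters are invented and real credit models are far more elaborate, but it illustrates the point above: the same 5% individual default rate produces wildly different chances of mass default depending on the assumed correlation, and a crisis is precisely when correlations jump beyond what the model was calibrated to.

```python
import math
import random
from statistics import NormalDist

def p_mass_default(n_loans, p_default, rho, trials=20_000, seed=7):
    """One-factor Monte Carlo: loan i defaults when
    sqrt(rho)*Z + sqrt(1-rho)*E_i < threshold, where Z is a common
    'state of the economy' factor and E_i is loan-specific noise.
    Returns the estimated probability that over 20% of the loans default."""
    random.seed(seed)
    thr = NormalDist().inv_cdf(p_default)   # threshold matching P(default) = p_default
    bad = 0
    for _ in range(trials):
        z = random.gauss(0.0, 1.0)          # one draw of the economy
        defaults = sum(
            math.sqrt(rho) * z + math.sqrt(1 - rho) * random.gauss(0.0, 1.0) < thr
            for _ in range(n_loans)
        )
        if defaults > 0.2 * n_loans:
            bad += 1
    return bad / trials

# Invented portfolio: 100 loans, each with a 5% chance of default.
for rho in (0.05, 0.30, 0.60):
    print(f"correlation {rho:.2f}: "
          f"P(>20% of loans default) is about {p_mass_default(100, 0.05, rho):.3f}")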

Having set the theoretical stage and outlined one 'villain', Hare goes on to illustrate the role of the second 'villain': the governments, in particular the British and US governments. In 1997, the British government made the Bank of England independent in setting monetary policy, and handed regulation of the financial system to a new body, the FSA (Financial Services Authority), so that banks could concentrate on running their business and managing their products. However, Hare argues that this division of responsibility meant no one was responsible for the financial system as a whole; the FSA was more of a neighbourhood watch than a police force for the system. Meanwhile, the financial sector grew to 9% of the UK's economy, paying 27% of the taxes the government collected. It was a big cash cow, and no government wanted to limit its growth. In the US, the Bush administration wanted every American to own his or her home, which only encouraged borrowing.

Then the third 'villain' is revealed: the banker. The banker is greedy, and is driven to be so by targets and "regular incremental growth". The banker encouraged people to buy homes they couldn't afford, and pressured the credit rating agencies to give good ratings so that people could get loans. The division of responsibility meant no one was ensuring the credit ratings were reliable while the banks pushed to make more money by lending to every living and breathing person, regardless of whether they could actually afford it. Sub-prime loans.

2. Liquidation
("The conversion of assets into cash. Just as a company may liquidate an entire subsidiary by selling it to another firm, so too may an investor liquidate by selling a particular type of security.")

Why were the bankers pushing for more loans? Because homes = assets, and assets = more leverage for the banks to lend against. In fact, The Royal Bank of Scotland (RBS) was lending at a 30-to-1 leverage ratio (i.e. lending out £30 for every £1 of capital), which means a fall of little more than 3% in asset values wipes out that capital entirely.

The game of slicing and dicing assets into packages and then trading them with other financial institutions (i.e. selling / liquidating debts) meant that soon enough no one knew what was in those packages - but some of it was sub-prime loans, which were toxic debt. The concept of toxic debt is well explained here: "The easiest way to describe toxic debt is to see it as two separate issues. One, large amounts of loans were improperly given higher credit ratings (implying lower risk of default). The second is that the value of the homes securing these loans has dropped".

3. Unravelling
Credit = Trust. Toxic loans ==> bad credit ==> no trust.

On August 9, 2007, banks lost trust in one another and stopped lending money. One quote from the play says, "Banks don't go bankrupt for any other reason... but that they ran out of money". This brought the financial system to a halt. Capitalism was having a cardiac arrest. Let's just say the media didn't help, driving fear steadily into the masses.

4. Meltdown
Subsequently, the cardiac arrest brought down Lehman Brothers in the US, while in the UK, Northern Rock went down as the nation's first casualty. The fall of Lehman Brothers triggered a world-wide panic and a collapse of 'trust' in the financial system. People in the UK queued to get their money out of the banks. The Brits love to queue: as a quote from the play goes, "When the Brits see a queue, they join it". In the US, the big financial institutions ran into trouble one after another.

5. Pumping
The US government had to bail them out, spending hundreds of billions of dollars - had it not, the fall of the financial sector would have dragged the other sectors down with it. Other governments followed suit, as this is a global financial crisis, and they are now pumping (some would say wasting) money into the economy to try to rescue it.


This wraps up the 1hr45min play, performed with no intermission. I think the title, The Power of Yes, refers to the three 'villains' of the story saying yes to reckless lending, and thereby creating debt-laden societies. What's your interpretation? I hope I've done the play justice. I thoroughly enjoyed it, and learned a great deal that is helping me shape my understanding of the financial crisis. I wonder why my alma mater didn't include any financial applications of operations research in the programme. Is it because they are so easily misunderstood by newcomers? Wouldn't that be a reason for teaching them more broadly?

Wednesday, March 10, 2010

American Doctors' Thoughts on Obama's Health Transformation

Came across this article on Capgemini's Health Transformation blog: Doctors thoughts about Obamacare. It is too funny not to share. Note that it was most likely written by an American, or someone connected to the American healthcare system, given the use of "Anesthesiologists" and "Pediatricians": in the UK, people would say "Anaesthetists" and "Paediatricians", while in Canada it would be "Anaesthesiologists" and "Paediatricians" - much like the choice between "trash", "rubbish" and "garbage". :)

Members of the medical community have weighed in on the new health care plan being developed by the Obama team:

The Allergists thought that it should be scratched,
and the Dermatologists advised not to make any rash moves.

The Gastroenterologists had a bad gut feeling about it,
while the Neurologists thought the Administration had a lot of nerve.

The Obstetricians felt Obama is laboring under a misconception.

Ophthalmologists considered the idea shortsighted.

Pathologists yelled, "Over my dead body!"
while the Pediatricians said, 'Oh, Grow up!'

The Psychiatrists thought the whole idea was madness,
while the Radiologists could see right through it.

Surgeons decided to wash their hands of the whole thing.

The Internists thought it was a bitter pill to swallow,
and the Plastic Surgeons said, "This puts a whole new face on the matter."

The Podiatrists thought it was a step forward,
but the Urologists were pissed off at the whole idea.

The Anesthesiologists thought the idea was a gas,
and the Cardiologists didn't have the heart to say no.

In the end, the Proctologists won out,
leaving the entire decision up to the a**holes in Washington!

Saturday, February 27, 2010

Surveys, statistics and statistically significant economic tremors

Once in a while, an article pops up in the news and makes me go, "oh great, here comes another guy who talks about statistics but knows nothing about it". This article on the BBC made me feel just like that, but luckily only in the first half: How one woman can cause economic boom or bust. Having finished reading it, however, I came to appreciate his point. He portrays how the world, especially in crisis times like these, reacts to a 0.1 percentage point change in the unemployment rate, or a deviation from an economic forecast, without fully understanding the data source the conclusions are drawn from or the statistical significance with which they can be trusted.

The author goes quite a distance to stir his readers' emotions - and to raise my suspicion:

She (the lady in the fictitious story who just lost her job and by chance was surveyed by the Labour Force Survey) is just one of those surveyed. But Eve, unknowingly, is about to move mountains. She will make economies tremble with a 30-minute interview and a cross in a box on a laptop questionnaire.

Vast sums of money will lurch round the world's financial system. Politicians will reel and businesses be broken.


But then he comes back across the line and is in my good books again:
Check the ONS (Office for National Statistics - UK) and it states clearly that the figure is accurate only to 0.2 per cent, most of the time. This means that a rise of 0.1 per cent in the unemployment rate could be consistent with an actual fall in unemployment across the whole economy of 0.1 per cent.
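
That ±0.2 figure is easy to sanity-check with the usual survey margin of error formula. This is a back-of-envelope sketch: the sample size and unemployment rate below are invented, and the real Labour Force Survey uses a clustered design, which widens the interval further.

```python
import math

def moe_95(p, n):
    """95% margin of error for a proportion p estimated from a simple
    random sample of n respondents (normal approximation)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Invented numbers: an 8% unemployment rate measured on 100,000 respondents.
p, n = 0.08, 100_000
print(f"estimate {p:.1%} +/- {moe_95(p, n):.2%}")   # about 8.0% +/- 0.17%
```

So even a six-figure sample cannot tell a 0.1 percentage point rise from a 0.1 point fall - which is precisely the author's point.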

I like his final point the best, suggesting how people should treat survey results: as clues, not as triggers for knee-jerk panic:

... feverish times make attention twitchy. Every piece of evidence about the state of the economy is interpreted, explanations offered, forecasts recalculated, and much is made out of little, perhaps too much.

The difference between a rise and a fall is judged with solemn faces when the truth is the change we observe may not even be there. Economic data is never a set of facts; it is a set of clues, some of which are the red herrings of unavoidable measurement error.

Friday, February 12, 2010

Happy Birthday to ThinkOR.ORG - 2 years old

Happy Birthday to ThinkOR!

We are 2 years old. :)

Feb 10, 2008 was when I first registered and started the blog to promote Operations Research (also because I was looking for any reason not to study for exams). As a newcomer to OR, it bothered me that people did not know what OR was/is. It still bothers me, but a bit less now, knowing that I'm doing something (albeit very little) to try to change that, one article at a time.

I've since gained a few more contributors to ThinkOR.ORG (thank you, guys!), have met a few fellow OR bloggers (hi! *wave*), and have a small group of regular readers. Every month, hundreds of people from all around the world (130 countries, to be exact) visit the blog. How can you not love technology?

In return, I am always on the lookout for interesting topics to write about and share with you all. Now I'm going to pack my backpack optimally, with the objective of minimising space and weight, for my trip to India tonight. I have a hunch there will be a few interesting posts coming up in the next little while. ;)

Happy blogging!

Saturday, February 6, 2010

Bachelor Efficiency.

It seems to be a known fact that confirmed bachelors are at times amazing inventors of time- and labour-saving methods, gizmos and procedures. Here is another one.

Recently I was visiting my bachelor friend John at his house. While I was rummaging in his drawers, searching in vain for a spoon, he proudly showed me his latest labour-saving device (which also explained the lack of spoons in the drawers). He didn't claim the idea as his own; on the contrary, he said it is becoming a trend among the bachelor brethren, but it was the first time I had seen it.

He has bought himself two dishwashers, installed them side by side, and uses them alternately: filling one with dirty dishes while taking clean dishes out of the other. He owns just enough dishes to fill one dishwasher. When he runs out of clean dishes, he switches on the one full of dirty dishes and reverses the process. He reports with extreme satisfaction that he never needs to unload a dishwasher and file the dishes back into the drawers and cupboards. There is an OR lesson in it: a small capital outlay (the second dishwasher) buys out a recurring labour cost (the unloading) forever.

I’ll call it “The Bipolar Dishwashers Method”.

Monday, February 1, 2010

Healthcare system improvement project management: making a big team work

It's tough chairing meetings; tougher chairing a big meeting (10-15 people); and tougher yet chairing a big meeting that is supposed to last an 8-hour day, one day a week, for 6 months. A lot of planning goes into making such a day work with team members ranging from the analytical kind to the "feeling" kind, from the surgical kind to the managerial kind. I'm slowly getting the hang of it, having done it for a couple of months now. The following is mostly common sense, but unless one has been through this kind of work with big teams, it may not seem so obvious as an approach. Thought I'd share it for whatever it's worth.

  • Make sure everyone is doing something - a feeling of usefulness in the group - or else people will feel disengaged.


  • Assuming the natural progression of a project is from problem discovery to analysis to design and implementation, and assuming that everyone in the team needs to participate in all phases, keep telling yourself that as soon as we get to design, things will become more exciting. The analysis phase is not everyone's cup of tea, even though geeks like me find it the most interesting.


  • Spend the time to create a big poster out of a roll of parchment paper. It becomes a living document of all the work done on the project, reminding the team in every meeting of the key aims and the work accomplished so far. It is a pat on the back for work well done, and it keeps the direction visible for the team. Sometimes one can't see the forest for the trees.


  • Big team, big scope - a recipe for getting lost or losing sight easily. Remind the team of the aims frequently, and relate how current tasks contribute to those aims.


  • Identify one lead for each main task to be done in the implementation phase. Give team members enough time to develop their own plans for how to implement, and have them write the documents themselves to instill ownership from the start (do not use admin resources for this). Sometimes it takes 2-3 days just to write and re-write the implementation plans, but the time is worthwhile - not because we need a perfect plan, which is unrealistic, but because it forces people to think through the nitty-gritty of how to get things done and how they would get around specific change management problems. Provide a good example from a colleague of theirs (real examples from real people = trust), but encourage and give them room to be creative. Then everyone on the team should peer review each other's plans against specific review criteria.


  • Once you have all of the above done, the engagement level should be pretty high, as a healthy amount of sweat and tears will have gone into the implementation plans. I bet anything that you won't be able to hold people back from acting on them.

There you have a much happier and more motivated team. There is no sure recipe - this certainly isn't one - but it is working for me so far.

Friday, January 29, 2010

CORU Clinical Operational Research Unit - London health care OR team

CORU - the Clinical Operational Research Unit, based at UCL (University College London) - is a London health care OR team, and the first OR group specialising in health care that I've come across since I moved across the pond from Canada last year.


Needless to say, I was very happy to meet up with Martin Utley, Director of CORU, last week. Thanks for a great chat, Martin. I'm genuinely excited to link up with the CORU group, as I had not met any OR bodies in health in the UK before. Reading through some publications that Martin sent over, I do miss the academic side of Operational Research.


I was told that OR used to exist quite healthily in the UK's health sector (which is very close to the Canadian system). After some reform / re-organisation within the National Health Service (NHS), most of the OR groups within the NHS more or less disappeared. What a pity.

Saturday, January 2, 2010

Psychotherapy and Operational Research / Change Management

Happy New Year to the ThinkOR readers and the Operational Research community.

What better way to celebrate the new year than learning something new!

1. "Although there are many details about our patients that we cannot know, nonetheless, our task is to delimit a system of observation in which we can trace the essential causal chains, and find accessible points, or handles, where interventions can be made."
2. "...It is perhaps clear... that the choice of a system is not only dependent upon the nature of reality, but also upon the means we have to investigate it and the purpose of the inquiry. The larger the system we choose, the safer we can be in assuming that it will include the relevant causal relationships. However, such a system may not be manageable and therefore of no help at all."

At first glance, these could be quotes from an Operational Research book. However, they are in fact from a book titled Integrated Psychotherapy, published in 1979 by wonderful family friends, Doctors Ferdinand Knobloch and Jirina Knobloch, renowned psychiatry professors specialising in psychotherapy. I want to share with you the similarity between a psychotherapist's task and an OR practitioner's.

Never would I have thought that there'd be anything in common between Operational Research and Psychotherapy, the branch of Psychiatry that treats patients with mental health problems through communication and contact, without medication. Wikipedia's definition of integrative psychotherapy is:

Integrative psychotherapy may involve the fusion of different schools of psychotherapy. The word 'integrative' in Integrative psychotherapy may also refer to integrating the personality and making it cohesive, and to the bringing together of the "affective, cognitive, behavioral, and physiological systems within a person".

The first quote from Integrated Psychotherapy, about a psychotherapist's task, made me think of my work immediately. I am currently a project manager at a children's hospital in London working on process improvement and transformation projects. When we set about solving systematic problems within a process to improve it, we cannot possibly understand every detail of that process. Our goal is, exactly as the Knoblochs describe, to find out enough to diagnose the problem and understand why it exists ("trace the essential causal chains"); then we need to identify the levers for improvement so that change management can succeed ("find accessible points, or handles, where interventions can be made").

The second quote, about the choice of a system, rings especially true for simulation projects. The perfect system is the real world itself, but simulating it in full would be impossible.