Showing posts with label Operations Research - Applications.

Sunday, November 23, 2014

Spam gets a personal touch: Human 1, Machine 1

Blogging and spamming practically go hand in hand. The obvious spam is controlled pretty well by the major blogging platforms' filters, thanks to advances in text analysis and machine learning algorithms. They are not perfect, though - or are they? You be the judge in this case.

This could be an example of how creative spammers are at combating algorithms.

Or, it could be an example of a business owner trying to do his own selective SEO (search engine optimization).

An old post on mandatory school uniforms got the following spam:
I think school uniforms must be compulsory in schools because after one time-investment in the uniform, it prevents the child from the traits of social inequality,inferiority complex etc.And If you have decided to buy the uniform, buy it from Wang Uniforms (link removed)

I speculate that a human wrote the comment, because it is a sensible comment, and also because of the grammatical, punctuation and spacing errors.

However, the link, which I removed for this post, does point to a legitimate school uniform maker in the UAE. I suppose there are two possibilities:
1) The uniform business had legitimately read the article, had something genuine to say, and also wanted to promote its own business.
2) The uniform business hired a spammer / mass commenter to do the job for SEO purposes.

I had a bit of a hard time deciding whether this is spam or not. Since I cannot edit the comment to remove the link, I rejected it - especially after I found out that the commenter's profile was some jewelry shop in South East Asia, nothing to do with uniforms.

Algorithms are never perfect. The underlying uncertainty is why we build algorithms at all. Given that I, the human, had trouble judging the authenticity of this comment, I'm glad the machine (the spam filter) didn't just rule it out.

So... Human vs Machine: Human 1, Machine 1?


P.S. Unrelated, but this is quite funny. Don't be fooled by the title.
Visualizing Big Data

Wednesday, August 14, 2013

Everybody likes to predict, but nobody likes being predictable, nor told what to do

The Netflix algorithm is in the news again.
The Science Behind the Netflix Algorithms That Decide What You’ll Watch Next

Netflix finds that rating predictions are no longer as important, trumped by current viewing behaviour, i.e. what you are watching now. Browse through the comments, however, and again you will see a generally negative reaction. Some people really hate being told what to watch, even if it's just a recommendation. Others say Netflix sucks because it recommends things they've watched elsewhere. That sounds like a lack of understanding: if you don't tell Netflix you've watched something already, how could it know?

As "big data" gets more media attention, it is reaching a wider audience who don't yet understand how algorithms work, but only know there are algorithms everywhere in their life, and it's scary to them. The lack of understanding seems to create fear and resentment.

LinkedIn's and Facebook's recommendation systems for helping people find colleagues or friends they may know are generally well received, yet these film recommendation systems aren't. The difference between them might point to the success criteria for rolling out such recommendation systems.

Tuesday, August 13, 2013

Machine Learning in Movie Script Analysis Rouses Angry Reactions

An application of Machine Learning has been covered in the news lately: movie script analysis.
Solving Equation of a Hit Film Script, With Data

They "compare the story structure and genre of a draft script with those of released movies, looking for clues to box-office success". However, the comments reveal that the general population (at least of the commenters) dislikes the concept for fear of anti-creativity.

Comments like these sum up the overall sentiment:
"Using old data to presage a current idea is both terrible and foolish. It is to writing what Denny's is to fine dining - mediocrity run wild."   
"Data crunchers will take the art out of everything. Paint-by-numbers."  

Ouch.
You be the judge whether this is a good application or not.

I tend to side with answers like this one from the comments (sadly it was only 1 of 2 positive comments at the time of my reading; the other was from the CEO of the script analysis business):
"I'm sure people have all sots of assumptions about what audiences like already. This data could be a tool to look deeper into these assumptions. Film makers have always wondered about consumer taste. It is a business. When commerce and art mix, there are inevitable compromises. This tool helps people see possible preferences based on past behavior. Information should never frighten us. It is how this information is applied that most deserves our attention." 

I think it also never helps the image of machine learning practitioners when the journalist paints one with an antagonist's brush, such as "chain-smoking" and "taking a chug of Diet Dr Pepper followed by a gulp of Diet Coke and a drag on a Camel". It reminded me somewhat of another writer's style when covering analytics.

Monday, August 12, 2013

Value chain trumps good design - ColaLife

Babies in Africa suffer and die from diarrhoea, even though it's easily treatable with medicines that cost pennies. The problem is getting the medicine into the mothers' hands - a supply chain problem in rural, sparsely populated areas.

Here comes ColaLife: Turning profits into healthy babies.

Inventing medicine packaging that fits into the gaps between bottles in Coca-Cola crates is ingenious, but understanding the value chain, so that all hands that touch the medicine's supply chain have an incentive to ensure its stock and flow, is even more important.

If there is only one message to take away, I would choose:
"What's in it for me?" 
Always ask this to make sure there is a hard incentive for every player to participate. Free give-aways are often not valued, resulting in poorly managed resources and relatively low success rates. Ample training and advertising, for awareness and effective usage, are also key to product / technology adoption.

Saturday, August 3, 2013

The Slightly Rosier Side of Gambling Analytics

Having posted about the ugly side of analytics - casino loyalty programmes - the Guardian's DataBlog caught my eye with an article on a rosier side of gambling analytics, where a UK technology firm uses machine learning to combat gambling addiction.

Of course, a business is still a business. It needs to be profitable, so there are reasons beyond just "let's be good". Below is my take on the reasons for "them", the gambler clients, and for "us", the casinos. Note that I simply assume the machine learning study is sponsored by the casinos.

Just for "them":

Casinos too have a corporate social responsibility (CSR). Helping pathological gamblers, or identifying them before they become one, is a nice thing to do.

For "them" and for "us":

More for everyone! They get to play more, and we get to profit more. Many people playing a little for a long time is better than a few playing a lot for a short time before ending up on self-exclusion lists. (I'm not sure which is the lesser evil of the two, though...)
That's the business case. It's not all soft and cuddly like the CSR. Well, OK, business cases almost never are.
"If you can help that player have long term sustainable activity, then over the long term that customer will be of more value to you than if they make a short term loss, decide they are out of control and withdraw completely"

Just for "us":

Minimising gambling problems helps keep the country's regulators off the companies' backs, so they don't have to relocate when regulations tighten. Relocation = cost. A lot of it.
Plus,
"And there's also brand reputation for the operator. No company wants to be named in a case study of extreme gambling addiction, to be named in relation to a problem gambler losing their house"


A side note: This reaffirmed why I don't gamble...it's a lose-win situation.

"A lot of casino games operate around a return-to-player rate (RTP) whereby if the customer pays, say £100, the game would be set up to pay back an average of £90. Different games will have different RTPs, and there are a few schools of thought on whether certain rates have different impacts on somebody's likelihood of becoming addicted.Some believe that if you lose really quickly, you'll be out of funds very quickly and will leave, and that a higher RTP will keep people on site, but others disagree"

I highly recommend reading the full article on the DataBlog.


Thursday, August 1, 2013

The Ugly Side of Analytics - Casino Customer Loyalty

While listening to This American Life's episode "Blackjack", Act 2 had me saying in the car, "oh no, they did not!" The "they" is Caesars Entertainment Corporation (the casino), and yes, they have a customer loyalty programme that they use to "attract more customers", claiming it is no different from such programmes in industries like supermarkets, hotels, airlines or dry cleaners.

Well...there is a wee bit of difference.

No one is addicted to dry cleaning.

I am saddened that analytics is used to help the casino loyalty programme and hurt pathological gamblers. The show indicates that the programme identifies "high value customers" using loyalty cards, tracking all spend and results, and then offers them the "right" rewards to keep them coming back. Most addicted gamblers are "high value customers". The bigger the loser, the bigger the reward. Rewards range from drinks and meals, hotel suites and trips to casinos (if you don't live near one), to gifts like handbags and diamonds.

Analytics and Operational Research are supposed to be the Science of Better.

I'd like to call on all professionals in the analytics field to reflect on the moral goodness, or lack thereof, in their work.

There is still hope, though. If casinos can use analytics to identify problem gamblers, then others can too. Given that pathological gambling is a mental health issue, is it time for NGOs or governments to catch up with the technology and get their hands on that loyalty card data?

Friday, December 30, 2011

Operational Research Consulting & Data Journalism

As data becomes more and more accessible, and visualisation tools become more available and user friendly, Data Journalism is heating up. I've been following the Guardian's Data Blog enthusiastically; it is full of interesting information relevant to current affairs, explained with facts and data.

This article talks about the Guardian's 10-point guide to data journalism. I particularly like point 5:
Data journalism is 80% perspiration, 10% great idea, 10% output
The Prezi under point 5 explains the process of how data is used to support news: the angles to consider when mashing datasets together, the technical challenges of working with data, and the iterative calculation and QA process, which finally gets turned into beautiful output with the various (mostly free) visualisation tools.

This is practically the same process that an Operational Research consulting project takes - or any application of OR or Science in general:
  • Understand what the problem/question is
  • Create a hypothesis to be proven or disproved
  • Define what data is needed for the quest
  • Get the data
  • Clean it, and manipulate/wrangle with it so it's usable for analysis
  • Analyse/calculate to come to some conclusion - hence proving or disproving the hypothesis
  • Compare it to subject matter experts' view on what the likely answer should be (sanity check)
  • Refine the analysis until satisfied
  • Shape the output message so it can be easily understood by the audience
  • Communicate the findings
  • All throughout the process, keep communicating to the audience to make sure they are engaged and understand (principle-wise) what you're trying to do, so that they are not unpleasantly surprised when the final answer is presented
  • Best yet, to ensure smooth change management if your solution is to be implemented, work closely with the end users from the start of designing the solution, and then implement and test, so that they believe in the solution because they were part of the creation process.
As the Flowing Data blog points out, this is what statisticians do. I will add that it is what Science does in general. I will also say that in practice, the first step, understanding what the problem/question is, often takes 70-80% of the time. The technical 'doing' that follows, the part our academic institutions so thoroughly prepare us for (and that preparation is needed), is comparatively easy in practice.

For those interested in the how of data journalism, read this about the work that went into reporting on the 2011 London Riots. Fascinating social media analytics at work. Not easy. Impressive and very interdisciplinary.

P.S. Most of this post has been sitting as draft since the summer, hence referencing 'old' news. It's still relevant, so why not.

Sunday, July 31, 2011

An Alternative Way to Fly (as long as expectations are managed)

The purpose of this post is to share the discovery of an alternative way of operating an airline, in terms of flight schedules and routes.



No matter how much airlines degrade their service standards these days in the West, I think it's fair to say that most of us still believe that most airlines *intend* to:
  • Take off on-time
  • Land on-time
  • Fly us from A to B as the ticket says, without surprise stops
  • (Oh, and have toilets, of course)

On a recent trip to Ethiopia, we were shown a rather different way of operating an airline. It contradicts all of the above, but it works. We took 4 internal flights.

Here is how we experienced them first hand:
  • 1 of the 4 left on time as per the ticket, and even got us there early (bonus!), because...
  • None of the 4 flights flew the original path it said it would: stopovers were skipped to go direct instead, or stopovers were added to direct flights at the last minute
  • None of them arrived late, because...
  • Some of them took off earlier than stated
  • Additionally, the air stewardesses were lovely, and they gave passengers snacks and drinks (*gasp* what novelty!)
  • To their credit, they did try to inform passengers of the changes a couple of days ahead of the flight (in our case by email, which we only read after we got back to London).
  • They also tell passengers to double-check the flight times a couple of days before, to catch any late changes.
(For your curiosity: the international flights from London to Addis Ababa were quite standard. The only oddity was that they weighed everyone's carry-on luggage at the gate, because it's apparently a popular flight on which to bring lots of stuff with you!)

IMHO, an airline would play this game for the following reasons (we suspect - unconfirmed):
  • It wants to minimise costs - mainly fuel in this case.
  • It has 1-2 planes that fly in circles to cover off a handful of popular destinations.
  • As the airline gets more and more requests for seats in the form of purchased tickets, it faces an optimisation problem: fly all its customers to their expressed destinations at minimum cost. The best lever is probably re-shuffling the schedule. For instance, if a plane is hopping from A to B to C in sequence, where B is closer to A than C is, and we discover 2 days before the flight that the plane is filled with 2/3 of passengers going to C and 1/3 going to B, then flying A->C->B may turn out cheaper than A->B->C. What about customers wishing to go from B to C? We hear that the airline is known for cancelling flights as well. Luckily, we didn't experience this.
This way of operating an airline is possible, because:
  • It is a monopoly.
  • The number of flights is small, so it's easy to manage change.
  • Customers expect it and adjust flying behaviour accordingly (i.e. always check the flight times before the day of flight, and always leave wiggle room before and after the flight).
  • For foreigners used to typical Western airline service (i.e. expecting it to take off and land on time and fly the stated route), the low price justifies it and keeps people from complaining; instead they have a laugh (or write a blog post!) about it.
  • It doesn't call itself "Precision Airline" (the Tanzanian airline), and can afford to deviate a little. 8-)
P.S. If you are planning to visit Ethiopia, and intend to fly within the country, you may want to consider buying the tickets within the country rather than online. It is significantly cheaper due to price control. This is true as of spring 2011, so double check this before you travel.

Monday, February 7, 2011

I heart smartphones and podcast favourites

I heart smartphones. They are the symbol of the new world, where the world is at your fingertips and in your pocket! There is so much information out there that digesting it is a big quest. I'd love to have the time to sit down and browse the net for a couple of hours every day to catch up on all the news and events, but now I can do all this while on the move.



I am an owner of an HTC Hero running Android. It is the only digital device I carry in my handbag (other than my obligatory work phone). Living in a busy city like London means I spend a fair amount of time in transit. If you are a Google fan like me, then Google Reader and Google Listen will be your good friends. My favourite transit activity when I'm not walking about is catching up on the news and my favourite blogs through the RSS reader. My favourite transit activity when I am walking about is plugging into one of the following podcasts, which keep me informed and entertained. If this is not optimising your time, then I don't know what would be. I guess the next step is to jog to work while listening to podcasts: information downloading and calorie offloading all at once!

  • LSE lectures and events: London School of Economics half-hour to hour-long lectures or guest speakers plus Q&A sessions (frequent publishing of events)

  • The Economist: I like the magazine, but there is so much content to digest. The podcasts do a great job summarising the highlights (weekly publishing or more frequent ones available too)

  • NPR News: short bursts of news that keep me informed of the North American highlights (hourly publishing)

  • Science of Better: Operations Research podcasts/interviews by INFORMS (monthly publishing)

  • More or Less: BBC radio programme making sense or debunking the numbers behind the news

  • Freakonomics: spin off by the authors of the ever so popular Freakonomics book/movie/blog/etc.

What are some of your favourite podcasts?


Aside from being my RSS reader and podcast player, my smartphone is also my:
- phone (first and foremost)
- email
- calendar
- access to the internet
- Skype to call anyone around the world
- instant messaging to keep in touch with friends
- handy document storage
- camera / video cam
- GPS and compass
- maps (offline maps too)
- ebook reader
- notebook (takes my hand scribbling too)
- news reader
- scanner
- games when I'm bored waiting in a queue somewhere
- MP3 player
- all the other things that come with a phone (alarm clock, calculator, voice recorder, etc.)
- and thousands of other applications available for download (often for free) that keep my life organised and what not

Saturday, October 9, 2010

Expedia Revenue Management at Check-out or Rule Compliance

We have all shopped online for something, only to be told after making the purchase decision that it is no longer available, or no longer available at that price. This often happens when buying flights: prices can change minute to minute, and you can be left with a much higher ticket price that makes you abandon your purchase. Disappointment all around.

However, the opposite happens from time to time as well! The price of a London to Seattle flight, when I found it, was £649.07 (including all fees). I clicked to start jumping through all the purchase hoops, but a couple of steps into the check-out process, the price flagged up, rather alarmingly, as £616.07. That's a 5% decrease. (See, I'm not making it up!)





I was pleasantly surprised, of course. But why would they do that?

I've got 2 suspicions.

1. Revenue Management / Yield Management / Consumer Psychology
In the weeks prior to this screen capture, I had been to the site a few times already, looking for the exact same flight. Even though I wasn't logged in, I'd venture to guess that the site looked up my cookies and knew I had been looking for these flights. It should therefore know that I was a likely buyer rather than a window shopper (PC pun intended). I had been at the check-out stage before, but eventually abandoned the shopping cart. It would be quite logical for the site to entice me with a lower price as a 'pleasant surprise' to finally get me to spill my moola. Not to mention the positive impression it leaves with the shopper (look what I'm doing now - free advertising!).

However, is it worth the 5% price drop? How does Expedia decide that 5% is the right balance of customer incentive and revenue loss? I was already a willing customer, ready to bite. Isn't it just giving the 5% away for free? In my case, it's difficult to say whether the move gained my loyalty to Expedia, because I was already a frequent visitor and buyer there. It may have reinforced my loyalty, though. It would be very interesting to analyse a few years' worth of purchase and cart-abandonment data for customers this has happened to, versus a control group. Would we observe a lower abandonment rate, driving higher lifetime revenue per customer?
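One back-of-envelope way to frame that question: the discount only pays off if it lifts the completion probability enough. A minimal sketch with entirely made-up probabilities (Expedia's actual figures are unknown to me):

```python
# Hypothetical numbers: is a surprise checkout discount worth it?
full_price, discounted = 649.07, 616.07

p_buy_full = 0.70      # assumed P(purchase) at the full price
p_buy_discount = 0.80  # assumed P(purchase) with the surprise discount

print(full_price * p_buy_full)        # ~454.35 expected revenue per visitor
print(discounted * p_buy_discount)    # ~492.86 expected revenue per visitor

# Break-even: the discount needs a relative lift in completion
# probability of full_price / discounted - 1, here about 5.4%.
print(full_price / discounted - 1)    # ~0.054
```

With these made-up numbers the 5% discount is worth giving; with a smaller lift in completion probability it would not be. This is exactly the kind of question the control-group analysis above could answer.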

2. Airline price adjustment rule compliance
There could be a regulatory rule in the online airline pricing world to protect consumers, requiring the vendor to notify the buyer of last-minute price changes before the final purchase is completed. I don't know whether such a rule exists, but it is possible. It does, however, sound extremely difficult for regulators to enforce and monitor.

I personally think it's more the former than the latter. One way to test the real reason behind the price drop would be to see whether it is always a 5% decrease. Time to do some more flight window shopping!



P.S. In a previous article, where we observed operational inefficiencies at London's Gatwick Airport, we erroneously stated that the airport operator was BAA (British Airports Authority). In fact, BAA was forced to sell Gatwick to please regulators seeking to break a monopoly on the UK's airports. Our apologies to BAA. The current owner is Global Infrastructure Partners, which also owns 75% of London City Airport.



Update:
Responding to two unconstructive comments, one of which was downright rude and was deleted, we thought we would add to this article.

The commenters suggest that Expedia is not a price setter but just a re-seller, which makes the first possibility above unlikely. That said, the question still stands: what's going on here? If the prices Expedia shows you when you search are cached rather than live, that seems to me to be a surprising shortcoming. If they are live, why offer a lower price to someone who appears to have already made the decision to purchase?

There are probably a number of factors at play that someone from the online travel community could answer.

If I were reselling through Expedia, I would want the price-updating algorithm to charge the higher of the two prices at the point of payment, i.e. more profit. Both Expedia and the vendor are motivated to collect a higher price, and therefore a higher commission as a percentage of the selling price.

The commenters may be quite correct in saying that Expedia doesn't set the price, but merely re-sells at whatever price the vendor names. That's why we said there were two possibilities, the second having nothing to do with revenue management. However, if Expedia is not practising revenue management in this way, it probably should at least experiment with it. Its commission represents headroom within which it can optimize, and the goal, after all, is not to make the greatest profit on each sale, but the greatest profit across all possible sales.

Thursday, May 13, 2010

Security Screening: Discrete Event Simulation with Arena

Simulation is a powerful tool in the hands of Operations Research practitioners. In this article I demonstrate the use of discrete event process simulation, extending the bottleneck analysis I wrote about previously.

A few days ago I wrote an article demonstrating how bottleneck analysis can be used to compare two different configurations of the security screening process at London Gatwick Airport. Bottleneck analysis is a simple process analysis tool that sits in the toolbox of Operations Research practitioners. I showed that a resource-pooled, queue-merged process might screen as many as 20% more passengers per hour, and that the poor as-is configuration was probably costing the system something like 10% of its potential capacity.

The previous article is good background reading, but to summarize briefly: security screening happens in two steps, a check of the passenger's boarding pass followed by the x-ray machines. Four people checking boarding passes and 6 teams working x-ray machines were organized into 4 sub-systems, each with one checker and either one or two x-ray teams. The imbalance within each sub-system forced a resource to be under-utilised, and Dawen quite rightly pointed out that by joining the entire system together, so that all 6 x-ray machines effectively serve a queue fed by all 4 checkers, a more efficient result could be achieved. We will look at these two key scenarios, comparing the As-Is system with the What-If system.

The bottleneck analysis quantified the capacity being lost to this inefficiency, but as I alluded, that is not the entire story. Another big impact is on passenger experience, that is, time spent waiting in queues. To study queuing times, we turn to another Operations Research tool: simulation, specifically process-driven discrete event simulation. (There may also be an opportunity to apply Queuing Theory, another Operations Research discipline, but we won't be doing that here today.)

Discrete Event Simulation

Discrete Event Simulation (DES) is a computer simulation paradigm in which a model is made of the real-world process, focusing on the discrete, indivisible things in the system: the entities (passengers) and resources (boarding pass checkers and x-ray teams). "Event" refers to the driving mechanism of the model: a list of events processed in chronological order, with events typically spawning new events to be scheduled. (The alternative driving mechanism is fixed timesteps, as in continuous, system dynamics simulations.) A DES model lets you go beyond the simple mathematics of bottleneck analysis: by explicitly tracking individual passengers as they go through the process, important statistics can be collected, such as utilisation rates and waiting times.

During my master's degree, the simulation tool at the heart of our simulation courses was Arena from Rockwell Automation, so I tend to reach for it without even thinking. I have previously used Arena in my work for Vancouver Coastal Health, simulating ultrasound departments, and there are plenty of others associated with the Sauder School of Business using Arena (example, example). Arena is an excellent tool and I've used it here for this article. I hope to test other products on this same problem in the future and publish a comparison.

In the Arena GUI you put logical blocks together to build the simulation in the same way that you might build a process map. Intuitively, at the high level, an Arena simulation reads like a process map when in actuality the blocks are building SIMAN code that does the heavy lifting for you.

The Simulation

Here's a snapshot of the as-is model of the Gatwick screening process that I built for this article:


Passengers decide to go through screening on the left, select the boarding pass checker with the shortest queue, are checked, proceed to the dedicated x-ray team(s) and eventually all end up in the departures hall.

An x-ray team is assumed to take a minute on average to screen each passenger. This is very different from taking exactly a minute per passenger: stochastic (random) processing times are an important source of dynamic complexity in queuing systems, and without modelling that randomness you can reach totally wrong conclusions. For our purposes we have assumed an exponentially distributed processing time with a mean of 1 minute. In practice we would grab our stop-watches and collect the data, but as outsiders we would probably get arrested for doing that. Suffice it to say that this is a very reasonable assumption; exponential distributions are often used to express service times.

As in the previous article, we are uncertain about the relationship between the throughput of boarding pass checkers and that of x-ray teams. We will consider three possibilities, where the processing time of a boarding pass checker is exponentially distributed with a mean of 60 seconds (S, slow), 40 seconds (M, medium) or 30 seconds (F, fast). (These are α = 1, 1.5 and 2 from the previous article.) In the fast F scenario, our bottleneck analysis says there should be no throughput increase What-If vs. As-Is, because all x-ray machines are already fully utilised in the As-Is system. In the slow S scenario there would similarly be no throughput benefit, because all boarding pass checkers would be fully utilised in the As-Is system. Thus the medium M scenario is our focus, but the analysis may reveal some interesting results for F and S.

We're focused here on system resources and configuration and how they determine throughput, but we can't forget about passenger arrivals: the number of passengers actually requiring screening is the most significant limitation on system throughput. I fed the system six passengers per minute, the capacity of the x-ray teams. This ensured both that the x-ray teams had the potential to be 100% utilised and that they were never overwhelmed, keeping x-ray queuing times comparable between scenarios.

I ran 28 replications of the simulation (four weeks), letting each replication run for 16 hours (a working day). We need to run the simulation many times because of the stochastic element: since the events are random, a different set of random outcomes leads to a different result, so we must run many replications to study the range of possible results.

Also note that I implemented a rule in the as-is system: if more than 10 passengers were waiting for an x-ray team, the boarding pass checker feeding that team would stop processing passengers.
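For readers without an Arena licence, the experiment can be sketched with open-source tools. Below is a minimal Python/SimPy version of the pooled what-if configuration under scenario M. The parameters mirror the assumptions above, but the structure and names are mine, not the Arena model that produced the results that follow, and it omits the as-is blocking rule just described.

```python
# Minimal SimPy sketch of the pooled "what-if" screening process (scenario M):
# exponential service times, arrivals fed at x-ray capacity.
import random
import simpy

CHECK_MEAN = 40.0        # seconds per boarding-pass check (scenario M)
XRAY_MEAN = 60.0         # seconds per x-ray screening
ARRIVAL_MEAN = 10.0      # seconds between arrivals (6 passengers/minute)
RUN_SECONDS = 16 * 3600  # one 16-hour working day

def passenger(env, checkers, xrays, xray_waits):
    # Queue for any free checker, then for any free x-ray team.
    with checkers.request() as req:
        yield req
        yield env.timeout(random.expovariate(1.0 / CHECK_MEAN))
    reached_xray_queue = env.now
    with xrays.request() as req:
        yield req
        xray_waits.append(env.now - reached_xray_queue)  # queuing time only
        yield env.timeout(random.expovariate(1.0 / XRAY_MEAN))

def arrivals(env, checkers, xrays, xray_waits):
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
        env.process(passenger(env, checkers, xrays, xray_waits))

env = simpy.Environment()
checkers = simpy.Resource(env, capacity=4)  # pooled boarding-pass checkers
xrays = simpy.Resource(env, capacity=6)     # pooled x-ray teams
xray_waits = []
env.process(arrivals(env, checkers, xrays, xray_waits))
env.run(until=RUN_SECONDS)

print(f"screened: {len(xray_waits)} passengers")
print(f"mean x-ray queue: {sum(xray_waits) / len(xray_waits) / 60:.1f} min")
```

Replications are just a loop around this with different seeds (random.seed(i)); the as-is model would use four separate checker/x-ray resource pairs instead of the two pooled resources.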

Results

Scenario M - Throughput Statistics


First let's look at throughput. On average, over 16 hours, the what-if system screened 18.9% more passengers than as-is. The statistics in the table are important: stochastic simulations don't give a single, simple answer, but rather a range of possibilities described statistically. The table gives the average over 4 weeks, but we can't be certain that would be the average over an entire year. The half-width tells us our 90% confidence range: the actual average is probably between one half-width below the reported average and one half-width above.

Note: this is almost exactly the result predicted analytically with the bottleneck analysis. We predicted that in this case the system was running at 83.3% of capacity, and here we show As-Is throughput is 4728.43/5621.57 = 84.1% of What-If throughput. The small discrepancy is probably due to random variation and the warm-up period at the start of the simulation.

But what has happened to waiting times?


The above graph is a cumulative frequency graph. It reads as follows: The what-if value for 2 minutes is 0.29. This means that 29% of passengers wait less than 2 minutes. The as-is value for 5 minutes is 0.65. This means that 65% of passengers wait less than 5 minutes.
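If you want to reproduce such a curve from your own simulation output, it is just an empirical cumulative distribution; a sketch, reusing the xray_waits list from the SimPy example above:

```python
# Fraction of passengers waiting less than a threshold (empirical CDF).
def cum_freq(waits_seconds, minutes):
    return sum(w < minutes * 60 for w in waits_seconds) / len(waits_seconds)

for m in (1, 2, 5, 10):
    print(m, round(cum_freq(xray_waits, m), 2))
```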

Comparing the two lines, we can see that while we have achieved higher throughput, customers now face longer waiting times. Management would have to consider this when making the change. Note that waiting times increased because the load on the system also increased. What happens if we hold the load constant? I adjusted the supply of passengers so that the throughput in both scenarios was the same, and re-ran the simulation:


Now we can see a huge difference! Not only does the new configuration outperform the old in terms of throughput, it is significantly better for customer waiting times.

What about our slow and fast scenarios? We know from our bottleneck analysis that throughput will not increase, but what will happen to waiting times?


Above is a comparison between as-is and what-if for the fast scenario. The boarding pass checkers are fast compared to the x-ray machines, so in both cases the x-ray machines are nearly overwhelmed and waiting times are long. Why do the curves cross? In the as-is system, passengers fortunate enough to pick a checker with two x-ray machines behind it enjoy shorter waits thanks to pooling, while the others experience worse.

This is subtle, but an interesting result: in this scenario there is no throughput benefit from changing and no average waiting time benefit, but waiting times become less variable.


Finally, we can take a quick glance at our slow S scenario. We know, again from the bottleneck analysis, that there is no benefit to be had in terms of throughput, but what about waiting times? Clearly there is a huge difference. The slow checkers can provide plenty of customers for the single x-ray teams, but are unable to keep the double teams busy. If you're unlucky you end up in a queue for a single x-ray machine, but if you're lucky you are served immediately by one of the double teams.

Summary

To an Operations Research practitioner with discrete event simulation experience, this example will seem a bit Mickey Mouse. However, it's an excellent and easily accessible demonstration of the benefits one can realize with this tool. A manager whose bottleneck analysis determined that no large throughput increase could be achieved with a reconfiguration might change their mind after seeing this analysis: the second-order benefits, improved customer waiting times, are substantial.

To build the model for this article in a professional setting you would probably require Arena Basic Edition Plus, as I used the advanced output-to-file feature that is not available in Basic. Arena Basic goes for $1,895 USD. You could easily accomplish what we have done today with much cheaper products, but it is not simple examples like this that demonstrate the power of products like Arena.



Related articles:
OR not at work: Gatwick Airport security screening (an observation and process map of the inefficiency)
Security Screening: Bottleneck Analysis (a mathematical quantification of the inefficiency)

Tuesday, April 27, 2010

Security Screening: Bottleneck Analysis

Earlier Dawen wrote an article about her recent experience in security screening at Gatwick Airport. I thought this was an opportunity to demonstrate a simple process analysis tool which could be considered a part of Operations Research: Bottleneck Analysis.

At the airport, servers in the two-step security check process were un-pooled and thus dedicated to one another. By this I mean that a security system with four staff checking boarding passes (step 1) and six teams at x-ray machines (step 2) was actually functioning as four separate units rather than as a team. Each unit had a boarding pass checker; two of the units had a single x-ray machine, and the other two had two x-ray machines each. The consequence was that the one-to-one units overwhelmed their x-ray teams, forcing the checkers to stop checking boarding passes and remain idle, while the one-to-two units were starved of passengers, as boarding pass checking could not keep up, leaving x-ray machines idle.

We know that this configuration is costing them capacity. A very interesting question is: How much?

A bottleneck analysis is a simple tool for determining a system's maximum potential throughput. It says nothing about total processing time or the number of passengers waiting in the system, but it does determine the rate at which screenings can be completed. Think of emptying a bottle upside down: whether it's a half-full bottle of molasses or a full bottle of wine, the maximum rate of flow is determined by the width of the neck (the bottleneck!). The maximum throughput rate of a system is equal to the throughput rate of its bottleneck.

The throughput of the current system is limited by the bottleneck in each unit, i.e. each sub-system. In the one-to-one units we know this is the x-ray machine, as it is unable to keep up with supply from upstream and thus limits throughput. In the one-to-two units we know it is the boarding pass checker, as the x-ray machines wait idly for new passengers. It follows that the maximum throughput of the combined system is two times the throughput of a single boarding pass checker plus two times the throughput of a single x-ray machine.

The natural reconfiguration that Dawen alludes to in her article is one where the resources are pooled and the queues merged. Rather than having x-ray machines dedicated to a single boarding pass checker, passengers completing step 1 are directed to the x-ray machine with the shortest queue. This way, an x-ray machine is only idle if all four boarding pass checkers are incapable of supplying it a passenger, and a boarding pass checker is only idle if all six x-ray machines are overwhelmed.

What is the throughput of this reconfigured system? It equals the throughput of the system's bottleneck: either the four boarding pass checkers as a group, if they are incapable of keeping the x-rays busy, or the x-ray machines as a group, if they are unable to keep up with the checkers. The maximum throughput is thus either four times the throughput of a boarding pass checker (step 1) or six times the throughput of an x-ray machine (step 2), whichever is smaller.

Returning to the exam question - how much capacity is this misconfiguration costing them? - at this point we must resort to some mathematical notation, or else words will get the better of us.

Readers uninterested in the mathematics may want to skip to the conclusion.

Let x be the throughput rate of an x-ray machine.
Let b be the throughput rate of a boarding pass checker.

The maximum throughput of the as-is system is thus 2x + 2b (see earlier).
If step 1 is the bottleneck in the reconfigured system then the max throughput is 4b.
If step 2 is the bottleneck of the reconfigured system then the max throughput is 6x.

If 4b < 6x then step 1 is the bottleneck; if 4b > 6x then step 2 is the bottleneck.

If we were managers working for the British Airport Authority (BAA) at Gatwick Airport our work would essentially be done. We could simply drop in our known values for b and x and reach our conclusion. For this article, though, we don't have the luxury of access to that information.

Returning to the exam question again: how can we determine the cost of this misconfiguration without knowing b or x?

We will employ a typical academic strategy:
Let b = αx or equivalently b/x = α.

If 4b < 6x, i.e. α < 1.5, then the throughput of the new system is 4b = 4αx. If 4b > 6x, i.e. α > 1.5, then the throughput of the new system is 6x.

The throughput of the as-is system is 2b + 2x = 2αx + 2x.

The fraction of realized potential capacity in the as-is system is the throughput of the as-is system divided by the potential throughput of the reconfigured system.

If α < x =" 1/2"> 1.5 then it is (2 α x + 2 x) / 6x = 1/3 + α/3

What are the possible values of α? We know α is at least 1, because otherwise the x-ray machines in the one-to-one units would not be overwhelmed by a more productive boarding pass checker. We know α is at most 2, or else the x-ray machines in the one-to-two units would not have been idle.

We now have a mathematical expression for the efficiency of the current system:

f(α) = 1/2 + 1/(2α) where 1 ≤ α ≤ 1.5
f(α) = 1/3 + α/3 where 1.5 ≤ α ≤ 2

But what does this look like?

Depending on the relative effectiveness of boarding pass checking and the x-ray machines, the current efficiency is as follows:


If α is 1 or 2, the as-is system is at peak efficiency. If α is 1.5, we are at the worst case and efficiency is 83.3% of optimal.
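The curve is easy to verify numerically. A quick sketch of the piecewise efficiency function derived above (the article's graph was produced separately):

```python
# Realized fraction of potential capacity for the as-is configuration.
def efficiency(alpha: float) -> float:
    if not 1.0 <= alpha <= 2.0:
        raise ValueError("alpha is only meaningful between 1 and 2")
    if alpha <= 1.5:                  # checkers are the what-if bottleneck
        return 0.5 + 1.0 / (2.0 * alpha)
    return 1.0 / 3.0 + alpha / 3.0    # x-ray teams are the bottleneck

for a in (1.0, 1.25, 1.5, 1.75, 2.0):
    print(f"alpha = {a:.2f}: {efficiency(a):.1%}")
# -> 100.0%, 90.0%, 83.3%, 91.7%, 100.0%
```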

Conclusion

Based on the graph above, and depending on the (unknown) relative effectiveness of the boarding pass checkers and the x-ray machines, the system is running at between 83.3% and 100% efficiency. The most likely value is somewhere in the middle, so there is a very good chance that the configuration of the security system is costing about 10% of possible capacity. To rephrase: a reconfiguration could increase capacity by as much as 20%, but probably by around 11%. In the worst case, a reconfiguration could even free up an entire x-ray team, yielding significant savings.

As stated previously, a bottleneck analysis determines the maximum throughput rate, but it says nothing about the time to process a passenger or the number of passengers in the system at any one time. We now know that this misconfiguration is costing about 10% of capacity, but there are other costs currently hidden from us. What is the customer experience like, and how could it improve? Is the current system causing unnecessarily long waiting times for some unlucky customers? Definitely. More advanced methods like Queuing Theory and Simulation are needed to answer those questions, both tools firmly in the toolbox of Operations Research practitioners.




Related articles:
OR not at work: Gatwick Airport security screening (an observation and process map of the inefficiency)
Security Screening: Discrete Event Simulation with Arena (a quantification of the inefficiency through simulation)

Wednesday, April 21, 2010

OR not at work: Gatwick Airport security screening

I fly through London Gatwick Airport quite a bit; its operation is managed by BAA (British Airports Authority). Usually I'm quite pleased with my experience of security screening there. However, for my last flight, on April 1st from Gatwick to Milan, I was quite intrigued by how poorly it was run. I didn't think it was an April Fool's joke. :) So, after I went through the lines, I sat down, observed, and took some notes.

This was how it was set up.

To start with, Queue1a & Queue1b were quite long and slow-moving. Basic queueing theory and resource pooling principles tell us that one queue feeding multiple servers is almost always better than separate queues for individual servers (see the quick sketch below). I was therefore surprised to see 2 queues. Roughly 100+ people were waiting in the 2 queues combined, and I waited at least 15-20 minutes to get to the CheckBoardingPass server.
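As a quick illustration of the pooling principle (my own toy numbers, not Gatwick data), the standard Erlang C formula lets us compare two separate M/M/1 queues against one pooled M/M/2 queue at the same per-server load:

```python
# Mean time in queue (Wq) for an M/M/c system via the Erlang C formula.
from math import factorial

def mmc_wait(lam, mu, c):
    a = lam / mu             # offered load in Erlangs
    rho = a / c              # per-server utilization, must be < 1
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                + a**c / (factorial(c) * (1.0 - rho)))
    p_wait = a**c / (factorial(c) * (1.0 - rho)) * p0
    return p_wait / (c * mu - lam)

lam, mu = 0.8, 1.0               # arrivals and services per minute
print(mmc_wait(lam, mu, 1))      # separate queues: Wq = 4.0 minutes
print(mmc_wait(2 * lam, mu, 2))  # one pooled queue: Wq ~ 1.78 minutes
```

Same servers and same total demand, but the pooled queue more than halves the average wait, which is why the two separate queues surprised me.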

I wasn't bored though, because the second thing that surprised me was that, within the same queue, one CheckBoardingPass server was processing passengers while the other had to halt from time to time. This was because Queue2a was backed up to the server, while Queue2b&c were almost empty. After I saw how the x-rays were set up, it was easy to see that the unbalanced system was due to the 6 x-rays not being pooled together.

The effect was a long wait for everyone to start with in Queue1a&b; then some waited not at all (like me) in Queue2b/c/d/e, while others waited in a line-up of 5-15 people in Queue2a/f. Of the 4 CheckBoardingPass ladies, 2 were busier than the others, but all could feel the pressure and frustration of the passengers in Queue1a&b. For the staff manning the x-rays, it meant some were very busy processing passengers, while others were waiting for people to show up.

Also worth mentioning: each x-ray was staffed by 5 people: 1 before it to move the baskets and luggage towards the x-ray, 1 at it to operate the x-ray, 1 after it to move the luggage and baskets away, and 2 (1 male and 1 female) to search the passengers going through the gate if they trigger the bleep. That seems very labour intensive. If they studied the arrival pattern of passengers needing to be searched, I wonder if they could save some personnel by pooling at least the searchers across a couple of x-rays (if unions permit!).

This type of problem has been cracked for some time now, and it is surprising to still see major failures. Gatwick Airport / BAA was obviously doing quite well all the other times I've gone through, which shows how easily a good organisation can perform poorly just by ignoring a few simple queue set-up rules. For example, in 2001 my master's programme, run by the Centre for Operations Excellence at the University of British Columbia in lovely Vancouver, Canada, did a very good project on exactly this with Vancouver International Airport (YVR). The project used simulation to come up with easy-to-follow shift rules for the security line-ups so that 90% of passengers would wait less than 10 minutes to go through. In fact, the project even caught the attention of the media and was broadcast on the Discovery Channel (how cool is that, and how fitting for OR work). Watch it here. Now come on, BAA, you can do better than this.


Related articles:
Security Screening: Bottleneck Analysis (a mathematical quantification of the inefficiency)
Security Screening: Discrete Event Simulation with Arena (a quantification of the inefficiency through simulation)

Update (9 Oct 2010):
In this article, we erroneously stated that the airport operator was BAA (British Airports Authority). In fact, BAA was forced to sell Gatwick to please regulators seeking to break a monopoly on the UK's airports. Our apologies to BAA. The current owner is Global Infrastructure Partners, which also owns 75% of London City Airport.

Saturday, February 6, 2010

Bachelor Efficiency.

It seems to be a known fact that confirmed bachelors are at times amazing inventors of time and labor saving methods, gizmos, and procedures. Here is another one.

Recently I was visiting my bachelor friend John at his house. While rummaging in his drawers, searching in vain for a spoon, I was proudly shown his latest labor-saving device (which also explained the lack of spoons in the drawers). He didn't claim the idea as his own; on the contrary, he said it is becoming a trend among the bachelor brethren, but it was the first time I had seen it.

He has purchased two dishwashers, installed them side by side, and uses them alternately, filling one with dirty dishes while taking clean dishes out of the other. He owns just enough dishes to fill one dishwasher. When he runs out of dishes, he switches on the one full of dirty dishes and reverses the process. He reports with extreme satisfaction that he never needs to unload a dishwasher and file the dishes back into the drawers and cupboards. I think there is an OR lesson in it.

I’ll call it “The Bipolar Dishwashers Method”.

Sunday, September 13, 2009

Introducing variability, flow and processes in a funny video to anyone

I'm leading two variability & flow management projects at the hospital right now, and the terms "variability" and "flow" are certainly not something the medics hear much about. I needed a quick way of explaining what the projects are about, what these terms mean, and what kinds of problems we are trying to resolve. A colleague suggested this video from the ever-popular "I Love Lucy" TV series, the "Chocolate Factory" episode. It does a wonderful job of making people laugh, while acting out some strong parallels to a process and the variability and flow within it. Take a look at the video (it's a funny one!) and read on for the parallels to the operation of a hospital. The doctors, nurses and patients on my team all found the video hilarious, and it made clear to them what we are trying to do in the variability & flow projects.



The parallels:
  • Process: the chocolates can be patients coming into the hospital 'conveyor belt'. Lucy and her friend Ethel can be the nurses, for example (or the various clerks, doctors, pharmacists, radiographers, etc.), handling the patients, 'dressing' them up or giving them care to make them better, so they can go on to the next hospital professionals down the conveyor belt, i.e. the pharmacists in the next room, to receive medications. The patient travelling along the conveyor belt is a process. Similarly, Lucy and Ethel picking up a chocolate from the conveyor belt, taking the wrapping paper, wrapping the chocolate nicely, placing it back onto the belt, and returning to position ready for the next chocolate, is a process. Lucy and Ethel are the 'servers' within the process, and the things they do to the chocolate are 'steps' within it. The girls feeding chocolates onto the conveyor belt in the previous room are the servers of the process upstream of Lucy & Ethel's wrapping process; the girls boxing the chocolates in the next room are the servers of the downstream process.
  • Flow: the chocolates going through Lucy & Ethel's wrapping process are a flow.
  • Variability: the speed at which chocolates are placed onto the conveyor belt is a source of variability, because that speed changes; so is the speed at which Lucy & Ethel wrap the chocolates, as they have very different wrapping styles. The result is a variable rate of wrapped chocolates flowing out of the wrapping process.
  • Queuing & waits: when Lucy & Ethel fell behind and started to collect chocolates in front of them and in their hats to wrap later, that was queuing the chocolates, and those chocolates were experiencing 'waits'.
  • Mis-communication: when the supervisor meanie lady shouted to the upstream girls to "let it roll" and nothing happened, so she had to go to the previous room to sort it out, that was mis-communication, or signal failure. :)
The video also shows some classic examples of problems around processes:
  • Isolated processes and working in silos: what is going on 'upstream' and 'downstream' is completely unknown to Lucy & Ethel.
  • Lack of an issue escalation procedure: when the chocolates came too fast for Lucy & Ethel to handle, they had no way of letting the upstream or the manager know (and of course, the meanie supervisor lady didn't allow them to leave a single chocolate behind).
  • Performance management: the meanie supervisor lady did not have realistic expectations of Lucy & Ethel's performance, or maybe she simply had no clue about the variability of the sometimes very high demand placed on them from upstream.
  • Reactionary management: when the supervisor lady came into the room, saw that Lucy & Ethel had no chocolates on the belt, and ordered the upstream to feed faster, that was very reactionary. She made the decision based on a single observation / data point, and did not ask why things looked that way.
I hope you find the video useful in your work as well. I'm sure you can draw parallels to industries other than health care; please feel free to share them with me. Things are often best explained with humour.

Sunday, June 14, 2009

Simple Hostel Yield Management Example


Continuing from my thoughts in Yield Management in Hostels?, in this article I present a simplified example of how a hostel might use simple yield management principles to increase its profitability.

Yield Management or Revenue Management or Revenue Optimization is a set of theories and practices that help companies, typically in the transportation and hospitality industry, gain the most revenue possible by selling a limited product where short-term costs are, for the most part, fixed. Simply put, this is why the prices of plane tickets change every time you check and why you can save on hotel rooms by booking in advance.

Consider a simplified hostel (another time I will discuss some of these simplifications). This hostel takes only single-person bookings for a maximum stay of 1 day. It has the following rooms: 6 private single rooms and one 6-person dorm. Beds in the single rooms go for £20 and beds in the dorm go for £10. The hostel's costs are entirely fixed, meaning it would rather fill a bed at 1p than leave it empty.

Our simplified hostel sees demand in two streams: cheapskate travellers who want the cheap dorm beds, and wealthier backpackers willing to splurge on a single room. The cheapskates would choose the single rooms if they were the same price, and this is the key to my example.

Our hostel is considering bookings for July 1. Currently 1 of the 6 single rooms is booked and the dorm is full, with 6 of 6 beds taken. Current revenue for this day is £80. This is low compared to the maximum potential of £180, but we're not concerned yet, because there are still several days left to take bookings for the single rooms. During this time, however, we may also have to turn away some cheapskates, as our dorm is full. Now we ask the question: what would happen to our revenue if we gave one of our cheapskates a free upgrade to a single room, freeing up a dorm bed for more bookings? Consider the scenarios in the following table:







| New single-room requests | New dorm requests | Occupancy with upgrade | Revenue with upgrade | Occupancy without upgrade | Revenue without upgrade |
| --- | --- | --- | --- | --- | --- |
| 5+ | 0 | 6/6 single, 5/6 dorm | £160 | 6/6 single, 6/6 dorm | £180 |
| 5+ | 1+ | 6/6 single, 6/6 dorm | £170 | 6/6 single, 6/6 dorm | £180 |
| x ≤ 4 | 0 | (2+x)/6 single, 5/6 dorm | £80 + £20x | (1+x)/6 single, 6/6 dorm | £80 + £20x |
| x ≤ 4 | 1+ | (2+x)/6 single, 6/6 dorm | £90 + £20x | (1+x)/6 single, 6/6 dorm | £80 + £20x |


The scenarios above show when we would benefit from upgrading a guest, when we would suffer, and when we are indifferent. In the first two scenarios we receive enough single-room booking requests that we could have filled the single rooms at £20, so putting a cheapskate in one for £10 hurts total revenue. In the third scenario we do not receive enough booking requests to have to turn anyone away, so we are indifferent between upgrading and not. Finally, in the last scenario, if we offer an upgrade, a cheapskate sleeps in a single room for £10 that would otherwise have gone empty, and the dorm remains full.

Evaluating the decision is then a matter of estimating the likelihood of each scenario and calculating the expected revenue of each choice. We evaluate the decision the same way you would evaluate the following game: I flip a fair coin; if it lands heads I give you £2, and if it lands tails you give me £1. Naturally you would calculate that 0.5*£2 - 0.5*£1 = £0.50, and thus the game is worth playing. The expected value of the decision to play is £0.50.

In order to carry this example through, suppose the probability of there being 5 or more single booking requests is 20% and 4 or fewer is 80%. Suppose the probability that 1 or more dorm booking requests is 75% and 0 is 25%. All probabilities are independent.

Expected value of offering an upgrade = 20%*25%*£160 + 20%*75%*£170 + 80%*25%*(£80+£20x) + 80%*75%*(£90+£20x) = £103.50 + £16x
Expected value of not offering an upgrade = 20%*25%*£180 + 20%*75%*£180 + 80%*25%*(£80+£20x) + 80%*75%*(£80+£20x) = £100 + £16x

As we can see, in the example I have just constructed, we can expect to make an extra £3.50 by giving a guest an upgrade, in the same way that we expect to gain £0.50 by playing the coin-tossing game. Now £3.50 may not sound like a lot, but scale it up to a multi-hundred-bed hostel and we're talking about real money.

What made this a winning decision? The £10 we might gain by replacing our upgradee with another guest in the dorms outweighs the £20 we might lose if we have to turn someone away from the single rooms.
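The whole decision fits in a few lines of code. A sketch of the expected-value comparison using the probabilities assumed above (the variable names and structure are mine):

```python
# Expected revenue with and without the free upgrade, per the scenario table.
P_HIGH_SINGLES = 0.20  # P(5+ new single-room requests)
P_DORM_DEMAND = 0.75   # P(1+ new dorm requests)

def expected_revenue(upgrade, x=0):
    """x = new single-room requests in the low-demand case (any 0..4)."""
    low = 80 + 20 * x  # low-single-demand revenue, common to both choices
    if upgrade:
        return (P_HIGH_SINGLES * ((1 - P_DORM_DEMAND) * 160
                                  + P_DORM_DEMAND * 170)
                + (1 - P_HIGH_SINGLES) * ((1 - P_DORM_DEMAND) * low
                                          + P_DORM_DEMAND * (low + 10)))
    return P_HIGH_SINGLES * 180 + (1 - P_HIGH_SINGLES) * low

print(expected_revenue(True) - expected_revenue(False))  # -> ~3.5 for any x
```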

So what? Just how likely is this scenario? Consider Smart Russell Square, a large hostel in central London, UK. As of 9:00 pm local time on Sunday, the current bookings* for Tuesday are as follows:
  • Large Dorms (10 person and above) 159/160 booked
  • Small Dorms (9 person and below) 135/276 booked.

*data gleaned from Hostelworld.com, reliability uncertain.

Based on your gut feeling, what are the odds that they could realize an expected benefit from upgrading some of their large-dorm guests to small dorms? 10 guests? 20? If the freed large-dorm beds were refilled, this could represent £100-£300 in additional revenue, minus of course the marginal costs of each extra guest, such as their free breakfast. Food for thought.

Later I would like to generalize this simple scenario, discuss the simplifications, assumptions, limitations and extensions. That's all for now, though.

Edit:
The way I've set this up might seem strange. Why go to the trouble of upgrading someone from the dorm when you could simply sell a single room as a dorm room? This is because I'm already looking forward to implementation. I don't expect hostel management IT systems to have the ability to sell a single at the dorm rate. Instead I envision hostel management IT systems linking bed inventory directly to what is offered online, so for us to offer beds at the dorm rate, there must be beds available in the dorms on our system. Additionally, rather than the IT system handling this directly, I envision a clerk or manager manually intervening in the system and upgrading a booking. This person might follow a simple set of decision rules compiled from analysis of past data. If this strategy proved profitable, then its integration into the IT systems might follow.
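
To sketch what such a compiled decision rule might look like (entirely hypothetical; I am not describing any real hostel IT system), the clerk's end-of-day check could be as simple as:

    # Hypothetical rule of thumb a clerk might follow each evening; the
    # probability estimates would be compiled from past booking data.
    def should_upgrade(singles_free, dorm_free, p_single_sellout, p_dorm_demand):
        """Upgrade one dorm guest to a single if the expected gain is positive."""
        if dorm_free > 0 or singles_free == 0:
            return False   # dorm isn't full, or no single to upgrade into
        expected_gain = 10 * p_dorm_demand - 20 * p_single_sellout
        return expected_gain > 0

    print(should_upgrade(singles_free=5, dorm_free=0,
                         p_single_sellout=0.20, p_dorm_demand=0.75))   # True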

Monday, June 8, 2009

Yield Management in Hostels?

In my recent travels in Europe I have again had significant exposure to the hosteling industry. As readers of this blog will know, we can't help seeing Operations Research problems and opportunities in our daily lives. Sure enough, we found ourselves analyzing our surroundings and considering the pricing structures of our hostels. In this article I hope to begin an exploration of pricing strategies in the hostel industry, one I will continue after I have collected some of your thoughts and more of my own.

The hostel industry has been developing rapidly throughout the world. According to Wikipedia, youth hostels had their humble origins in the German Jugendherberge (1912), non-profit hostels for youths, by youths. Fast forward to today and you can witness the evolution to profit-maximizing corporate hostels, sometimes exceeding 500 beds.

That said, sophistication in the industry seems to be developing more slowly. In particular, possibly due to its origins, there is significant resistance to profit-maximizing activity like yield management. I also believe there is a growing suite of hostel management IT systems, some with direct interfacing to booking websites. I can't claim to be an inside expert in the industry, though we did have a nice informal chat with the manager of a small-to-medium-sized non-profit hostel over beers in Munich.

Youth hostels face a problem that is similar in some ways to, and different in others from, that faced by traditional hotels. Apart from the obvious similarity of product, the primary similarity is that both sell an expiring good that is booked ahead of time and cannot be stored.

Hostels, however, do not have business customers. Traditional revenue optimization approaches for hotels centre around price discrimination: because leisure and business customers can be separated by booking time, hotels sell rooms early at a discount to money-saving leisure customers (who might otherwise have gone to the competition at full price) and sell the remainder later, at a higher price, to late-booking, price-insensitive business customers. Hostels, on the other hand, face an exclusive stream of budget-sensitive travellers. The differentiation achieved by time of booking is thus only a question of how far ahead the customer plans, and may say little about their willingness to pay.

Hostels have a wider range of products. I'm not an expert in the hospitality industry, so maybe our readers can confirm this, but I believe your typical hotel offers simply twin, triple, double, queen, and king rooms. The Meininger City Hostel and Hotel in Munich, Germany, for example, offers 9 distinct products on hostelworld.com: Single Private Ensuite, Twin Private Ensuite, 3 Bed Private Ensuite, 4 Bed Private Ensuite, 5 Bed Private Ensuite, 6 Bed Private Ensuite, 6 Bed Mixed Dorm Ensuite, 6 Bed Female Dorm Ensuite, and 14 Bed Mixed Dorm Ensuite. It bears noting that, for the most part, these products can be ranked such that any customer will unconditionally prefer one over those below it: hardly any customer would prefer to sleep in a 14 Bed Mixed Dorm when they could be in a 6 Bed.

Other factors are relevant to the question of yield management in hostels. I estimate that the majority of hostel stays are booked through internet booking websites, with the majority of those coming through hostelworld.com. Most of these bookings are thus made after some moderate price comparison, making the market fairly competitive. Many of these bookings will also factor in reviews of the hostel; sometimes hundreds of website users will have rated the hostel on things like security and cleanliness.

The lack of business customers does not mean that hostel customers cannot be segmented. I propose that hostels face two main types of customers. One group comprises the shoestring customers, willing to do anything to save a dollar (or a euro or a pound, etc.). The other group is more differentiating, willing to pay slightly more for a smaller dorm. I'm still working out the significance of this for myself.

I believe there is an opportunity here. Some initial research, based on my own experience and some creative use of hostelworld, shows that hostels often fill from the bottom up: the largest dorms with the cheapest beds are the first to fill, while the smaller rooms frequently go empty during the week. This may be a sign that the supply of hostel beds does not match demand, with more small dorms in the market than demanded and too few large dorms.

I welcome any comments on the topic. Is there a business opportunity here, or is it just academic? Is the current state of IT and sophistication in hosteling sufficient to work on elementary yield management? Most hostels have a Friday-Saturday price, and everyone in Munich has a low season, high season, and Oktoberfest price, but could we go further?

Tuesday, November 4, 2008

ORdinary Spreadsheets and ORdnances

The July-August 2008 Interfaces Journal ran a theme of "The Use of Spreadsheet Software in the Application of Management Science and Operations Research". In their article from that issue, "A Spreadsheet Implementation of an Ammunition Requirements Planning Model for the Canadian Army", Hurley and Balez describe a successful spreadsheet model for planning training ammunition expenditures.

Ammunition is expended in training courses in a highly uncertain environment. Course registration rates, failure rates (dropping out before completing the entire course), and other uncertainties make it difficult to accurately forecast ammunition consumption. Planners must choose a course portfolio that will not result in an ammunition shortage. Of course, to minimize the chances of running out of ammo, planners to date had been planning to the maximum expenditure per course. The consequence was that the program came in repeatedly under budget, by 38.7% in 2002-03. This is far from ideal when attempting to allocate scarce resources. Naturally this was an opportunity to apply risk management principles, either to request a smaller budget to accomplish the same goals or to do more with the same budget.

The solution took the form of a spreadsheet tool. An Excel spreadsheet combined with Visual Basic for Applications (VBA) provided an easy and intuitive interface for planners to interact with the risk model. As a result, in 2004-05 the program was only 3.1% under budget. I will not go into too many more details, as they are available in the article, but I had two interesting thoughts:

[1] A big advantage of using spreadsheets is the familiarity most managers have with them. Leveraging this, the team built a simple spreadsheet simulation to demonstrate the portfolio effect of running several courses. With repeated "F9-Simulations" (my term) they were able to show that 10 course sections will never use 10 times the per-course maximum (as presently budgeted); total consumption is reliably much less. Moving up a level and using @Risk to run 10,000 simulations, they were able to demonstrate the concept convincingly (a rough analogue of such a simulation appears at the end of this item).
We cannot overemphasize the value of this type of spreadsheet demonstration in selling the potential of an OR model.
Interestingly enough, their experience differs from my own. I tried to convince an ultrasound department supervisor that if appointments averaging 45 minutes but of uncertain length are booked every 45 minutes, her technologists would reliably work overtime. To do this I built a simple spreadsheet simulation, but it was totally lost on her. This is not meant as a knock against the approach, but rather to emphasize the importance of manager familiarity with spreadsheets. My ultrasound supervisor, a senior medical radiation technologist, thinks differently from a rising Canadian Colonel.
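
Here is a rough Python analogue of those F9-Simulations (my sketch with invented numbers; the article's actual model and distributions are richer). It makes the portfolio effect visible: the total consumption of 10 course sections lands reliably below 10 times the per-course maximum.

    import random

    random.seed(1)
    MAX_PER_COURSE = 1000   # budgeted maximum rounds per course section

    def course_consumption():
        # Invented uncertainty: registration and failure rates mean a
        # course uses somewhere between 50% and 100% of its maximum.
        return random.uniform(0.5, 1.0) * MAX_PER_COURSE

    # 10,000 "F9 presses": total consumption of 10 concurrent sections.
    totals = [sum(course_consumption() for _ in range(10))
              for _ in range(10_000)]
    print(max(totals) / (10 * MAX_PER_COURSE))
    # Even the largest of 10,000 totals stays below 1.0 (around 0.9 here),
    # so budgeting 10x the per-course maximum systematically overstates need.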

[2] When selecting a portfolio of courses to fit the approved budget (less than requested), the Army chose to manually optimize using the tool rather than accept a priority-optimized result from a linear program. This perplexed the authors, and I think they wrongly blamed themselves for a failure to achieve buy-in. In my experience, when dealing with problems of a magnitude that an individual can wrap their head around, clients prefer to leverage their intuition and optimize by hand. As OR practitioners we may not trust the client to achieve a truly optimal result, but the client does not trust a model to capture all of the nuances they know intuitively, and the answer, of course, is somewhere in between.
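
For a flavour of the by-hand style the planners preferred (a toy example of mine, not the article's model), the manual rule amounts to walking down the priority list until the approved budget runs out, with the freedom to deviate where intuition says so:

    # Toy course portfolio: (name, priority, expected ammunition cost).
    # All figures invented for illustration.
    courses = [
        ("Rifle Basic", 1, 40_000),
        ("Support Weapons", 2, 75_000),
        ("Live Fire Exercise", 3, 120_000),
        ("Refresher", 4, 25_000),
    ]

    def select_portfolio(courses, budget):
        """Greedy priority-ordered selection: a rule a planner can apply by hand."""
        chosen = []
        for name, priority, cost in sorted(courses, key=lambda c: c[1]):
            if cost <= budget:
                chosen.append(name)
                budget -= cost
            # A human planner might deviate here -- say, swap in a cheaper
            # low-priority course -- exactly the nuance they don't trust
            # a linear program to capture.
        return chosen

    print(select_portfolio(courses, budget=150_000))
    # ['Rifle Basic', 'Support Weapons', 'Refresher']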

The idea of doing OR with Excel probably wasn't what got you started in the field, but if you like seeing results it might just keep you in it.

Hurley, W.J., M. Balez. 2008. A Spreadsheet Implementation of an Ammunition Requirements Planning Model for the Canadian Army. Interfaces 38(4) 271-280.

Wednesday, June 25, 2008

Decision Making Model on Stroke Prevention: Warfarin or not

An interesting talk I attended at the CORS 2008 conference in Quebec City was by Beste Kucukyazici from the Faculty of Management of McGill University. The topic of the talk was “Designing Antithrombotic Therapy for Stroke Prevention in Atrial Fibrillation”.

Beste Kucukyazici presented a study of stroke patient data examining whether a decision model could be derived to systematically decide on commencing warfarin treatment for a stroke patient, and at what intensity. Now my question is: will OR decision models take a bigger and bigger foothold in the medical arena as we gather more useful patient data in well-planned studies? Medical doctors tend to argue that every patient is a different case and needs to be examined on an individual basis. However, if a model such as Kucukyazici's can prove the accuracy of its decisions on real patient data, that would begin to weaken the doctors' argument and favour a more systematic approach. At the least, such models might help reduce the complexity of the doctor's decision-making process, or even reduce the chance of human error in diagnosis.

Atrial fibrillation, a common arrhythmia particularly prevalent among the elderly, is one of the major independent risk factors for stroke. Several randomized controlled trials have shown that long-term antithrombotic therapy with warfarin significantly reduces the risk of stroke; however, it also increases the risk of suffering a major bleed. Given the potential benefits and risks of warfarin treatment, clinicians face a two-fold decision: (i) whether to start the therapy, and (ii) the intensity of warfarin use. The objective of the study is to develop an analytical framework for designing optimal antithrombotic therapy with a patient-centred approach. The approach seeks a rational framework for evaluating these complex medical decisions by incorporating complex probabilistic data into informed decision making, identifying the factors influencing such decisions, and permitting explicit quantitative comparison of the benefits and risks of different therapies.
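
As a cartoon of the benefit-risk trade-off such a framework formalizes (all numbers below are invented for illustration and are not from Kucukyazici's study), the core comparison is an expected-utility calculation:

    # Invented one-year event probabilities and crude utility penalties --
    # NOT clinical figures. A real model conditions these on the patient
    # and on the chosen warfarin intensity.
    def expected_loss(p_stroke, p_major_bleed, u_stroke=-0.6, u_bleed=-0.3):
        """Expected utility loss from stroke and major bleeding events."""
        return p_stroke * u_stroke + p_major_bleed * u_bleed

    no_warfarin = expected_loss(p_stroke=0.05, p_major_bleed=0.01)  # -0.033
    warfarin = expected_loss(p_stroke=0.02, p_major_bleed=0.03)     # -0.021

    # Start therapy when it yields the smaller expected loss.
    print("start warfarin" if warfarin > no_warfarin else "withhold")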

Thursday, April 17, 2008

Making Decisions at Procter & Gamble: O.R.

Procter & Gamble (P&G), a company with $76 billion in annual sales, 138,000 employees, and operations in over 80 countries, relies heavily on operations research to answer questions and make important business decisions, such as:


  • which brand should be used for new products
  • how to choose suppliers to procure and source materials
  • how to use forecasting to deal with the factors impacting international trade and finance
  • how much inventory to store and where
  • how to attract and keep workforce talent for the company

It is obvious that OR applications can make a company very powerful, but it takes OR talent who can talk business to do it. To quote Brenda Dietrich, an IBM Fellow at IBM's Watson Research Center:



There's a gap between the math professionals and the nonmath executives in many companies. The companies who have people who can walk into a business meeting and tell executives how to use OR tools are the ones who've got the edge. Deployment is no longer done just by the math people; analytics has become much more usable by a broader set of people within an organization.


Click here to view the full article.