
Wednesday, September 1, 2010

What motivates us the most

First, let me make clear that I am talking about motivation in the workplace. In personal life it's easy: in the first half of our lives it's sex, in the second half it's comfort. (Tongue in cheek, of course.)

Workplace motivation is more intriguing, and it is an area every OR specialist should keep at the forefront of their mind: the questions and aspects of human motivation. Here's an excellent animated video derived from a talk Dan Pink gave at the RSA. Mr. Pink also seems to excel in self-motivation, since the lecture is a small masterpiece.
True, these research findings have been popping up here and there for at least the last two decades, and plenty of companies have adopted some of the principles, but this short video sums them up in an excellent, concise way. Enjoy!



However, I personally think that all these findings are missing some essential qualifications. They reflect the motivation of people in developed countries, where there is no hunger and war is something nobody really remembers.
To echo the words of Mika Waltari in his book Sinuhe the Egyptian, where the narrator describes one lucky country he travels through: "...and the people, who knew neither hunger nor war, were already in middle age...".
I wonder how the same research would turn out in war-torn Angola or Iraq.
I suspect that this type of "make the world a better place" altruism grows best in an economically nutritious Petri dish: a relatively wealthy society. But what do I know about poor countries? Maybe they would surprise us the most. The world is changing, after all. It's the Internet age now.

One observation I have made about the phenomenon of people working in their free time for free (Linux developers, etc.): at first I would liken it to simple hobbyism, and I think it does have its roots in hobbies. Everybody at some point in their life likes to build a "model airplane" and see it fly. But, and here comes my observation, they would rather see it soar than merely fly. In other words, people don't mind working for free on somebody else's project (e.g. Linux), but they prefer to jump on a winning bandwagon. The likelihood of broad impact (let's even say worldwide impact) is a distinct motivation in its own right.


Tuesday, April 27, 2010

Security Screening: Bottleneck Analysis

Earlier Dawen wrote an article about her recent experience in security screening at Gatwick Airport. I thought this was an opportunity to demonstrate a simple process analysis tool which could be considered a part of Operations Research: Bottleneck Analysis.

At the airport, servers in the two-step security check process were un-pooled and thus dedicated to one another. By this I mean that a security system with four staff checking boarding passes (step 1) and six teams at x-ray machines (step 2) was actually functioning as four separate units rather than as one team. Each unit had a boarding pass checker; two of the units had a single x-ray machine and the other two had two x-ray machines. The consequence was that the one-to-one units overwhelmed their x-ray teams, forcing their checkers to stop checking boarding passes and stand idle. The one-to-two units were starved of passengers because the boarding pass checking could not keep up, leaving their x-ray machines idle.

We know that this configuration is costing them capacity. A very interesting question is: How much?

A Bottleneck Analysis is a simple tool for determining a system's maximum potential throughput. It says nothing about total processing time or the number of passengers waiting in the system, but it does determine the rate at which screenings can be completed. Think of it as emptying a wine bottle held upside down: whether it's a half-full bottle of molasses or a full bottle of wine, the maximum rate of flow is determined by the width of the neck (the bottleneck!). The maximum throughput rate of a system is equal to the throughput rate of its bottleneck.

The throughput of the current system is limited by the bottleneck in each unit, each sub-system. In the case of the one-to-one units we know this is the x-ray machine, as it is unable to keep up with supply from upstream and thus limits throughput. In the case of the one-to-two units we know it is the boarding pass checker, as the x-ray machines wait idly for new passengers and are thus not the limiting factor. It follows that the maximum throughput of the combined system is two times the throughput of a single boarding pass checker plus two times the throughput of a single x-ray machine.

The natural reconfiguration that Dawen alludes to in her article is one where the resources are pooled and the queues are merged. Rather than having x-ray machines dedicated to particular boarding pass checkers, passengers completing step 1 are directed to the x-ray machine with the shortest queue. In this way an x-ray machine is only idle if all four boarding pass checkers are incapable of supplying it with a passenger, and a boarding pass checker is only idle if all six x-ray machines are overwhelmed.

What is the throughput of this reconfigured system? The throughput is equal to the bottleneck of the system. This is either the four boarding pass checkers as a team if they are incapable of keeping the x-rays busy or the x-ray machines as a group because they are unable to keep up with the checkers. The bottleneck and thus maximum throughput is either equal to four times the throughput of a boarding pass checker (step 1) or six times the throughput of an x-ray machine (step 2), whichever is smaller.
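
To make the comparison concrete, here is a minimal Python sketch of both calculations. The rates b_rate and x_rate are purely illustrative placeholders, not actual Gatwick figures:

    # Illustrative throughput rates (passengers per minute); hypothetical values
    # chosen only to demonstrate the calculation.
    b_rate = 3.0  # one boarding pass checker (step 1)
    x_rate = 2.0  # one x-ray team (step 2)

    # As-is system: two one-to-one units limited by their x-ray team,
    # plus two one-to-two units limited by their boarding pass checker.
    as_is = 2 * x_rate + 2 * b_rate

    # Reconfigured (pooled) system: limited by whichever step is slower overall.
    pooled = min(4 * b_rate, 6 * x_rate)

    print(f"As-is throughput:  {as_is:.1f} passengers/min")
    print(f"Pooled throughput: {pooled:.1f} passengers/min")

With these made-up rates the as-is layout delivers 10 passengers per minute against a pooled potential of 12, which happens to foreshadow the worst case derived below.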

Returning to the exam question: how much capacity is this misconfiguration costing them? At this point we must resort to some mathematical notation, or else words will get the better of us.

Readers uninterested in the mathematics may want to skip to the conclusion.

Let x be the throughput rate of an x-ray machine.
Let b be the throughput rate of a boarding pass checker.

The maximum throughput of the as-is system is thus 2x + 2b (see earlier).
If step 1 is the bottleneck in the reconfigured system then the max throughput is 4b.
If step 2 is the bottleneck of the reconfigured system then the max throughput is 6x.

If 4b < 6x then step 1 is the bottleneck; if 4b > 6x then step 2 is the bottleneck.

If we were managers working for the British Airport Authority (BAA) at Gatwick Airport our work would essentially be done. We could simply drop in our known values for b and x and reach our conclusion. For this article, though, we don't have the luxury of access to that information.

Returning to the exam question again: how can we determine the cost of this misconfiguration without knowing b or x?

We will employ a typical academic strategy:
Let b = αx or equivalently b/x = α.

If 4b < 6x, i.e. α < 1.5, then the throughput of the new system is 4b = 4αx.
If 4b > 6x, i.e. α > 1.5, then the throughput of the new system is 6x.

The throughput of the as-is system is 2b + 2x = 2αx + 2x.

The fraction of realized potential capacity in the as-is system is the throughput of the as-is system divided by the potential throughput of the reconfigured system.

If α < 1.5 then it is (2αx + 2x) / 4αx = 1/2 + 1/(2α).
If α > 1.5 then it is (2αx + 2x) / 6x = 1/3 + α/3.

What are the possible values of α? We know α is at least 1, because otherwise the x-ray machines in the one-to-one units would not have been overwhelmed by a more productive boarding pass checker. We know α is at most 2, or else the x-ray machines in the one-to-two units would not have been idle.

We now have a mathematical expression for the efficiency of the current system:

f(α) = 1/2 + 1/(2α) for 1 ≤ α ≤ 1.5
f(α) = 1/3 + α/3 for 1.5 ≤ α ≤ 2

But what does this look like?
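
Before looking at the graph, here is a small Python sketch of the piecewise formula, evaluated at a few values of α as a sanity check:

    def efficiency(alpha):
        """Realized fraction of potential capacity for the as-is system,
        per the piecewise formula above (valid for 1 <= alpha <= 2)."""
        if alpha <= 1.5:
            # Pooled system limited by the four checkers (4b = 4*alpha*x)
            return 0.5 + 1 / (2 * alpha)
        # Pooled system limited by the six x-ray teams (6x)
        return 1 / 3 + alpha / 3

    for a in (1.0, 1.25, 1.5, 1.75, 2.0):
        print(f"alpha = {a:.2f} -> efficiency = {efficiency(a):.1%}")

This prints 100% at α = 1, 90% at α = 1.25, 83.3% at α = 1.5, 91.7% at α = 1.75 and 100% at α = 2.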

Depending on the relative effectiveness of boarding pass checking and the x-ray machines, the current efficiency is as follows:

[Graph: realized efficiency f(α) over 1 ≤ α ≤ 2, peaking at 100% at the endpoints and dipping to 83.3% at α = 1.5]
If α is 1 or 2, then the as-is system is at peak efficiency. If α is 1.5 we are at our worst case scenario and efficiency is 83.3% of optimal.

Conclusion

Based on the graph above, and depending on the relative effectiveness of the boarding pass checkers and the x-ray machines (unknown), the system is running at between 83.3% and 100% efficiency. The most likely value is somewhere in the middle, so there is a very good chance that the configuration of the security system is costing them about 10% of its possible capacity. To rephrase that, a reconfiguration could increase capacity by as much as 20%, but more likely by around 11%. In the worst case, a reconfiguration could allow for the reallocation of an entire x-ray team, yielding significant savings.
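
The step from efficiency to potential capacity gain is just a reciprocal; a quick sketch of the figures quoted above:

    # Capacity gain from a reconfiguration = 1 / efficiency - 1
    for eff in (5 / 6, 0.90, 1.0):  # worst case, a mid-range value, best case
        gain = 1 / eff - 1          # e.g. 1 / (5/6) - 1 = 0.20
        print(f"efficiency {eff:.1%} -> potential gain {gain:.1%}")

At 83.3% efficiency the gain is 20%, at 90% it is roughly 11%, and at 100% there is nothing to recover.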

As stated previously, a bottleneck analysis will determine the maximum throughput rate, but it says nothing about the time to process a passenger or the number of passengers in the system at any one time. We now know that this misconfiguration is costing them about 10% of capacity, but there are other costs currently hidden from us. What is the customer experience like, and how could it improve? Is the current system causing unnecessarily long waiting times for some unlucky customers? Definitely. More advanced methods like Queuing Theory and Simulation will be necessary to answer those questions, both tools firmly in the toolbox of Operations Research practitioners.




Related articles:
OR not at work: Gatwick Airport security screening (an observation and process map of the inefficiency)
Security Screening: Discrete Event Simulation with Arena (a quantification of the inefficiency through simulation)

Monday, November 3, 2008

Computer Age Workers Suffer Digital Fatigue - Can OR Help?

The October/November Issue of Scientific American Mind magazine highlights the increasing digital fatigue we are facing as a result of always being plugged into technology.

According to a study conducted by the article's authors, our neural circuitry is actually rewired as we become more computer savvy. Internet-naive subjects, after only five consecutive days of internet use (one hour/day), had already (unconsciously, of course) rewired their brains to match those of the computer-savvy subjects in the study.

While the prospect of adapting our brains to optimize our use of the internet may sound exciting, the authors warn that the computer age has plunged us into a state of "continuous partial attention" - which they describe as keeping tabs on everything, while never truly focusing on anything.

This can result, according to the authors, in a state of "techno-brain burnout", where people place their brain in a heightened state of stress by paying continuous partial attention to anything and everything. Because our brains were not built to sustain this level of monitoring for such extended periods of time, this "techno-brain burnout" is threatening to become an epidemic.

While these heightened stress levels can have short-term productivity benefits, they are proving to be a significant hindrance to medium- and long-term productivity, due to worker fatigue and an inability to concentrate.

So where does OR come in? Well, let's review the characteristics of this vexing problem of the computer age, with workers who:
  • Have too much to do, too little time
  • Are overwhelmed by incoming stimuli
  • Are fatigued and drained
Perhaps an old school approach - from the early days of scientific management - could be the right prescription.

Our old friend Frank Gilbreth, father of motion study, questioned on his second day on the job as a bricklayer why he was being taught several different methods for laying bricks. So Frank developed motion and fatigue study and created a process for laying bricks that was vastly more efficient than the processes then in place.

In fact, Frank's new method increased productivity by nearly 200% while simultaneously reducing worker fatigue. Here's a two-minute overview of Frank's bricklaying study from YouTube - a nice refresher if it's been a while:



It seems that computer age workers could greatly benefit from motion study analysis - and who better to deliver it than OR practitioners?

If you walked into a factory and saw everyone on the assembly line improvising and doing their job any way they pleased, without any knowledge of best practices or recommended techniques, wouldn't you be stunned? Yet, to this point in time, that is exactly how the computer-age workforce operates.

Time management and productivity experts have been at the forefront of efforts to tackle these problems, but their recommendations are usually fairly general and without quantification. While advice such as "don't check email first thing in the morning" may indeed be worth practicing, eventually we should be able to guide specific individuals and workers to their optimal level of productivity.

And if that isn't ripe for OR, I don't know what is.