Sunday, December 21, 2008
In practice, IT always needs an "end state" to the requirements of a project in order to start the development and testing process. However, in O.R. there is never a true "end state", since O.R. is constantly iterating toward the optimal model. By nature, this is a cause of conflict and frustration between O.R. and IT. Am I being silly, or is this a prime candidate for the well-known practice of agile development by iteration? Wait, isn't Google doing this constantly with the beta releases of its products?
Tuesday, November 4, 2008
Ammunition is expended in training courses in a highly uncertain environment. Course registration rates, failure rates (before completing the entire course), and other uncertainties make it difficult to accurately forecast ammunition consumption. Planners must choose a course portfolio that will not result in an ammunition shortage. Of course, to minimize the chances of running out of ammo, planners had to date been budgeting for the maximum expenditure per course. The consequence was that the program repeatedly came in under budget, by 38.7% in 2002-03. This is far from ideal when attempting to allocate scarce resources. Naturally this was an opportunity to apply risk management principles in order to either request a smaller budget to accomplish the same goals or to do more with the same budget.
The solution took the form of a spreadsheet tool. The Excel spreadsheet combined with Visual Basic for Applications (VBA) provided an easy and intuitive interface for planners to interact with the risk model. As a result, in 2004-05 the program was only 3.1% under budget. I will not go into too many more details as they are available in the article, but I had two interesting thoughts:
 A big advantage of using spreadsheets is the familiarity most managers have with them. Leveraging this, the team built a simple spreadsheet simulation to demonstrate the portfolio effect of running several courses. With repeated "F9-simulations" (my term) they were able to demonstrate that 10 course sections will never use 10 times the maximum (as presently budgeted); actual consumption is reliably much less than that. Moving up a level and using @Risk to run 10,000 simulations, they were able to demonstrate the concept convincingly.
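In the same spirit, here's a minimal Python sketch of such an "F9-simulation". The per-course consumption numbers are invented (the article's actual distributions differ), but it shows the portfolio effect: ten sections together reliably consume far less than ten times the per-course maximum.

```python
import random

random.seed(42)

# Hypothetical per-course ammunition use: uniform between a minimum and a
# maximum expenditure. These bounds are illustrative, not from the article.
MIN_ROUNDS, MAX_ROUNDS = 4_000, 10_000

def portfolio_use(n_courses):
    """Total rounds consumed in one simulated run of n_courses sections."""
    return sum(random.uniform(MIN_ROUNDS, MAX_ROUNDS) for _ in range(n_courses))

# 10,000 trials, as in the @Risk run described above.
trials = [portfolio_use(10) for _ in range(10_000)]
budget_at_max = 10 * MAX_ROUNDS            # the "plan to the maximum" budget
p95 = sorted(trials)[int(0.95 * len(trials))]

print(f"Plan-to-max budget : {budget_at_max:,.0f}")
print(f"95th pct of trials : {p95:,.0f}")  # reliably well under the max
```

Even the 95th percentile of total consumption sits far below the plan-to-the-maximum budget, which is exactly the slack the risk model let the planners reclaim.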
"We cannot overemphasize the value of this type of spreadsheet demonstration in selling the potential of an OR model." Interestingly enough, their experience differs from my own. I tried to convince an ultrasound department supervisor that if appointments of uncertain length averaging 45 minutes are booked every 45 minutes, her technologists would reliably work overtime. To do this I built a simple spreadsheet simulation, but it was totally lost on her. This is not meant as a knock against the approach, but rather to emphasize the importance of manager familiarity with spreadsheets. My ultrasound supervisor, a senior medical radiation technologist, thinks differently from a rising Canadian Colonel.
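For what it's worth, the ultrasound argument can be made in a few lines of Python as easily as in a spreadsheet. The duration distribution below is assumed (normal, mean 45 minutes, truncated at 10), not her clinic's data; the point is that uncertain 45-minute appointments booked every 45 minutes produce overtime on most days.

```python
import random

random.seed(1)

def day_overtime(n_appts=10, slot=45, mean=45, sd=15):
    """Simulate one day: appointments booked every `slot` minutes, actual
    durations normally distributed (truncated at 10 min). Returns minutes
    worked past the scheduled end of the day."""
    finish = 0.0
    for i in range(n_appts):
        scheduled = i * slot
        start = max(finish, scheduled)          # can't start before booked time
        duration = max(10.0, random.gauss(mean, sd))
        finish = start + duration
    return max(0.0, finish - n_appts * slot)    # past the planned end of day

days = [day_overtime() for _ in range(5_000)]
print(f"Mean overtime: {sum(days)/len(days):.1f} min")
print(f"Days with any overtime: {100*sum(d > 0 for d in days)/len(days):.0f}%")
```

The asymmetry is the whole story: a short appointment can't claw time back (the next patient isn't there yet), while a long one pushes everything later, so delays accumulate over the day.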
 When selecting a portfolio of courses to fit the approved budget (less than requested), the Army chose to manually optimize using the tool rather than accept a priority-optimized result from a linear program. This perplexed the authors, and I think they wrongly blamed themselves for a failure to achieve buy-in. In my experience, when dealing with problems of a magnitude that an individual can wrap their head around, clients prefer to leverage their intuition and optimize by hand. As OR practitioners we may not trust the client to achieve a truly optimal result, but clients do not trust a model to capture all of the nuances they know intuitively - and the answer, of course, is somewhere in between.
The idea of doing OR with Excel probably wasn't what got you started in the field, but if you like seeing results it might just keep you in it.
Hurley, W.J., M. Balez. 2008. A Spreadsheet Implementation of an Ammunition Requirements Planning Model for the Canadian Army. Interfaces 38(4) 271-280.
Monday, November 3, 2008
According to a study conducted by the article's authors, our neural circuitry is actually rewired as we become more computer savvy. Internet-naive subjects, after only five consecutive days of internet use (one hour/day), had already (unconsciously, of course) rewired their brains to match those of the computer-savvy subjects in the study.
While the prospect of adapting our brains to optimize our use of the internet may sound exciting, the authors warn that the computer age has plunged us into a state of "continuous partial attention" - which they describe as keeping tabs on everything, while never truly focusing on anything.
This can result, according to the authors, in a state of "techno-brain burnout", where people place their brain in a heightened state of stress by paying continuous partial attention to anything and everything. Because our brains were not built to sustain this level of monitoring for such extended periods of time, this "techno-brain burnout" is threatening to become an epidemic.
While these heightened stress levels can have short-term productivity benefits, they are proving to be a significant hindrance to medium- and long-term productivity, due to worker fatigue and an inability to concentrate.
So where does OR come in? Well, let's review the characteristics of this vexing problem of the computer age, with workers who:
- Have too much to do, too little time
- Are overwhelmed by incoming stimuli
- Are fatigued and drained
Our old friend Frank Gilbreth, father of motion study, on his second day on the job as a bricklayer, questioned why he was being taught several different methods for laying bricks. So Frank developed motion and fatigue study, and created a process for laying bricks that was vastly more efficient than the processes currently in place.
In fact, Frank's new method increased productivity by nearly 200%, while simultaneously reducing worker fatigue. Here's a two-minute overview of Frank's bricklaying study from YouTube - a nice refresher course if it's been a while:
It seems that computer age workers could greatly benefit from motion study analysis - and who better to deliver it than OR practitioners?
If you walked into a factory, and saw everyone on the assembly line improvising and doing their job any way they pleased, without any knowledge of best practices or recommended techniques, wouldn't you be stunned? Yet to this point in time, this is how the computer age workforce operates.
Time management and productivity experts have been at the forefront of efforts to tackle these problems, but their recommendations are usually fairly general, and without quantification. While advice such as "don't check email first thing in the morning" may indeed be worth practicing, eventually, we should be able to help guide specific individuals and workers to their optimal level of productivity.
And if that isn't ripe for OR, I don't know what is.
Tuesday, October 28, 2008
Beginning with the worst example I saw in my research, we look at the Nova Scotia Department of Health Website. Waiting times are reported by authority and by facility, important data for individuals seeking to balance transportation with access. However, it's how the wait times are measured that worries me the most. Waiting time is defined as the number of calendar days from the day the request arrives to the next available day with three open appointments. I have found that this is the traditional manner in which department supervisors like to define waiting lists, but at a management level it's embarrassingly simplistic. At the time of writing, the wait time at Dartmouth General Hospital for a CT scan is 86 days. I guarantee you that not every patient is waiting 86 days for an appointment. Not even close. Neither is the average 86 days, nor is the median 86 days. The question of urgency requires that we discuss our level of access for varying urgencies. Additionally, there's the fact that 3 available appointments 86 days from now says nothing about what day my schedule and the hospital's schedule will allow for an appointment. If there's that much wrong with this measurement method, then why do they do it? The simple fact is that it is very easy to implement. In healthcare where good data can be oh so lacking, this system of measuring "waiting lists" is cheap and easy to implement. No patient data is required, one needs simply to call up the area supervisor or a booking clerk and ask for the information. So hats off to Nova Scotia for doing something rather than nothing, which indeed is better than some of the provinces, but there's much work to be done.
Next, we'll look at the Manitoba Health Wait Time Information website. Again we have data reported by health authority and facility. Here we see the "Estimated Maximum Wait Time" as measured in weeks. The site says, "Diagnostic wait times are reported as estimated maximum wait times rather than averages or medians. In most cases patients typically wait much less than the reported wait time; very few patients may wait longer." If this is true, and it is, then this is pretty useless information, isn't it? Indeed I am reconsidering my accusation of Nova Scotia being the worst of the lot. If this information represents something like the 90th or 95th percentile then I apologize because, as I discuss later, this is a decent figure to report. However, it is not explicitly described as such.
Heading west to Alberta, we visit the Alberta Waitlist Registry. Here we can essentially see the waiting time distribution of most patients scanned in MRI or CT across the province in the last 90 days. The site reports the "median" (50th) and "majority" (90th) percentiles of waiting time. It then reports the percentage of patients served within bands ranging from under 3 weeks to over 18 months. What is lacking in this data is two key elements. For one, both day patients and inpatients are included in this data. This means that both the patient waiting for months to get an MRI on their knee and the patient waiting for hours to get one on their head are treated as equal. Patients admitted to the hospital and outpatients experience waiting times on time scales of different orders of magnitude and should not be considered together. The percentage of patients seen in less than 3 weeks must therefore include many inpatients and thus overstates the true level of service. The other key element is the notion of priority. Once again, for an individual in the population looking for information about how long they might wait, or for a manager or politician looking to quantify the level-of-care consequences of current access levels, this data isn't very useful because it lacks priority. If urgent patients are being served at the median waiting time, this shows significant problems in the system, but without data reported by urgency, we can only guess that this is being done well. As someone who has seen it from the inside, I would NOT be confident that it is.
Now I return to what westerners would rather not admit is the heart of Canada, Ontario and the Ontario Ministry of Health and Long-Term Care website. This site measures wait times in terms of the time between request and completion. It reports the 90th percentile wait times in days by facility and provincially and calls it the "point at which 9 out of 10 patients have had their exam." The data excludes inpatients and urgent outpatients scanned the same day, addressing a critical issue I had with the Alberta data. Priorities are lacking, but with a little digging you can find the province's targets by priority, so there is, perhaps, hope. Reporting the 90th percentile seems like a good practice to me. With the funky distributions we see when measuring waiting times, averages are certainly of no use. Additionally, the median isn't of great interest because it is not an indication of what any one individual's experience will be. This leaves the 90th percentile, which expresses what might be called a "reasonable worst-case scenario".
Finally I turn to the organization whose explicit business is communicating complex issues to the public, the Canadian Broadcasting Corporation. Their CBC News Interactive Map from November 2006 assigned letter grades from A-F, converted from the percentage of the population treated within benchmark. Who knows whether this glosses over the lack of priority data or includes the percentage that met the benchmark for each priority, but it's a start. Letter grades given were: BC N/A, AB D, SK N/A, MN F, ON C, QC N/A, NB N/A, NS N/A, PEI F, NF N/A. So with over half not reporting, there wasn't much they could do.
So what have we learned from this survey? Well, we have certainly learned that the writer has a love of detail and is dissatisfied with each province that omits any. This is, as discussed in the introduction, natural for an operations research practitioner. If I were advising someone on the development of diagnostic access-to-care metrics, I would tell them this: (1) Focus on the patient experience. Averages and medians don't tell me what my experience might be; 90th percentiles do a much better job of this. (2) Focus on the context. Waiting times in the context of an inpatient are in a different universe than those of an outpatient and should be treated as such. Waiting times of urgent cases vs. routine cases bear different significance and should be similarly separated.
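To see why the choice of statistic matters, here is a small Python sketch on invented, skewed waiting-time data of the kind imaging queues produce. Note how both the mean and the median understate the "reasonable worst case" that the 90th percentile captures.

```python
import random

random.seed(7)

# Hypothetical skewed waiting times (days), lognormal: most patients are
# seen fairly quickly, while a long tail waits far longer -- the "funky
# distribution" typical of diagnostic imaging queues.
waits = sorted(random.lognormvariate(3.0, 0.8) for _ in range(10_000))

mean = sum(waits) / len(waits)
median = waits[len(waits) // 2]
p90 = waits[int(0.90 * len(waits))]

print(f"mean={mean:.0f} days  median={median:.0f} days  90th pct={p90:.0f} days")
```

On data like this the 90th percentile can easily be double or triple the median, which is exactly why it is the more honest number to publish for an individual wondering how long they might wait.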
Monday, October 20, 2008
On the west coast of Canada, two groups within the CIHR (Canadian Institutes of Health Research) Team in Operations Research to Improve Cancer Care are making an impact at the BC Cancer Agency. They would like to call out to the OR community to join their effort to establish an online community for sharing resources among the OR people working in cancer care.
The British Columbia Cancer Agency (BCCA) is the sole cancer treatment provider for the entire province. The problem to be resolved at the facility was a lack of space (examination rooms and space for physicians to dictate) at the ambulatory care unit (ACU). However, again, the process-flow data was not available. The BCCA OR team of Pablo Santibáñez and Vincent Chow mapped the patient flow process and then manually collected time-and-motion data to track the movement of patients and physicians. The data fed a simulation model used to evaluate different what-if scenarios: different appointment scheduling methods and room allocation methods. As a result, the team was able to achieve up to a 70% reduction in patient appointment wait time with minimal impact on the clinical duration. They were also able to free up to 26% of the exam rooms to accommodate other physician duties.
On the academic front, Antoine Sauré, a Ph.D student at the Sauder School of Business at the University of BC, has been helping the BCCA in another department: the Radiation Therapy treatment units. This research is motivated by the adverse effects of delays on patients’ health, such as physical distress and deterioration of quality of life, and by the inefficient use of expensive resources. Rather than maximizing revenue, the main purpose of the work is to identify good scheduling policies for dynamically allocating available treatment capacity to incoming demand so as to achieve wait time targets in a cost-effective manner. Currently, the number of urgent treatments starting after the recommended target time is significantly below the target. Achieving this goal involves the development of a Markov Decision Process model and simulation models for identifying and testing different types of policies. The research is ongoing and no results are available yet. However, the team is ready to test algorithms for determining the optimal scheduling policy, based on an affine approximation of the value function and a column generation algorithm to tackle the otherwise very large MDP problem.
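To give a flavor of the MDP approach - this is a toy, not Sauré's model; the state space, costs, and arrival probabilities below are all invented - here is a value-iteration sketch where the state is the current treatment backlog and the action is whether to open an extra overtime slot:

```python
# Toy capacity-allocation MDP: state = backlog of treatment requests,
# action = run an extra overtime slot (1) or not (0). All numbers invented.
MAX_BACKLOG = 20
ARRIVALS = {0: 0.3, 1: 0.4, 2: 0.3}   # new requests per day and probability
BASE_CAP = 1                           # slots served per day without overtime
WAIT_COST = 1.0                        # cost per patient still waiting, per day
OVERTIME_COST = 3.0                    # cost of opening one extra slot
DISCOUNT = 0.95

def value_iteration(sweeps=200):
    V = [0.0] * (MAX_BACKLOG + 1)
    policy = [0] * (MAX_BACKLOG + 1)
    for _ in range(sweeps):
        newV = [0.0] * (MAX_BACKLOG + 1)
        for s in range(MAX_BACKLOG + 1):
            best = None
            for extra in (0, 1):                      # the two actions
                cap = BASE_CAP + extra
                cost = OVERTIME_COST * extra
                for a, p in ARRIVALS.items():         # expectation over arrivals
                    s2 = min(MAX_BACKLOG, max(0, s + a - cap))
                    cost += p * (WAIT_COST * s2 + DISCOUNT * V[s2])
                if best is None or cost < best:
                    best, policy[s] = cost, extra
            newV[s] = best
        V = newV
    return V, policy

V, policy = value_iteration()
print("overtime policy by backlog level:", policy)
```

Even this toy recovers the intuitive structure: overtime is worth its cost only once the backlog is large enough, which is the kind of threshold policy the real model searches for at far larger scale.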
The papers for the above two projects are available online at http://www.orincancercare.org/cihrteam/ if you wish to obtain more information.
Credits: These 3 talks were given at the INFORMS 2008 conference in Washington DC. The track sessions were TB21, TB34, and TB34. Speakers are Ali Vahit Esensoy, University of Toronto, Canada; Pablo Santibanez, Operations Research Scientist, British Columbia Cancer Agency, Canada; and Antoine Saure, PhD Student, Sauder School of Business, University of British Columbia, Canada. The talks were titled "Evaluation of Ambulance Transfers into UCCs to Improve Ambulance Availability & Reduce Offload Delay", "Reducing Wait Times & Improving Resource Utilization at the BC Cancer Agency’s Ambulatory Care Unit", and "Radiation Therapy Treatment Scheduling".
Friday, October 17, 2008
From the 70’s to now, we have been on a roller coaster ride with oil prices. In the 1970’s, amid the belief of an energy shortage, the National Energy Act was signed to protect the US from the overconsumption of oil. Tax credits were handed out to encourage people to convert gasoline cars to burn natural gas. Ethanol got its go-ahead around the same time. However, in 1986, oil prices crashed, causing policy makers and investors who had relied on conventional wisdom to withdraw. Then came 2004, when oil prices rose unexpectedly and the government had to update the fuel economy mandate to adjust to the rise. A similar rise and fall played out in the natural gas industry. The ups and downs of natural gas influenced the development of liquefied natural gas (LNG) terminals, the talks of an Alaska pipeline to serve the lower 48 states, the energy ties with the northern neighbor, Canada, and much more. In the US, the prices of oil and gas have a large impact on the country. When energy prices go through the roof, “all bets are off”, says Sharp, just like the stock market crash from which the country is now deeply suffering.
However, being open and frank, Sharp thinks analysis is powerful and important, but it is not the means to all ends. “It will help us organize and attack so many unknowns and uncertainties”, but persistence in getting the ideas across to politicians is crucial. Responding to an audience question, Sharp stressed the importance of identifying institutional connections. In the world of politics, the voice of an individual scientist may have some impact, but the collective agreement of a group of politicians will have far greater reach. “The most difficult thing in the congress is to get people to agree on something”, says Sharp. The scientist’s ability to communicate complex things simply, and the level of confidence and trust s/he can offer to the politician, is golden. Politicians need to find out who and what to trust, especially when reports are being thrown around to counter other reports. It is too tempting to just pick the easiest path or whatever supports one’s self-interest.
It is always a pity to see brilliant ideas sitting on the book shelves because of ineffective communication. Politicians are typically the most powerful figures in a country and can make the biggest impact. If Operations Researchers want to do good with good OR, then as Sharp suggests, identifying institutional connections will be the key.
Credits: The talk was given at the INFORMS 2008 conference in Washington DC as a plenary in the series Doing Good with Good OR.
Dr. Steven Cohen, Director at the Center for Financing, Access & Cost Trends at the Agency for Healthcare Research & Quality (AHRQ) in the Department of Health & Human Services, USA, shared with the audience the various topics that the Center has been working on.
The Center employs statisticians, economists, social scientists and clinicians to achieve their motto, “advancing excellence in healthcare”. They also monitor and analyze the current trends in healthcare, ranging from cost, to coverage, to access, and to quality of care. Some figures that were shared were:
- Every $1 out of $6 of US GDP (or 16%) goes to healthcare – the largest component of federal and state budgets
- Other western European nations, by contrast, spend less than 10% of GDP on healthcare.
- In 2006, $2.1 trillion was spent on healthcare. That is $7,026 per capita, which is a 6.7% increase over 2005.
- At this kind of growth, it is projected that $4.1 trillion will be spent in healthcare in 2016 (1/5 of GDP).
- 5% of the US population account for 50% of the healthcare expenditures.
- Prescribed medication expenditures almost doubled from 12% to 21% in ~10 years.
- The largest expenditure is on inpatients (over 30%).
- The second largest is on ambulatory care (over 20%).
- Chronic diseases (heart, cancer, trauma-related disorders, mental health, and pulmonary) account for a majority of the expenditures.
- Medical errors accounted for 44,000 avoidable deaths in hospitals a year.
- Americans are less healthy: 33% obesity rate and high rate of chronic diseases.
AHRQ aims to provide long-term, system-wide improvement of healthcare quality and safety. To provide policy makers with informed recommendations, surveys, simulations, forecasting models, and other OR tools are often employed to answer difficult questions. Through these methods, AHRQ is able to establish evidence and assess the risks associated with alternative care choices.
AHRQ’s focus on the efficient allocation of resources and the structuring of care processes to meet patient care needs aids policy makers in establishing the necessary high-level strategies and policies. Especially in dire times like these, issues of rationing become the center of discussion. It is AHRQ’s responsibility to have the right information to help policy makers make the right trade-offs.
Credits: The talk was given at the INFORMS 2008 conference in Washington DC as a plenary in the series Doing Good with Good OR.
Tuesday, October 14, 2008
Ever since blogs have existed, marketers have been nervous about the reputation of their products. Luckily for the IBMers, when the marketers at IBM are nervous about the client response to a product (i.e. Lotus), help is within reach from the IBM Research team. Marketers want to know: 1. What are the relevant blogs? 2. Who are the influencers? 3. What are they saying about us? 4. What are the emerging trends in the broad-topic space? Perlich’s team went about these four questions by starting with a small set of blogs identified by the marketers (the core “snowball” of blogs), then “rolling” the “snowball” twice to discover other blogs related to the core set (i.e. max 2 degrees of separation from the core). To find the authoritative sources in the snowball, Google’s PageRank algorithm came to the rescue. Using sentiment labeling, the team was able to say whether the overall message of a blog was positive or negative. Then, to allow useful interaction with and leveraging of the data by the users (i.e. marketers), a visual representation was rendered to show the general trend in the blogosphere about the product in question (see image). At that stage, marketers can dig into each blog identified as positive or negative, and the problem seems much more manageable.
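For readers unfamiliar with PageRank, here is a minimal power-iteration sketch over an invented four-blog "snowball" (the blog names and links are made up; the IBM team's real graph and implementation differ):

```python
# Nodes are blogs, edges are links discovered while "rolling the snowball".
links = {
    "core_blog":  ["review_a", "review_b"],
    "review_a":   ["core_blog"],
    "review_b":   ["core_blog", "review_a"],
    "aggregator": ["core_blog", "review_a", "review_b"],
}
nodes = list(links)
rank = {n: 1.0 / len(nodes) for n in nodes}   # start uniform
d = 0.85                                      # standard damping factor

for _ in range(50):                           # power iteration to convergence
    new = {n: (1 - d) / len(nodes) for n in nodes}
    for src, outs in links.items():
        for dst in outs:                      # share src's rank among its links
            new[dst] += d * rank[src] / len(outs)
    rank = new

best = max(rank, key=rank.get)
print("most authoritative:", best, round(rank[best], 3))
```

The heavily linked-to blog ends up with the highest rank while the aggregator, which links out but receives no links, ranks lowest - which is exactly the "authoritative source" signal the team needed.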
Das’ talk fits in rather nicely with Perlich’s, as he examines the trends of blog comment activity and Wikipedia edit histories to try to demonstrate the eventual convergence of these information sources to a stable state. The underlying assumptions are that the more edits a Wikipedia page has, the more reliable its information and hence the higher its quality; and that the higher a page’s quality, the less likely there will be further talk/edits on that page, because most of what needed to be said has already been said (aka the informational limit). Das obtained the data dump of all pages on Wikipedia in May 2008, and extracted all 15,000 pages (out of 13.5 million in total) that had over 1,000 edits. Using dynamic programming to model a stochastic process, Das was able to find a model for the edit rate of an aggregation of these highly edited Wikipedia pages. He then applied the same model to blog comment activity. In both cases, the model fit the data extremely well, and surprisingly the shape of the activity pattern over time looked very much alike between blog comments and Wikipedia page edits. An interesting inference made by Das was that people contribute less of their knowledge to Wikipedia pages than to blogs.
This is the beauty of Operations Research. It is a horizontal plane that can cut through and be applied to many sciences and industries. Aren’t you proud to be a dynamic Numerati?
Credits: The talk was given at the INFORMS 2008 conference in Washington DC. The track session was MA12 & MB12. Speakers are Claudia Perlich, T.J. Watson IBM Research, and Sanmay Das, Assistant Professor, Rensselaer Polytechnic Institute, Department of Computer Science. The talks were titled "BANTER: Anything Interesting Going On Out There in the Blogosphere?", and "Why Wikipedia Works: Modeling the Stability of Collective Information Update Processes".
Air traffic is getting more and more congested in Europe, and a solution is needed to unite the European sky under a single set of protocols for all EU countries. EUROCONTROL is in the position to do so. It is the European Organization for the Safety of Air Navigation, and it currently has 37 member states. As part of its modernization activities, EUROCONTROL needs to select a set of technological improvement projects from four major categories: network efficiency, airport operations, sector productivity, and safety net. For example, considering only a subset of 5 projects out of a list of 20+, each with 2 to 5 implementation options, there would be 300 possible combinations. Therefore, the question is which set of projects to select to satisfy the various objectives and constraints of multiple stakeholders, such as the airports, the airlines, society (the environment), and more. Grushka’s team from the London Business School was asked by EUROCONTROL to provide rigorous and transparent analysis of this problem by involving all stakeholders. The team used an integrative and iterative framework to approach the problem, with a mixed integer programming model to find the optimal combination of projects. As a result of the team’s effort, the EU may now use the same language to talk about uniting the European sky, with a common understanding of the problem.
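At a toy scale this kind of project-selection problem can even be brute-forced, which makes a nice illustration of the combinatorics. The five projects, their costs, and their benefit scores below are all invented (and option 0 means "don't do it"); the real EUROCONTROL model is a mixed integer program balancing multiple stakeholder objectives.

```python
from itertools import product

# Hypothetical projects: each has mutually exclusive (cost, benefit) options.
projects = {
    "network_efficiency":  [(0, 0), (4, 6), (7, 9)],
    "airport_ops":         [(0, 0), (3, 4), (5, 7), (8, 10)],
    "sector_productivity": [(0, 0), (2, 3), (6, 8)],
    "safety_net":          [(0, 0), (5, 9)],
    "data_link":           [(0, 0), (4, 5), (9, 11)],
}
BUDGET = 15

names = list(projects)
best_value, best_choice, n_combos = -1, None, 0
for choice in product(*(range(len(projects[n])) for n in names)):
    n_combos += 1
    cost = sum(projects[n][i][0] for n, i in zip(names, choice))
    value = sum(projects[n][i][1] for n, i in zip(names, choice))
    if cost <= BUDGET and value > best_value:   # feasible and better
        best_value, best_choice = value, choice

print(f"{n_combos} combinations enumerated; best benefit = {best_value}")
```

With 3×4×3×2×3 = 216 combinations, enumeration is instant; the point of the MIP formulation is that the real problem's combinations explode far past what any enumeration (or spreadsheet) could handle.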
The Ford team had a very different but extremely urgent problem to solve. They needed to figure out whether 2 of their automotive interior supply production facilities should be closed down and their work outsourced to other suppliers because of unprofitability and underutilization at the plants. The number one problem was time – 8 weeks was all they had. The optimization problem they faced, however, was extremely large. The mixed integer non-linear programming (MINLP) model had around 450,000 variables and 200,000 constraints. After removing the nonlinearity in the model, the resulting mixed integer programming (MIP) model had 4.3 million variables and 1.8 million constraints. Just imagine the data gathering process and the model formulation! What a nightmare. Luckily, the team had the unconditional support of the CEO and was able to obtain a complete set of data – 150,000 data points in the model. After 8 weeks of 20-hour days, the team was able to deliver a model to test out the what-if scenarios, thereby removing the subjective decision making that is so common in most enterprises. As a result, Ford will be able to maintain 1 facility and outsource only a certain percentage of the work, a saving of $50 million over five years compared to the alternative of outsourcing all the work.
The two projects were fascinating to listen to. They both showcased the importance of quantitative decision making in business. It will be a tough job to select a winner out of these two. Good luck to both teams!
Credits: The talks were given at the INFORMS 2008 conference in Washington DC. The track session was MC32. Speakers are: Yael Grushka-Cockayne, London Business School, Regent’s Park, London, United Kingdom, and Erica Klampfl, Technical Leader, Ford Research & Advanced Engineering, Systems Analytics & Env. Sciences Department. The talks, in the Daniel H. Wagner Prize Competition, were titled "Towards a Single European Sky" and "Using OR to Make Urgent Sourcing Decisions in a Distressed Supplier Environment".
Balinski describes the traditional voting/electoral system as trying to elect the Condorcet winner. Wikipedia says, “The Condorcet candidate or Condorcet winner of an election is the candidate who, when compared with every other candidate, is preferred by more voters. Informally, the Condorcet winner is the person who would win a two-candidate election against each of the other candidates”. However, such a winner may not exist because of Condorcet’s paradox and Arrow’s impossibility theorem. Besides, doesn’t it strike people as odd that in a world with a lot of gray area, and not just black or white, we are basically casting a yes/no vote for the most important person in a country? Balinski argues that we could do better with the majority judgment voting process.
The process would list all electoral candidates on the ballot and ask voters to rate each candidate by providing a tick mark in a grid, for example: excellent, very good, good, acceptable, poor, to reject. This experiment was conducted during the 2007 presidential election in France, and these ratings are a “common language” to all French voters, because the schools have always used them. Similar common languages include star ratings (1 to 5 stars) for movies and restaurants, and letter grades (A to F) in school. Summing or averaging the scores wouldn’t make sense, because the scale is not an interval measure, according to Balinski. Therefore, he proposes that the set of grades each candidate receives be ordered from best to worst, and the median grade taken (the 3rd of a total of 5 grades, for example, or the 4th of a total of 6). If candidates are tied in the first round, the method is repeated until the tie is broken, each time with the median grade removed from each tied candidate’s set of grades. The resulting grade becomes the candidate’s majority grade.
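The procedure is easy to express in code. Here is a Python sketch using the grade scale above, with invented ballots for two hypothetical candidates; note how the tie-break separates two candidates whose first-round majority grade is the same.

```python
# Grade scale from the majority judgment process, best to worst.
GRADES = ["excellent", "very good", "good", "acceptable", "poor", "to reject"]
ORDER = {g: i for i, g in enumerate(GRADES)}   # lower index = better grade

def majority_ranking(grades):
    """Successive majority grades: repeatedly take the median grade
    (the 3rd of 5, the 4th of 6, ...) and remove it. Comparing these
    sequences lexicographically ranks candidates (lower = better)."""
    g = sorted(grades, key=ORDER.get)           # best grade first
    seq = []
    while g:
        seq.append(ORDER[g.pop(len(g) // 2)])   # index 2 of 5, index 3 of 6
    return seq

# Invented ballots: both candidates' first-round majority grade is "good",
# so the tie-break (remove the median, take the next) decides it.
ballots = {
    "candidate_A": ["excellent", "good", "good", "acceptable", "poor"],
    "candidate_B": ["very good", "very good", "good", "poor", "to reject"],
}
winner = min(ballots, key=lambda c: majority_ranking(ballots[c]))
print("winner:", winner)
```

Here candidate_A wins: after the shared "good" median is removed, A's next median is "acceptable" while B's is "poor".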
It should be noted that for such a method a common language of grades is essential, or there would not exist a collective decision. However, Balinski claims that this voting system produces a result much more reflective of the voters’ desires. So much so that in the experiment conducted in 3 of Orsay’s voting bureaux, French voters were able to tell who each of the top 3 presidential candidates was simply by looking at their final majority grades.
When studies show that a third of voters do not have a single preferred presidential candidate, one questions whether it is right to force voters to vote for only one candidate on a ballot. And if such existing systems do not work well, shouldn’t we be inclined to change and try out new methods? After all, we are an adaptive and ever-changing society (even though human nature is afraid of change). But hey, if it doesn’t work, we can always chuck it away. What’s there to lose?
Credits: The talk was given at the INFORMS 2008 conference in Washington DC as a keynote talk. Speaker is Michel Balinski, a distinguished, multi-award winning professor and now the Directeur de Recherche de classe exceptionnelle, Ecole Polytechnique and CNRS, Paris. The talk was titled "One-Vote, One-Value: The Majority Judgment".
Sunday, October 12, 2008
When physicians aren’t surprised to see patients suffering heart attacks die while waiting for a bed in the ER (2006 news from an Illinois emergency room), when patients leave the ER without being seen out of frustration with the long wait, and when ambulances have to be diverted because the ER is full, one needs to ask why. Why are the ERs always full when you need them? The number one reason is a lack of inpatient beds – the bottleneck of the healthcare system – which causes the inability to move patients from the ER to an inpatient unit and thereby release ER beds. So, why are there not enough beds? The answer is relatively simple. Healthcare is expensive, and when governments need to stay on budget, healthcare gets cut. Healthcare also happens to be largely funded by government subsidization, so such cuts have reduced the number of hospitals from 7000 to 5000 (from 1980 to now), and the number of beds available from 435 to 269 per 100,000 persons in the US. Yet politicians cry that there are “too many” beds. Why, you ask? Because important decision makers are looking at occupancy level (or bed utilization ratio), which for ER beds currently resides at around 66% (the target is 85%). Utilization looks good, but it is the wrong measuring stick. Every OR person knows that variability means extra capacity (more than the average would suggest) is needed to deal with times of heavy demand at a reasonable level of customer satisfaction. Healthcare is one such process, where demand varies greatly with the day of the week and the hour of the day. Aiming at and planning for averages is not going to work. A study by Green showed that if New York state hospitals wanted less than a 10% chance of not having a bed available within 1 hour for an ER patient, then 58% of the hospitals would be too small.
If they aim for a 5% chance of not having a bed available, then 74% of hospitals would be too small; and if the aim were 1%, 90% of hospitals would be too small. The good news, however, is that performance could be improved without increasing staffing levels at the hospitals, as Green has shown in her work.
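To make the variability point concrete, here is a minimal Erlang loss calculation in Python. This is my own sketch, not a model from Green's talk, and the 20-bed unit with an offered load of 15 beds is a hypothetical example: even at a comfortable-looking 75% average utilization, a nontrivial share of arrivals finds every bed occupied.

```python
def erlang_b(beds: int, offered_load: float) -> float:
    """Blocking probability of an M/M/c/c loss system (Erlang B),
    computed with the standard numerically stable recurrence."""
    block = 1.0
    for c in range(1, beds + 1):
        block = offered_load * block / (c + offered_load * block)
    return block

# Hypothetical 20-bed unit facing an average demand of 15 occupied beds:
# utilization is 75%, yet variability means some arrivals still find
# every bed taken.
print(round(erlang_b(20, 15.0), 3))
```

Adding beds drives the blocking probability down quickly, which is exactly why planning to the average occupancy target understates the capacity needed.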
Moving on to physician appointments and nursing care waits: these two factors both contribute to the overcrowding of ERs. When physicians are asked to treat patients in a just-in-time fashion, the major issue is that no one knows how to balance physician capacity with patient demand. When physicians have too many patients, not everybody can be seen in time, and patients cancel appointments if the wait is long. When patients cancel appointments, they also ask for another one, because they still need to be seen, which adds right back to the physicians' backlog. Studies show that the shorter the backlog, the higher the physician's utilization. Green's project of recommending the right number of patients per doctor has helped over 500 clinics and organizations find that balance of capacity and demand.
It is widely acknowledged that there is a shortage of nurses, and that a lack of nursing care is directly correlated with worsening patient safety (mortality included) and patient satisfaction. California has passed legislation to ensure a 1:6 nurse-to-patient ratio in general medical-surgical wards. It is good to see action, but is this policy more disruptive than helpful? According to Green, varying unit sizes, the level of intensity of care for different patient types, and the length of stay (and therefore turnover of patients) at different hospitals can all mean different optimal nurse-to-patient ratios. In larger units, the California legislation could result in overstaffing at a cost of $300,000 per year per unit, says Green. That is expensive.
Overall, healthcare certainly has a lot of problems. To name a few: increasing costs, quality problems, and access problems. Operations research is needed in this field. It could be a matter of life and death.
Credits: The talk was given at the INFORMS 2008 conference in Washington DC. The track session was SC37. Speaker is Linda Green, Armand G. Erpf Professor, Columbia Business School. The talk was titled "Tutorial: Using Operations Research to Reduce Delays for Health Care".
Grimes took the audience on a journey through traditional BI work, where data miners take raw CSV (comma-separated values) files and turn them into relational databases, which then get displayed as fancy monitoring dashboards in analytics tools – all very pretty and organized. However, most of the data BI deals with is "unstructured" data, where information hides in pictures and graphs, or in documents stuffed with text. According to Grimes, 80% of enterprise information is in "unstructured" form. To process raw text information, Grimes says a 3-tiered approach is needed: statistical/lexical, linguistic, and structural. Statistical techniques help cluster and classify the text for ease of search (try ranks.nl). Syntactic analysis from linguistics helps with sentence structures to provide relational information between clusters of words (try Machinese from connexor.eu to form a sentence tree). Finally, content analysis helps extract the right kind of data by tagging words and phrases for building relational databases and predictive models (try GATE from gate.ac.uk).
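As a toy illustration of the statistical/lexical tier (my own example, not Grimes's), the simplest way to surface candidate keywords in raw text is plain term-frequency counting after stripping stopwords:

```python
import re
from collections import Counter

# A crude pass at the statistical tier: rank terms by raw frequency.
text = ("Operations research helps business; business intelligence "
        "needs operations research.")
words = re.findall(r"[a-z]+", text.lower())
stopwords = {"helps", "needs"}  # a tiny, hypothetical stopword list
counts = Counter(w for w in words if w not in stopwords)
print(counts.most_common(3))
```

Real tools like the ones Grimes mentions layer far more sophistication on top, but frequency counts of this kind are still the starting point for clustering and search ranking.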
Elder’s list of top 5 data mining mistakes includes:
1. Focus (only) on the training data
2. Rely on one technique
3. Listen (only) to data (not applying common sense to processing data)
4. Accept leaks from future
5. Believe your model is the best model (don’t be an artist and fall in love with your art)
In particular, Elder shared with the audience the biggest secret weapon of data mining: combining different techniques that each do well in one or two categories gives much better results. See his Figure 1 (5 algorithms on 6 datasets) and Figure 2 (essentially every bundling method improves performance). Figure 3 (median and mean error reduced with each stage of combination) illustrates the same power of combining methods on another example from his talk.
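The bundling idea is easy to demonstrate in a few lines of Python. This is a synthetic sketch of my own, not Elder's data: averaging three independently noisy predictors cuts the squared error well below that of any single one.

```python
import random

random.seed(0)
truth = [random.uniform(0.0, 10.0) for _ in range(1000)]

def noisy_model(values, spread):
    """A stand-in for a fitted model: the truth plus independent noise."""
    return [v + random.gauss(0.0, spread) for v in values]

# Three hypothetical models with independent errors of similar size.
models = [noisy_model(truth, 2.0) for _ in range(3)]

def mse(preds):
    return sum((p - t) ** 2 for p, t in zip(preds, truth)) / len(truth)

# Bundle by simple averaging.
bundled = [sum(ps) / len(ps) for ps in zip(*models)]
print([round(mse(m), 2) for m in models], "->", round(mse(bundled), 2))
```

With independent errors, averaging k models divides the error variance by roughly k, which is the statistical engine behind the bundling results in Elder's figures.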
Data and text mining is still in its early stages, and the "miners" have a lot of challenges to overcome. However, given the richness of information floating around on the internet and hiding in thick-bound books in the library, text mining could revolutionize the business intelligence field.
Credits: The talk was given at the INFORMS 2008 conference in Washington DC. The track session was SB13. Speakers are: Seth Grimes, Intelligent Enterprise, B-eye network; and John Elder, Chief Scientist, Elder Research, Inc. The talk was titled "Panel Discussion: Challenges Facing Data & Text Miners in 2008 and Beyond".
Hollywood movies have widely varying box office revenues, some much more profitable than others. It is therefore crucial to forecast movie demand decay patterns for financing, contracting, and general planning purposes. The forecast needs to be made long before the movie's release, since planning happens much further in advance, sometimes years earlier. Most movies earn the majority of their revenue in the first 10 weeks after opening, so the model forecasts the demand decay patterns of Hollywood movies over those first 10 weeks. HSX data has been shown to provide additional information for revenue forecasting purposes. Virtual stock markets (VSM), a showcase of the wisdom of crowds, are no strangers to forecasting complicated outcomes, ranging from election results and NBA championships to Al Gore's 2007 Nobel Peace Prize. The results produced by VSMs can be very impressive and accurate. For example, one political VSM was said to be 75% more accurate than political polls.
Foutz, Jank and James identified 3 principal components to be used alongside the trading history of HSX: average/longevity, fast decay, and sleeper. Longevity captures the average box office revenue over the lifetime of a movie whose trend is relatively smooth (a linearly decreasing trend in the log-transformed revenue figures), such as Batman Begins. Fast decay captures movies that have great openings but quickly die out, such as Anchorman. Sleeper describes movies that have a slow start but, with word of mouth (for example), pick up momentum in the later weeks after opening, such as Monster or My Big Fat Greek Wedding.
The authors tested 5 different models of weekly revenue regression over a period of 10 weeks. Each model uses some combination (or none) of the three principal components and the trading histories from HSX. The results indicate that movies with higher levels of trading activity on HSX at a very early stage (weeks in advance) are more likely to have higher opening-weekend box office revenue. How could this finding be used for more meaningful purposes than betting with your friends? For example, theatre owners could better allocate screens and profit sharing, while movie producers could design different contracts for the slow burners than for the fast ones. If you are a movie buff, maybe it's time to get on the HSX for some trading fun instead of crying over the financial stock markets.
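The "longevity" pattern (a linearly decreasing trend in log revenue) can be sketched in a few lines of Python. The numbers below are synthetic, not the authors' data: fit log(revenue) against week by ordinary least squares and read the weekly retention factor off the slope.

```python
import math

weeks = list(range(1, 11))
revenue = [50.0 * 0.7 ** (w - 1) for w in weeks]  # synthetic smooth decay

# Ordinary least squares on log(revenue) = a + b * week
y = [math.log(r) for r in revenue]
n = len(weeks)
xbar = sum(weeks) / n
ybar = sum(y) / n
b = (sum((x - xbar) * (yi - ybar) for x, yi in zip(weeks, y))
     / sum((x - xbar) ** 2 for x in weeks))
a = ybar - b * xbar
print(round(math.exp(b), 3))  # weekly retention factor; recovers 0.7 here
```

Real movies deviate from a clean log-linear decay, which is exactly why the authors needed the fast-decay and sleeper components (and the HSX trading signal) on top of this baseline.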
Credits: The talk was given at the INFORMS (Institute For Operations Research and Management Science) 2008 conference in Washington DC, in session SA68, by Natasha Z Foutz, Assistant Professor of Marketing, from the McIntire School of Commerce, University of Virginia. The title of the talk was "Forecasting Movie Demand Decay Via Functional Models of Prediction Markets".
Thursday, September 4, 2008
I think it is great that operations research is getting some publicity with The Numerati. However, there can be such a thing as bad publicity. Is it just me, or does it seem to everybody (OR folks) that this book casts us in a rather negative light? The general notion already is that the numbers guys are not to be trusted (at least in certain health care places). Now this book may be saying how smart we are and all that, but with a bit of an evil undertone. Just the title itself, "how they will get my number and yours", paints us as some kind of math hackers out to steal people's information, doesn't it?
I have mixed feelings about this book, but I am curious to read it. I just hope we won't scare anybody more than we already do when we OR people walk down a hospital aisle.
Feel free to voice your thoughts on The Numerati.
Sunday, August 31, 2008
Saturday, August 9, 2008
- The big health authorities: Fraser Health Authorities, Coastal Health Authorities, BC Cancer Agency, etc.
- and maybe some engineering firms, such as Sandwell Engineering
- data skills
- consulting & communication skills (written & verbal)
- change management
- empathy - put yourself in other's shoes to help them understand your view
Wednesday, June 25, 2008
Beste Kucukyazici presented a study of stroke patient data to see if a decision model could be derived to systematically decide on commencing warfarin treatment for stroke patients and on its intensity. Now my question is: will OR decision models take a bigger and bigger foothold in the medical arena as we start to gather more useful patient data in well-planned studies? Medical doctors tend to argue that each patient is a different case and needs to be examined on an individual basis. However, if a model such as Kucukyazici's can prove the accuracy of its decisions on real patient data, it would probably start to weaken the doctors' argument and favour a more systematic approach. At the very least, such models might help reduce the complexity of doctors' decision making, or even reduce the chances of human error in diagnosis.
Atrial fibrillation, a cardiac arrhythmia particularly common among the elderly, is one of the major independent risk factors for stroke. Several randomized controlled trials have shown that long-term antithrombotic therapy with warfarin significantly reduces the risk of stroke; however, it also increases the risk of suffering a major bleed. Given the potential benefits and risks of warfarin treatment, clinicians face a two-fold decision: (i) whether to start the therapy, and (ii) the intensity of warfarin use. The objective of this study is to develop an analytical framework for designing optimal antithrombotic therapy with a patient-centered approach. The approach seeks to create a rational framework for evaluating these complex medical decisions by incorporating complex probabilistic data into informed decision making, identifying the factors influencing such decisions, and permitting explicit quantitative comparison of the benefits and risks of different therapies.
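As a toy sketch of how such a framework weighs stroke risk against bleeding risk, here is a bare-bones expected-utility comparison. All probabilities and utilities below are invented for illustration only; they are not from Kucukyazici's study.

```python
# Toy expected-utility comparison for the two-fold therapy decision.
# All numbers are made up for illustration, NOT from the study.
def expected_utility(p_stroke, p_bleed, u_well=1.0, u_stroke=0.3, u_bleed=0.6):
    """One-period expected utility over three mutually exclusive outcomes."""
    p_well = 1.0 - p_stroke - p_bleed
    return p_well * u_well + p_stroke * u_stroke + p_bleed * u_bleed

# Warfarin lowers the stroke probability but raises the bleed probability.
no_therapy = expected_utility(p_stroke=0.08, p_bleed=0.01)
warfarin = expected_utility(p_stroke=0.03, p_bleed=0.04)
print("start therapy" if warfarin > no_therapy else "withhold therapy")
```

A real patient-centered model would make these probabilities functions of the individual's risk factors and the warfarin intensity, which is where the optimization comes in.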
The invited audience includes current and incoming Master of Management in Operations Research (MM in OR) students & alumni of the Centre for Operations Excellence (COE), Sauder School of Business, University of British Columbia.
Wondering what career path in OR you would like to choose?
Wondering how OR consulting is done?
Want to meet the guy who started AnalysisWorks – one of the only OR consulting firms in Vancouver?
If you happen to be in Vancouver, then join us on Friday, July 18, 2:30-3:30pm at the Penthouse of the Henry Angus building on UBC campus.
Jason Goto, BASc Engineering, MSc Management Science: President
Jason Goto has consulted in a wide variety of projects involving the application of analytic data-driven methods. He has worked with major health care organizations, market research firms, manufacturers, and other private and public organizations. He specializes in the effective application of Operations Research and Management Science techniques including scenario analysis, statistics, forecasting, simulation, and optimization. (From AnalysisWorks.net)
Monday, June 2, 2008
Professor Carter is one of Canada's leading experts in healthcare and operations research, with over 17 years of experience in OR applications in healthcare. He currently leads the Centre for Research in Healthcare Engineering in Mechanical and Industrial Engineering at the University of Toronto. Click here for more information on Professor Mike Carter.
Mike has been very kind to allow me to publish his talk here on ThinkOR.org. Here are some key points to take away:
- Healthcare is North America's single largest industry; Canada spent $142 billion CDN in 2005; US spent $2 trillion.
- Canada's per-capita spending ($3,326 USD) was half of US ($6,401 USD), and this is how it's been growing:
- US & Canada are about the same in terms of quality of health care, access, efficiency, and equity (based on the Commonwealth Fund 2004 International Health Policy Survey)
- A new way of looking at the healthcare system's stakeholders (no wonder it's difficult to make decisions in a hospital):
- Challenges in healthcare system can be viewed as operations research challenges:
- Patient flow - supply chain
- Surgical wait list - better scheduling
- Infectious diseases - logistics
- Health human resources - forecasting
Mike also demonstrated the application of O.R. techniques in his own practice:
- Ontario Wait List Management
- Colorectal cancer screening
- Cancer treatment centre locations
- Health Human Resource Modelling
Thank you, Mike, for allowing me to write about your talk. It was delightful to see OR in action in Canadian healthcare. We look forward to seeing the estimated 30% of healthcare spending that is potentially wasted shrink fast.
Wednesday, April 23, 2008
Traffic on the railway nearly doubled between 1970 and 2006, but its timetable had not changed, leading to commuting problems...
Restructuring increased the percentage of trains arriving within three minutes of schedule from 84.8 percent in 2006 to 87 percent in 2007, while the number of passengers increased 2 percent in the first six months of 2007.
Profit, meanwhile, rose $60 million and the changes made additional capacity available...
Thursday, April 17, 2008
- which brand should be used for new products
- how to choose suppliers to procure and source materials
- how to use forecasting to deal with the factors impacting international trade and finance
- how much inventory to store and where
- how to keep and attract workforce talents for the company
It is obvious that OR applications in businesses can make a company very powerful, but it takes the OR talents who can talk business to do it. To quote Brenda Dietrich, an IBM fellow at IBM’s Watson Research Center:
There's a gap between the math professionals and the nonmath executives in many companies. The companies who have people who can walk into a business meeting and tell executives how to use OR tools are the ones who've got the edge. Deployment is no longer done just by the math people; analytics has become much more usable by a broader set of people within an organization.
Click here to view the full article.
See how businesses like UPS and Procter & Gamble are using OR to solve their important business problems and making informed business decisions.
Click here for the video.
Wednesday, April 16, 2008
- make your theme clear and consistent
- create a headline that sets the direction for your meeting
- provide the outline
- open & close each section with a clear transition
- make it easy for your listeners to follow your story
- demonstrate enthusiasm
- wow your audience
- sell an experience
- make numbers and stats meaningful
- analogies help connect the dots for your audience
- make it visual
- paint a simple picture that doesn't overwhelm
- give 'em a show
- identify your memorable moment and build up to it
- rehearse rehearse rehearse
- "And one last thing..." give your audience an added bonus to walk away with
- a strong opening, a strong closure, and an encore with "one more thing"
- 10 hours to rehearse for a new 30-minute presentation. It may sound like a lot, but if you want Jobs-style drama, you need to know your material cold.
- A Vision: If your topic can’t be summed up in 10 words or less, it’s too broad.
- A Clear Structure: An organized speech is easier for the audience to follow.
- Visuals: Eye-catching graphics form the basis of the most compelling slides.
- Dramatic Flair: A few time-tested storytelling devices help build excitement.
Click here to view the video.
Click here to see the full article.
Warning to studio readers — two marketing professors at the Wharton School could very well put you out of a job.
Actually, Z. John Zhang and Jehoshua Eliashberg (plus a bevy of co-authors) claim that their goal is merely to augment your special talents, not replace them. But the paper they published in Management Science magazine in June called 'From Story Line to Box Office: A New Approach for Green-Lighting Movie Scripts' establishes a statistical model for analyzing screenplays and predicting whether a resulting movie will be successful at the box office. Which, if accurate, would render your silly personal judgments obsolete.
Greenlighting, or putting a screenplay into active production, relies mostly on the subjective intuition of readers and executives (plus a studio calculus derived from the budget and the past record of the film's genre and potential cast). It's a system that can produce, shall we say, spotty results. Zhang and Eliashberg hope to take some of the guesswork out of it. Their model combines textual analysis (paragraph construction, frequency and distribution of words, etc.) with structural analysis (a clear premise, a surprise ending, and the like) using 22 yes-or-no queries that are posed and then cross-referenced.
— Los Angeles Times
Tuesday, April 1, 2008
forecasting "experts" don't do much better than novices – or, for that matter, guessing – when it comes to predicting the future.

I've been taking a forecasting class and find the topic rather fascinating. However, over the course of my study of different forecasting methods, it seems that in a lot of cases the best forecast is the naive forecast – simply using the current value as the prediction.
For more information on the study mentioned above: http://blogs.wsj.com/numbersguy/grading-the-forecasts-of-experts-182/?mod=hpp_us_blogs
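The naive forecast's strength is easy to see on a random walk, where the last observation is in fact the best available predictor. A quick sketch of my own (synthetic data, not from the study above):

```python
import random

random.seed(1)

# A random walk: each step adds independent noise to the last value.
series = [0.0]
for _ in range(500):
    series.append(series[-1] + random.gauss(0, 1))

# Naive forecast: predict each value with the previous one.
naive_err = [abs(series[t] - series[t - 1]) for t in range(1, len(series))]
# Alternative: predict every value with the long-run mean.
mean = sum(series) / len(series)
mean_err = [abs(series[t] - mean) for t in range(1, len(series))]

print(round(sum(naive_err) / len(naive_err), 2),
      round(sum(mean_err) / len(mean_err), 2))
```

On a series like this, the naive forecast's mean absolute error is far smaller, which is why it makes such a humbling benchmark for the "experts".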
Saturday, March 29, 2008
As an Operations Research professional or student, what comes to mind when someone says
- "we do cross docking"
- "our system is a pull system"
- "our system is 'just in time'"
Communication and clarification – don't assume; always confirm that what you think you know is actually true. That is the lesson I learned.
"What is the best time for me to have a child?"
Here, "best" refers to a trade-off between the emotional and physical health problems associated with having a child too late and the impact on career of having a child too early.
The answer? Mathematical models come to the rescue. A model developed by Ralph L. Keeney and Dinah A. Vernik from the Fuqua School of Business at Duke University uses decision analysis, one of the many operations research techniques out there, to help women decide the optimal time to bear their first child.
In the case of a woman who does not feel that motherhood will be a significant barrier to her pursuit of a particular milestone, the model suggests attempting to conceive a first child at a younger age. Specifically, the model can calculate for any specific situation the level of anticipated negative career impact at which an individual woman may wish to postpone having a child.
The example of a 20-year-old college student illustrates the situation when a woman claims she does not want to have a child until she reaches a certain age, say 35 years old. The model suggests that, especially in cases where both family life and career are important to the woman, having a child much earlier may be a better long-term solution than waiting until she is more established in her career.
For more information on the model, please visit their website:
Friday, March 7, 2008
According to a recent report from Boston-based AMR Research Inc., companies that excel in supply-chain operations perform better in almost every financial measure of success. Where supply-chain excellence improves demand-forecast accuracy, companies have a 5% higher profit margin, 15% less inventory, up to 17% stronger “perfect order” ratings, and 35% shorter cash-to-cash cycle times than their peers. Companies with higher perfect-order performance have higher earnings per share, a better return on assets, and higher profit margins — roughly 1% higher for every three percentage-point improvement in perfect orders.
“The basis of competition for winning companies in today’s economy is supply-chain superiority,” says Kevin O’Marah, vice president of research at AMR Research. “These companies understand that value-chain performance translates to productivity and market-share leadership. They also understand that supply-chain leadership means more than just low costs and efficiency — it requires a superior ability to shape and respond to shifts in demand with innovative products and services.”
The above is a highlight of the supply chain importance in today's businesses, referenced from an article in BusinessWeek. Here's a link to the article.
Tuesday, February 26, 2008
Short and sweet, and to the point!
Supply Chain Management is "making businesses more efficient, and helping businesses create a more efficient world".
I stumbled upon this website today, and I really liked the tag line "Dedicated to the art and science of moving goods to market". That is what Supply Chain Management is all about.
Check it out if you've got a moment. It has all kinds of articles on SCM.
Friday, February 22, 2008
As noted in other medical journals, there is a shift of responsibility for in-hospital patients from their primary care physicians (PCPs) to hospitalists.
- It allows the primary care physicians to see more patients in their offices
- Hospitalists are better equipped to handle important functions such as transitioning patients between healthcare settings. Such transfers require coordinating tests, lab work, and medicines, and conferring with other doctors, specialists, social workers, and case managers.
At the moment, Wikipedia notes that this type of medical practice has so far not extended beyond the US, although articles suggest that Canada is also starting to see more and more hospitalists. More and more community hospitals and major academic centers employ them. This could be because the lack of family physicians has created many 'orphaned' patients who arrive at hospitals without a PCP. On those occasions, the hospitalists have to act as their most responsible physician.
Thursday, February 21, 2008
Here is something that most people can relate to: Starbucks.
Being one of the largest and best-known brands in the world in the food and beverage industry, Starbucks relies heavily on operations research as well. Here is a quote from an article I found online:
Retiring executive vice president Ted Garcia joined Starbucks in 1995 as senior vice president, Logistics & Manufacturing. Through his prior experience developing manufacturing and logistics networks, as well as information systems and technology platforms, he was able to establish programs for Starbucks that have delivered savings totaling more than $250 million. These savings were achieved through consolidated global purchasing leverage, conversion cost reductions, reduced logistics costs and lower inventory levels. Additionally, Garcia was responsible for the development of three new roasting plants in York, Pa., Carson Valley, Nev. and Amsterdam, The Netherlands. Under Garcia's leadership, Starbucks became known as an established leader in supply chain activities.
Here is a little profile of Dorothy Kim - Starbucks Executive Vice President of Supply Chain Operations:
She is responsible for the day-to-day activities of SCCO, which include the global strategic business management of manufacturing, engineering, purchasing, distribution, planning and transportation; supply chain integrated systems; inventory management and worldwide sourcing of coffee. She will report to Jim Donald, ceo designate.
Kim joined Starbucks in 1995, where she gained four years of retail planning and operations experience prior to transitioning to SCCO. She held the positions of vice president, SCCO Logistics, and vice president, SCCO Finance & Systems, before her promotion to senior vice president, Global Logistics & Procurement. Kim was instrumental in leading the development of Starbucks SCCO Systems Vision and Master Project Plan.
Info from http://findarticles.com/p/articles/mi_m0EIN/is_2004_Dec_14/ai_n8570893
Friday, February 15, 2008
Professor Martin Puterman's research is featured in Globe and Mail on Feb 7, 2008;
What's the most valid predictor of a golfer's performance? Read more...
Professor Martin Puterman and his research assistant, Stefan Wittman, at the University of British Columbia's Sauder School of Business recently completed a study that helps answer these questions. Meanwhile, U.S. Ryder Cup captain Paul Azinger made some decisions that bear on this discussion.
Thursday, February 14, 2008
Are you an Operations Research practitioner?
OR Specialist, OR Consultant, OR Analyst... all the OR People, list yourself on the OR directory!
Tuesday, February 12, 2008
- Healthcare: in Vancouver, BC, we have the Fraser Health Authority and the Vancouver Coastal Health Authority; in Victoria, BC, there is the Vancouver Island Health Authority. They use OR to aid decision making in personnel scheduling, facility planning, resource management in general, and process design
- Manufacturing: auto, aviation...
- Finance: OR used for forecasting, portfolio management, personal consumption management
- Service & Retail: restaurants, franchise, retail stores, etc. use OR for pricing, inventory management, process design
- Military: of course this is the birthplace of OR
- Shipping & Transportation
- Waste Management: routing systems
Sunday, February 10, 2008
an interdisciplinary branch of applied mathematics which uses methods like mathematical modeling, statistics, and algorithms to arrive at optimal or good decisions in complex problems, maximizing some objective function (profit, assembly line speed, crop yield, bandwidth, etc.) or minimizing it (cost, loss, risk, etc.). The eventual intention behind using operations research is to elicit the best possible solution to a problem mathematically, one which improves or optimizes the performance of the system.
Another definition of OR, given at one of the Plenaries at the last INFORMS Meeting (Seattle 2007).
A path was defined to unify Industrial Engineering, Operations Research, Operations Management, etc. as "Operations Engineering". The preferred name was "Operations Science and Engineering", but I like the idea of having a distinct name separating what is research from what is practice and application.
In a nutshell, operations research (O.R.) is the discipline of applying advanced analytical methods to help make better decisions.

To define our profession is one of the most difficult things – so I've found.
By using techniques such as mathematical modeling to analyze complex situations, operations research gives executives the power to make more effective decisions and build more productive systems based on:
- More complete data
- Consideration of all available options
- Careful predictions of outcomes and estimates of risk
- The latest decision tools and techniques
What is your definition? How do you think we should be "marketed"? Post your comment.
ThinkOR welcomes all operations research professionals, academics, and any interested parties to come together to discuss and post your views, opinions, and ideas on operations research.
It is a gathering of the minds to talk about OR methods, applications, new ideas, and more. It serves as an exchange corner for OR professionals, and an information source for the general public.
At this moment in time, there is very little information available for the general public to even know what OR is. We hope that with this little space on the web, ThinkOR.org will be able to provide this kind of information from a very personal point of view, through the participation of all interested parties.