
Monday, August 12, 2013

Value chain trumps good design - ColaLife

Babies in Africa suffer and die from diarrhoea, but it's easily treatable with medicine that costs pennies. The problem is getting the medicine into mothers' hands - a supply chain problem in a rural and sparsely populated area.

Here comes ColaLife: Turning profits into healthy babies.

Inventing medicine packaging to fit into the gaps between bottles in Coca-Cola crates is ingenious, but understanding the value chain, so that all hands that touch the medicine's supply chain have an incentive to ensure its stock and flow, is even more important.

If there is only one message to take away, I would choose:
"What's in it for me?" 
Always ask this question to make sure there is a hard incentive for every player to participate. Free give-aways are often not valued, resulting in poorly managed resources and relatively low success rates. Ample training and advertising, for awareness and effective usage, are also key to product / technology adoption.

Monday, February 1, 2010

Healthcare system improvement project management: making a big team work

It's tough chairing meetings, tougher chairing a big meeting (10-15 people), and tougher yet chairing a big meeting that's supposed to last an 8-hour day, one day a week, for 6 months. A lot of planning goes into making such a day work with team members ranging from the analytical kind to the "feeling" kind, from the surgical kind to the managerial kind. I'm slowly getting the hang of it, having done it for a couple of months now. The following is a lot of common sense, but if one hasn't had the chance to go through this kind of work with big teams, the approach may not seem so obvious. Thought I'd share for whatever it's worth.

  • Make sure everyone is doing something. A feeling of usefulness in the group keeps people engaged; without it, people disengage.


  • Assuming the natural progression of a project is from problem discovery to analysis to design and implementation, and assuming that everyone in the team needs to participate in all phases, keep reminding yourself that as soon as we get through to design, things will become more exciting. The analysis phase is not everyone's cup of tea, even though geeks like me find it the most interesting.


  • Spend the time to create a big poster out of a roll of parchment paper. It becomes a living document of all work done on the project, reminding the team in every meeting of the key aims and the work accomplished so far. It is a pat on the back for work well done, as well as a constant pointer to the team's direction. Sometimes one can't see the forest for the trees.


  • Big team, big scope - a recipe for getting lost or losing sight easily. Remind the team of the aims frequently, and relate how the current tasks contribute to those aims.


  • Identify one lead for each main task to be done in the implementation phase. Give team members enough time to develop their own plans for how to implement, and have them write the documents themselves to instill ownership from the start (do not use admin resources for this). Sometimes it takes 2-3 days just to write and re-write the implementation plans, but the time is worthwhile, not because we need a perfect plan (that is unrealistic), but because it forces people to think through all the nitty-gritty details of how to get things done and how they would get around specific change management problems. Provide a good example from a colleague of theirs (real examples from real people = trust), but encourage and give them room to be creative. Then everyone on the team should peer review each other's plans against specific review criteria.


  • Once you have all of the above done, engagement should be pretty high, as a healthy amount of sweat and tears will have gone into the implementation plans. I bet anything that you won't be able to hold people back from acting on those implementation plans.

There you have it: a much happier and more motivated team. There is no sure recipe, and this isn't one by any means, but it is working for me so far.

Friday, January 29, 2010

CORU Clinical Operational Research Unit - London health care OR team

CORU - the Clinical Operational Research Unit, based at UCL (University College London) - is a London health care OR team: the first I've come across specialising in OR for health care since I moved across the pond from Canada last year.


Needless to say, I was very happy to meet up with Martin Utley, Director of CORU, last week. Thanks for a great chat, Martin. I'm genuinely excited to link up with the CORU group, as I had not met any OR groups in health in the UK until now. Reading up on some publications that Martin sent over - I do miss the academic side of Operational Research.


It was said that OR used to exist quite healthily in the UK's health sector (a system very close to the Canadian one). After some reform / re-organisation within the National Health Service (NHS), most of the OR groups within the NHS more or less disappeared. What a pity.

Friday, November 6, 2009

Healthcare system improvement project management: how not to manage projects


Lately, I am finding it difficult not to do the work myself on the projects I'm leading/managing. The excuse I've been using is "well, it's just easier to do it myself than to ask someone else for it". However, I end up paying for it with far too many late nights working around the clock. I'll be the first to admit: this is the wrong way to manage projects. I end up feeling burned out and tired doing work that should have been done by others in the team, leaving me without enough energy or time to actually 'manage' the projects. Ultimately, if I continued this way, it would be bad for both me and the projects.


However, I used to lead projects like this before, and it worked charmingly. What changed?

Well...

Here I talked about the Master of Management in Operations Research program that trained me as an OR professional (great program, by the way). During this master's, each student is the project lead on a 4-6 month project with a real company, doing real work. The students are fully capable of carrying out all tasks within the project, but have data analysts to help out, because there is usually just too much analysis work for one person. A project lead in this scenario is both the leader and largely the doer - which is what I'm used to doing at work, both before and after the master's program.

Why isn't it working now? In my humble opinion, leading 2 projects with relatively large project teams is quite a busy job. One simply doesn't have the time to both lead and do. I did both, so I paid for it. Then I learned. I guess in this case, it really is easier overall to ask someone else to do it than to do it myself.

Got any tips to share with me? Comment here or email me at dawen [at] thinkor [dot] org

Sunday, October 25, 2009

Healthcare system improvement project management: what's the right balance?

I now live in London, UK, and work for a rather famous hospital, renowned internationally for its medical reputation. I am the project manager on 2 system improvement projects. Such projects are also labelled "transformation" or "modernisation" projects, depending on where you work. The idea is to work with doctors, nurses, managers, clerical and administrative staff, as well as patient families - the people who live and breathe the hospital - so that this group takes ownership of the problems and the solutions. We meet one day a week for a full day, and project managers like me and lean improvement facilitators are thrown into the mix to help the projects move along. The key is implementation, which may make some external management consultants jealous, since they almost never get to implementation. It is a luxury to see one's work flourish.

Great idea, isn't it?

Is the team too big?
  • 20% of 8-12 people's time is huge! On paper, the staff are 'back-filled' for that 20% of work, but in reality, finding the right people with the right skill mix to cover 1 day's work is quite difficult. These people often end up working 120%. Commitment to the team starts high but then slacks off a bit.
  • With the amount of time invested, people outside the group have very high expectations. They want to see things churned out from the team quickly, and often ask "when are you going to deliver what?". In reality, such projects have a research nature to them. You may have the best of project plans, but the research will take as long as it takes before you can move on to solutions.
  • Keeping 8-12 people 'entertained' and interested in the same topic is challenging. Some people are very detail-oriented. Some want to talk big concepts. Some just want to get into the issues and start tearing them apart. Keeping everybody happy is never easy.
  • Big groups also suffer from democracy. It takes time for everybody to have their say, and one person can dominate the whole discussion and shut others down. Even good facilitators find this difficult.
But is the team too big? I've definitely experienced the same group with fewer people, and we were very productive on those small-group days. True, everybody on the team is there because of their function within the hospital, but perhaps they don't all need to be there every week.

Ideas on how to tackle the big group:
  • We are now trying to break the team into smaller groups to be more efficient and to change the group dynamic. Each sub-group also has a sub-lead, so more people can feel true ownership within the team. We then reconvene after half a day to update each other on progress. It seems to be working so far.
  • Send team members out into the hospital to observe, collect information, or shadow someone, and then report back. It breaks the 'classroom' feeling of sitting in a meeting room.
  • Of course, there are many facilitative ways to deal with it as well. :)

I find these projects are shaping up to involve far more talking than actual doing. That is especially frustrating for the ones who joined up to do the work. I've definitely run successful projects in the past that didn't involve such an elaborate set-up. This way of working should make implementation easier. I am waiting and seeing.

Sunday, September 13, 2009

Introducing variability, flow and processes to anyone with a funny video

I'm leading two variability & flow management projects at the hospital right now, and the terms "variability" and "flow" are certainly not something the medics hear much about. I needed a quick way of explaining what the projects are about, what these terms mean, and what kind of problems we are trying to solve. A colleague suggested this video from the ever-popular "I Love Lucy" TV series, the chocolate factory episode. It does a wonderful job of making people laugh, while acting out some strong parallels to a process and the variability and flow within it. Take a look at the video (it's a funny one!) and read on for the parallels to the operation of a hospital. The doctors, nurses and patients on my team all found the video hilarious, and it made clear to them what we are trying to do in the variability & flow management projects.



The parallels:
  • Process: the chocolates can be patients coming into the hospital 'conveyor belt'. Lucy and her friend Ethel can be the nurses, for example (or the various clerks, doctors, pharmacists, radiographers, etc.), handling the patients, 'dressing' them up or giving them care to make them better so they can go on to the next hospital professionals - say, the pharmacists giving out medications in the next room down the conveyor belt. The patient travelling along the conveyor belt is a process. Similarly, Lucy and Ethel picking up a chocolate from the conveyor belt, taking the wrapping paper, wrapping the chocolate nicely, placing the wrapped chocolate back onto the conveyor belt, and returning to position ready for the next chocolate, is a process. Lucy and Ethel are the 'servers' within the process. The things they do to the chocolate are 'steps' within the process. The girls feeding chocolates onto the conveyor belt in the previous room are the servers of the process upstream of Lucy & Ethel's wrapping process. Similarly, the girls boxing the chocolates in the next room are the servers of the downstream process.
  • Flow: the chocolates moving through Lucy & Ethel's wrapping process are a flow.
  • Variability: the rate at which chocolates are placed onto the conveyor belt is a source of variability, because that rate keeps changing; so is the speed at which Lucy & Ethel wrap the chocolates, as they have very different wrapping styles. The result is a variable rate of wrapped chocolates flowing out of the wrapping process.
  • Queuing & waits: when Lucy & Ethel fall behind and start collecting the chocolates in front of them (and in their hats!) to wrap later, they are queuing the chocolates, and those chocolates are experiencing 'waits'. (See the small simulation sketch after these lists for how variability alone creates queues and waits.)
  • Mis-communication: when the meanie supervisor lady shouted to the upstream girls to "let it roll" and nothing happened, so she had to go to the previous room to sort it out - that's mis-communication, or signal failure. :)
The video also shows some classic examples of problems around processes:
  • Isolated processes and working in silos – what is going on 'upstream' and 'downstream' is absolutely unknown to Lucy & Ethel.
  • Lack of an issue escalation procedure - when the chocolates come too fast for Lucy & Ethel to handle, they have no way of letting the upstream or the manager know (and of course, the meanie supervisor lady didn't allow them to leave a single chocolate behind).
  • Performance management - the meanie supervisor lady did not have realistic expectations of Lucy & Ethel's performance, or maybe she simply had no clue about the variability of the sometimes very high demand placed on Lucy & Ethel from upstream.
  • Reactionary management - ordering the upstream to feed faster because she walked in and saw no chocolates on Lucy & Ethel's belt is very reactionary. She made the decision based on a single observation / data point, and did not ask any questions about why things were that way.
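
For the analytically minded, the queuing point is easy to demonstrate in a few lines of code. Below is a minimal sketch (my own illustration, not part of the video or the project; all numbers are made up): a single wrapping station that keeps up with the belt on average, yet variability alone produces long waits.

```python
import random

random.seed(42)

# A single "wrapping" station: chocolates arrive at a variable rate and
# wrapping times vary. The station keeps up *on average* (0.9s of work per
# 1.0s between arrivals), yet queues and waits still build up.

n = 10_000                 # chocolates to simulate
mean_arrival_gap = 1.0     # average seconds between chocolates on the belt
mean_wrap_time = 0.9       # average seconds to wrap one chocolate

t = 0.0                    # arrival clock
free_at = 0.0              # time the wrapper next becomes free
waits = []

for _ in range(n):
    t += random.expovariate(1.0 / mean_arrival_gap)  # variable arrival gaps
    start = max(t, free_at)                          # queue if wrapper is busy
    waits.append(start - t)                          # time spent waiting
    free_at = start + random.expovariate(1.0 / mean_wrap_time)

waits.sort()
print(f"average wait:         {sum(waits) / n:.1f}s")
print(f"90th percentile wait: {waits[int(0.9 * n)]:.1f}s")
```

Even at 90% average utilisation, the typical wait comes out many times longer than the wrapping time itself - exactly the effect Lucy & Ethel (and hospital processes running 'hot') experience.
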
I hope you find the video useful in your work as well. I'm sure you can draw parallels to other industries besides health care - please feel free to share them with me. Things are often best explained with humour.

Tuesday, October 28, 2008

Approaches to Reporting Access to Diagnostic Imaging in Health Care

In Canada's publicly funded and administered health care system, the absence of market forces makes access to services a chief concern. Thus reporting, synthesizing and acting on data regarding access is critical. In the context of diagnostic imaging, an area I have recently had experience with, access is typically discussed in terms of waiting times or waiting lists. The issue of waiting times in imaging is, like so much in health care, a complex one. Multiple exam types requiring varying specialty resources are performed on patients with a kaleidoscope of urgency levels. Typically, data exists at a patient-by-patient level; the challenge is how to aggregate the information so that waiting times can be reported for the benefit of both the decision maker and the public. The detail-oriented operations research practitioner is tempted to over-deliver on the level of analysis when presenting these metrics, and we must seek to trim it back while still including the critical information that impacts what are ultimately life and death decisions. Below I hope to combine a survey of the current state of public information on CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) waiting times in Canada with a discussion of the nuances of the metrics chosen.

Beginning with the worst example I found in my research, we look at the Nova Scotia Department of Health website. Waiting times are reported by authority and by facility - important data for individuals seeking to balance transportation with access. However, it's how the wait times are measured that worries me most. Waiting time is defined as the number of calendar days from the day the request arrives to the next available day with three open appointments. I have found that this is the traditional manner in which department supervisors like to define waiting lists, but at a management level it's embarrassingly simplistic. At the time of writing, the wait time at Dartmouth General Hospital for a CT scan is 86 days. I guarantee you that not every patient is waiting 86 days for an appointment. Not even close. Neither is the average 86 days, nor the median. The question of urgency requires that we discuss the level of access for varying urgencies. Additionally, three available appointments 86 days from now say nothing about what day my schedule and the hospital's schedule will actually allow for an appointment. If there's that much wrong with this measurement method, why do they use it? The simple fact is that it is very easy to implement. In health care, where good data can be oh so lacking, this way of measuring "waiting lists" is cheap and easy: no patient data is required; one simply calls up the area supervisor or a booking clerk and asks. So hats off to Nova Scotia for doing something rather than nothing, which is indeed better than some of the provinces, but there's much work to be done.
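
To see why this metric drifts so far from patient experience, here is a small sketch of how it could be computed (my reading of the definition above, not Nova Scotia's actual code; the schedule data is made up).

```python
# open_slots[d] = number of unbooked appointments d calendar days from today.

def ns_wait_time(open_slots, threshold=3):
    """Days until the first day with at least `threshold` open appointments."""
    for day, slots in enumerate(open_slots):
        if slots >= threshold:
            return day
    return None  # no such day within the booking horizon

# A nearly full schedule: stray open slots here and there, then wide-open
# days far in the future. The metric reports the far-future day, even though
# urgent patients are squeezed into the stray slots much sooner.
open_slots = [0, 1, 0, 2, 1] + [0] * 80 + [5, 6, 7]
print(ns_wait_time(open_slots))  # -> 85
```

A single number like 85 (or 86) days says nothing about the distribution of what patients actually experience, which is the whole point of the criticism above.
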

Next, we'll look at the Manitoba Health Wait Time Information website. Again we have data reported by health authority and facility. Here we see the "Estimated Maximum Wait Time", measured in weeks. The site says, "Diagnostic wait times are reported as estimated maximum wait times rather than averages or medians. In most cases patients typically wait much less than the reported wait time; very few patients may wait longer." If this is true, and it is, then this is pretty useless information, isn't it? Indeed, I am reconsidering my accusation of Nova Scotia being the worst of the lot. If this figure represents something like the 90th or 95th percentile, then I apologize because, as I discuss later, that is a decent figure to report. However, it is not explicitly described as such.

Heading west to Alberta, we visit the Alberta Waitlist Registry. Here we can essentially see the waiting time distribution of most patients scanned in MRI or CT across the province in the last 90 days. The site reports the "median" (50th) and "majority" (90th) percentiles of waiting time, and then the % of patients served within banded waiting times, from under 3 weeks up to over 18 months. Two key elements are lacking in this data. First, both day patients and inpatients are included. This means that the patient waiting months to get an MRI on their knee and the patient waiting hours to get one on their head are treated as equal. Patients admitted to the hospital and outpatients experience waiting times on time scales of different orders of magnitude and should not be considered together. The percentage of patients seen in less than 3 weeks must therefore include many inpatients, and thus overstates the true level of service. The second key element is the notion of priority. Once again, for an individual in the population looking for information about how long they might wait, or for a manager/politician looking to quantify the level-of-care consequences of current access levels, this data isn't very useful because it lacks priority. If urgent patients are being served at the median waiting time, this shows significant problems in the system, but without data reported by urgency, we can only guess that this is being done well. As someone who has seen it from the inside, I would NOT be confident that it is.

Now I return to what westerners would rather not admit is the heart of Canada - Ontario - and the Ontario Ministry of Health and Long-Term Care website. This site measures wait times as the time between request and completion. It reports the 90th percentile wait times in days, by facility and provincially, and calls it the "point at which 9 out of 10 patients have had their exam." The data excludes inpatients and urgent outpatients scanned the same day, addressing a critical issue I had with the Alberta data. Priorities are lacking, but with a little digging you can find the province's targets by priority, so there is, perhaps, hope. Reporting the 90th percentile seems like good practice to me. With the funky distributions we see when measuring waiting times, averages are certainly of no use. The median isn't of great interest either, because it is not an indication of what any one individual's experience will be. This leaves the 90th percentile, which expresses what might be called a "reasonable worst case scenario".

Finally, I turn to the organization whose explicit business is communicating complex issues to the public: the Canadian Broadcasting Corporation. Their CBC News interactive map from November 2006 assigned letter grades from A to F, converted from the percentage of the population treated within benchmark. Who knows whether this glosses over the lack of priority data or includes the percentage that met the benchmark for each priority, but it's a start. The letter grades given were: BC N/A, AB D, SK N/A, MN F, ON C, QC N/A, NB N/A, NS N/A, PEI F, NF N/A. With over half not reporting, there wasn't much they could do.

So what have we learned from this survey? We have certainly learned that the writer has a love of detail and is dissatisfied with each province that omits any. This is, as discussed in the introduction, natural for an operations research practitioner. If I were advising someone on the development of diagnostic access-to-care metrics, I would tell them this: (1) Focus on the patient experience. Averages and medians don't tell me what my experience might be; 90th percentiles do a much better job. (2) Focus on the context. Waiting times of inpatients are in a different universe from those of outpatients and should be treated as such. Waiting times of urgent cases vs. routine cases bear different significance and should be similarly separated.
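
To make (1) and (2) concrete, here is a minimal sketch of what such reporting could look like given patient-level data. The field names and numbers are hypothetical; the point is that the 90th percentile is computed separately for each setting-and-priority stratum, never pooled.

```python
# Hypothetical patient-level records: (setting, priority, wait_days).
# In practice these would come from the booking system.
records = [
    ("outpatient", "routine", 62), ("outpatient", "routine", 45),
    ("outpatient", "routine", 88), ("outpatient", "urgent", 6),
    ("outpatient", "urgent", 9),   ("inpatient",  "urgent", 1),
    ("inpatient",  "urgent", 2),   ("inpatient",  "routine", 4),
]

def p90(waits):
    """90th-percentile wait via simple nearest-rank (adequate for reporting)."""
    s = sorted(waits)
    return s[int(0.9 * (len(s) - 1))]

# Stratify before aggregating: one metric per (setting, priority) group.
groups = {}
for setting, priority, wait in records:
    groups.setdefault((setting, priority), []).append(wait)

for (setting, priority), waits in sorted(groups.items()):
    print(f"{setting:>10} / {priority:<7}: 90th percentile = {p90(waits)} days")
```

With real data the record list would hold thousands of rows, but the shape of the report is the same: a "reasonable worst case" number per patient context, rather than one pooled figure.
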

Friday, February 22, 2008

OR in Hospitals: Hospitalist

Hospitalists are doctors, physician assistants or nurse practitioners whose primary professional focus is hospital medicine.

As noted in various medical journals, there is a shift of responsibility for in-hospital patients from their primary care physicians (PCPs) to hospitalists.
  • It allows the primary care physicians to see more patients in their offices
  • The hospitalists are better equipped to handle important functions such as transitioning patients between healthcare settings. Such transitions require coordinating tests, lab work and medicines, and conferring with other doctors, specialists, social workers and case managers.
What does that spell to you? Operations Management! 

Specialization of workers to increase efficiency and to meet high demand.

At the time of writing, Wikipedia notes that this type of medical practice has so far not extended much beyond the US, although articles suggest that Canada is also starting to see more and more hospitalists. More and more community hospitals and major academic centres are noted to employ them. This could be because the shortage of family physicians has created a lot of 'orphaned' patients who arrive at hospitals without a PCP; on those occasions, the hospitalists have to act as their most responsible physician.