In Canada's publicly funded and administered health care system, the absence of market forces makes access to services a chief concern, so reporting, synthesizing, and acting on data about access is critical. In the context of diagnostic imaging, an area I have recently had experience with, access is typically discussed in terms of waiting times or waiting lists. The issue of waiting times in imaging is, like so much in health care, a complex one: multiple exam types requiring varying specialty resources are performed on patients with a kaleidoscope of urgency levels. Data typically exist at the patient-by-patient level; the challenge is aggregating the information so that waiting times can be reported for the benefit of both the decision maker and the public. The detail-oriented operations research practitioner is tempted to over-deliver on the level of analysis when presenting these metrics, and we must trim it back while still retaining the critical information that shapes what are ultimately life-and-death decisions. Below I combine a survey of the current state of public information on CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) waiting times in Canada with a discussion of the nuances of the metrics chosen.
Beginning with the worst example I encountered in my research, we look at the Nova Scotia Department of Health website. Waiting times are reported by authority and by facility, important data for individuals seeking to balance transportation against access. However, it is how the wait times are measured that worries me most. Waiting time is defined as the number of calendar days from the day the request arrives to the next available day with three open appointments. I have found this to be the traditional way department supervisors like to define waiting lists, but at a management level it is embarrassingly simplistic. At the time of writing, the wait time at Dartmouth General Hospital for a CT scan is 86 days. I guarantee you that not every patient is waiting 86 days for an appointment. Not even close. Neither is the average 86 days, nor the median. Urgency matters too: a single number tells us nothing about the level of access afforded to patients at different urgency levels. And three open appointments 86 days from now say nothing about when my schedule and the hospital's will actually allow for a booking. If there is that much wrong with this measurement method, why use it? Because it is cheap and easy. In health care, where good data can be oh so lacking, this way of measuring "waiting lists" requires no patient data at all; one simply calls up the area supervisor or a booking clerk and asks. So hats off to Nova Scotia for doing something rather than nothing, which is indeed more than some provinces manage, but there is much work to be done.
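To make the simplicity concrete, here is a minimal sketch of how such a figure might be produced; the `third_next_available` function and the toy schedule are my own invention, not Nova Scotia's published method. Notice that no patient ever appears in the calculation.

```python
from datetime import date, timedelta

def third_next_available(open_slots_by_day, today):
    """Calendar days from today to the first day with >= 3 open appointment slots."""
    day = today
    for _ in range(366):  # look ahead at most a year
        if open_slots_by_day.get(day, 0) >= 3:
            return (day - today).days
        day += timedelta(days=1)
    raise ValueError("no day with three open slots within a year")

# Toy schedule: fully booked until a day 86 days out, when three slots open.
today = date(2008, 1, 1)
schedule = {today + timedelta(days=86): 3}
print(third_next_available(schedule, today))  # -> 86
```

Everything about the actual distribution of patient waits is invisible to this number.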
Next, we look at the Manitoba Health Wait Time Information website. Again we have data reported by health authority and facility. Here we see an "Estimated Maximum Wait Time" reported in weeks. The site says, "Diagnostic wait times are reported as estimated maximum wait times rather than averages or medians. In most cases patients typically wait much less than the reported wait time; very few patients may wait longer." If this is true, and it is, then this is pretty useless information, isn't it? Indeed, I am reconsidering my accusation that Nova Scotia is the worst of the lot. If this figure represents something like the 90th or 95th percentile then I apologize, because, as I discuss later, that is a decent figure to report. However, it is not explicitly described as such.
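Why is a bare maximum so uninformative? With the right-skewed distributions typical of waiting times, the extreme value is driven by a handful of outliers and says little about anyone else's experience. A toy illustration, with waits drawn from a lognormal distribution of my own choosing:

```python
import math
import random

random.seed(1)
# Hypothetical outpatient waits in days: mostly short, with a long right tail.
waits = sorted(random.lognormvariate(3.0, 0.8) for _ in range(500))

def nearest_rank(sorted_xs, p):
    """Smallest value with at least p percent of the data at or below it."""
    return sorted_xs[math.ceil(p / 100 * len(sorted_xs)) - 1]

print(f"maximum:         {waits[-1]:6.1f}")           # driven by one extreme case
print(f"95th percentile: {nearest_rank(waits, 95):6.1f}")
print(f"median:          {nearest_rank(waits, 50):6.1f}")
```

On samples like this the maximum sits far above the 95th percentile, which itself sits well above the median; reporting only the top of that chain describes almost nobody.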
Heading west to Alberta, we visit the Alberta Waitlist Registry. Here we can essentially see the waiting-time distribution of most patients scanned in MRI or CT across the province in the last 90 days. The site reports the "median" (50th) and "majority" (90th) percentiles of waiting time. It goes on to report the percentage of patients served within various intervals, from under 3 weeks to over 18 months. Two key elements are lacking in this data. First, both inpatients and outpatients are included. This means the patient waiting months for an MRI on their knee and the patient waiting hours for one on their head are treated as equals. Patients admitted to hospital and outpatients experience waiting times on time scales of different orders of magnitude and should not be considered together. The percentage of patients seen in less than 3 weeks must therefore include many inpatients, and thus overstates the true level of service for outpatients. The other missing element is the notion of priority. Once again, for an individual in the population wondering how long they might wait, or for a manager or politician looking to quantify the level-of-care consequences of current access levels, this data is not very useful because it lacks priority. If urgent patients are being served at the median waiting time, that signals significant problems in the system, but without data reported by urgency we can only guess that this is being handled well. As someone who has seen it from the inside, I would NOT be confident that it is.
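To see how pooling inpatients and outpatients flatters the numbers, consider a small invented example; the waits below are hypothetical, chosen only to reflect the very different time scales involved:

```python
# Hypothetical waits in days: inpatients wait hours, outpatients weeks to months.
inpatient_waits = [0.1, 0.2, 0.2, 0.3, 0.5, 1.0]
outpatient_waits = [20, 35, 45, 60, 75, 90, 120, 180]

pooled = inpatient_waits + outpatient_waits
print(f"pooled within 3 weeks:      {sum(w <= 21 for w in pooled) / len(pooled):.0%}")
print(f"outpatients within 3 weeks: {sum(w <= 21 for w in outpatient_waits) / len(outpatient_waits):.0%}")
```

Pooled, half of these patients appear to be seen within three weeks; among the outpatients alone, it is roughly one in eight. Same data, very different story.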
Now I return to what westerners would rather not admit is the heart of Canada: Ontario, and the Ontario Ministry of Health and Long-Term Care website. This site measures wait times as the time between request and completion. It reports 90th percentile wait times in days, by facility and provincially, and calls it the "point at which 9 out of 10 patients have had their exam." The data excludes inpatients and urgent outpatients scanned the same day, addressing a critical issue I had with the Alberta data. Priorities are lacking, but with a little digging you can find the province's targets by priority, so there is, perhaps, hope. Reporting the 90th percentile seems like good practice to me. With the funky distributions we see when measuring waiting times, averages are certainly of no use. The median is not of great interest either, because it is no indication of what any one individual's experience will be. That leaves the 90th percentile, which expresses what might be called a "reasonable worst-case scenario".
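A small worked example of why the 90th percentile earns its keep; the ten waits below are invented, but the skew is typical:

```python
import statistics

# Hypothetical request-to-completion waits (days) for ten completed exams.
waits = sorted([5, 8, 12, 15, 18, 22, 30, 41, 55, 110])

p90 = waits[int(0.9 * len(waits)) - 1]  # nearest rank: 9 of 10 done by this day
print("mean:           ", statistics.mean(waits))    # 31.6, dragged up by the tail
print("median:         ", statistics.median(waits))  # 20.0, the typical patient
print("90th percentile:", p90)                       # 55, a reasonable worst case
```

The mean is inflated by the one very long wait, the median describes the typical patient but bounds no one's experience, and the 90th percentile tells a prospective patient a wait they are unlikely to exceed.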
Finally, I turn to the organization whose explicit business is communicating complex issues to the public: the Canadian Broadcasting Corporation. Their CBC News interactive map from November 2006 assigned letter grades from A to F, converted from the percentage of the population treated within benchmark. Who knows whether this glosses over the lack of priority data or whether it reflects the percentage meeting the benchmark for each priority, but it is a start. The letter grades given were: BC N/A, AB D, SK N/A, MB F, ON C, QC N/A, NB N/A, NS N/A, PEI F, NL N/A. With over half the provinces not reporting, there wasn't much they could do.
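Since CBC did not, to my knowledge, publish the exact conversion it used, here is a purely hypothetical sketch of what such a grading scheme might look like; the cut-points are my own invention:

```python
def letter_grade(pct_within_benchmark):
    """Map the share of patients treated within benchmark to a letter grade.

    These cut-points are illustrative only; the actual conversion behind
    the CBC map is not published alongside it.
    """
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if pct_within_benchmark >= cutoff:
            return grade
    return "F"

print(letter_grade(75))  # -> "C"
```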
So what have we learned from this survey? Well, we have certainly learned that the writer has a love of detail and is dissatisfied with each province that omits any. This is, as discussed in the introduction, natural for an operations research practitioner. If I were advising someone on the development of diagnostic access-to-care metrics, I would tell them this: (1) Focus on the patient experience. Averages and medians don't tell me what my experience might be; 90th percentiles do a much better job. (2) Focus on the context. Waiting times for an inpatient are in a different universe from those for an outpatient and should be treated as such. Waiting times for urgent cases and routine cases bear different significance and should likewise be reported separately. A sketch of what such a report might look like follows.
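Putting both recommendations together, here is a minimal sketch of the kind of report I have in mind; the records, field names, and waits are invented for illustration:

```python
import math
from collections import defaultdict

def p90(xs):
    """Nearest-rank 90th percentile: 9 of 10 waits fall at or below this value."""
    xs = sorted(xs)
    return xs[math.ceil(0.9 * len(xs)) - 1]

# Hypothetical records: (patient class, priority, wait in days).
records = [
    ("outpatient", "routine", 62), ("outpatient", "routine", 88),
    ("outpatient", "routine", 120), ("outpatient", "urgent", 4),
    ("outpatient", "urgent", 7), ("inpatient", "routine", 2),
    ("inpatient", "urgent", 0.3), ("inpatient", "urgent", 0.5),
]

# Stratify before aggregating: one 90th percentile per (class, priority) cell.
groups = defaultdict(list)
for klass, priority, wait in records:
    groups[(klass, priority)].append(wait)

for (klass, priority), waits in sorted(groups.items()):
    print(f"{klass:>10} / {priority:<7}: 90th percentile {p90(waits):5.1f} days")
```

Each cell of the report answers the question a patient, or a manager, actually has: for someone like me, with this urgency, what is a reasonable worst case?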