July 2, 2016

The Limits of Randomized Controlled Trials: Who Agrees to be Recruited?

Posted in research at 10:30 am by mariawolters

Randomised controlled trials (RCTs) of interventions are a gold standard source of evidence in medicine, as people like Ben Goldacre have argued repeatedly. Because people are allocated to receive the intervention at random, this should eliminate many of the biases that come from people self-selecting for interventions they would like.

But RCTs are vulnerable to another sort of bias – the decision whether to take part in the trial at all. The study I am discussing here, by Rogers and collaborators, takes a very thorough look at why older people did or did not take part in a trial of an intervention designed to get older people to walk more.

Study participants were recruited from three general practices in relatively affluent parts of England (Oxfordshire and Berkshire). Potential participants were identified from the general practitioners’ records. The general practitioners filtered out anyone they deemed unsuitable for the intervention, and invitations to the remaining people were sent out through the practices.

This means that while the researchers never saw the names and addresses of the non-participants, they still had access to some basic demographic information that allowed them to compare who did and did not show interest in the trial: age, gender, whether people were invited on their own or as a couple, and the socioeconomic deprivation index of the area they lived in – but not the area itself.

988 people were contacted initially. Everyone had three options: take part in the trial, complete a survey about why they chose not to take part, or not respond at all. 298 people (30.2%, or roughly three in ten) agreed to participate, and 690 were not interested. Of those 690, 183 (26.5%, or roughly one in four) returned the survey, and 77 of the 183 (42%) agreed to be contacted further about their reasons for not participating. Rogers then interviewed 15 of these people herself; the interviews stopped after 15 because no new insights emerged.
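
To keep the recruitment funnel straight, here is a quick back-of-the-envelope check of those proportions – a minimal Python sketch using the counts reported above (the variable names are mine):

```python
# Recruitment funnel from Rogers et al. (2014), as reported above.
invited = 988
participated = 298
declined = invited - participated   # 690
returned_survey = 183               # of the 690 who declined
agreed_to_contact = 77              # of the 183 who returned the survey
interviewed = 15                    # stopped once no new insights emerged

print(f"participation rate:   {participated / invited:.1%}")               # 30.2%
print(f"survey return rate:   {returned_survey / declined:.1%}")           # 26.5%
print(f"contact consent rate: {agreed_to_contact / returned_survey:.1%}")  # 42.1%
```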

Instead of discussing the complex pattern of results that emerged from the study, I would like to highlight two findings that I consider to be the most interesting.

Finding 1: The people who don’t respond at all are very different from the people who will return your non-participation survey.

Table 1 of the paper shows the overall demographic differences between participants and non-participants, while Table 2 compares participants with only those non-participants who returned the survey. The pattern that emerges from Table 1 is that people are less likely to take part if they are male and if they live in a deprived area; age and whether they were invited as a couple did not matter. Table 2, on the other hand, shows no difference at all on any of these four metrics.

Finding 2: Taking part in a trial is hard work for participants. 

While the most common reason people cited for not taking part was that they were already physically active (67.3% of the 183 who returned the non-participation survey), the second most common was that they simply didn’t have the time (44%).

The qualitative interviews provide an insight into the demands that taking part in the trial would place on participants. They would have to

  • find time in lives that were already full of family commitments and activities
  • stay with the trial for three months
  • walk regularly in the dark winter
  • look after an accelerometer device to measure physical activity
  • walk regardless of other health issues such as chronic pain, depression, or knee problems
  • change their existing habits and routines

Conclusion

The people who ended up taking part in the trial were not only wealthier and more likely to be female, but also more likely to be able to organize their lives around increased physical activity.

What does that mean for clinical practice? While it appears to be very easy to tell people to just be more active, the recruitment patterns for this trial indicate that those who might need help the most don’t necessarily contribute to the evidence base that doctors are told to rely on.


Rogers, A., Harris, T., Victor, C., Woodcock, A., Limb, E., Kerry, S., … Cook, D. G. (2014). Which older people decline participation in a primary care trial of physical activity and why: insights from a mixed methods approach. BMC Geriatrics, 14(1), 46. http://doi.org/10.1186/1471-2318-14-46


How Can We Design Assistive Technology that Helps People with Early-Onset Dementia?

Posted in research at 9:18 am by mariawolters

Living with dementia can be hard for the person with dementia and for the people who care for them. Good support can make life a lot easier, and create space for moments of contentment, joy, and happiness.

In the past decades, assistive technology developers have sought to provide part of this support through specialized technology. Just as a prosthetic leg can help amputees walk, developers have created prosthetic memory solutions that fill the gaps in a person’s own memory.

However, finding the package of solutions and services that is best for the person with dementia and their carers can be very difficult. Often, these services are put together by occupational therapists.

In the paper I am discussing here, three occupational therapists – academics and practitioners from the North of Norway – Cathrine Arntzen, Torhild Holthe, and Rita Jentoft, reflect on the process of creating a technology solution that works. They focus on people who are diagnosed with dementia relatively early in their lives, in their fifties and sixties. These are baby boomers who tend to be far more familiar with technology than people in their eighties and nineties, who are more likely to be diagnosed with dementia.

To give you an idea of the complexity of these solutions: the assistive technology introduced, which is documented in Table 1 of the paper, ranged from a simple digital calendar to a package of four devices, and included both specialized solutions, such as a reminder connected to the timer of the coffee machine, and non-technological solutions, such as whiteboards.

Methodology

For their study, they used a longitudinal, qualitative design. This means that they followed 12 people and their carers for a year after the first home visit, at which their needs were assessed. They talked to their participants every three months, reviewing issues that had come up in past interviews and exploring new issues that had arisen. They also took notes of their own observations.

Qualitative data such as interviews are notoriously difficult to analyse. Each analyst approaches the text with their own preconceptions and ideas. Therefore, texts are often analysed by two to three people over several iterations, comparing and contrasting their findings, to ensure that their interpretation is grounded in what the participants told them.

As a result of this analysis, patterns and themes emerge, as well as individual experiences that highlight wider issues. There are many ways to ensure that such findings can be useful in different contexts. In this study, Arntzen, Holthe, and Jentoft interpreted their findings in light of a particular theory of lived experience, phenomenology.

Findings

Arntzen, Holthe, and Jentoft identified five elements of a successful assistive technology. As I list them, I will comment on each from my own experience of working in this field.

1 The technology has to address an actual need, which can be practical, emotional, or about the way people relate to each other.

Comment: This means that careful initial assessment is important, and default packages are likely to fail.

2 The technology has to fit in with people’s established habits and problem solving strategies, because they reflect how a person thinks about and relates to the world.

Comment: If a piece of assistive technology is introduced because it provides useful data to central services, even though it would require people with dementia and their carers to rethink the way they organize their lives, it is highly likely to fail.

3 The technology needs to be reliable and trustworthy, and people need to feel good about it.

Comment: This means that assistive technology needs to be well designed and tested, requiring high quality software engineering, and supported by qualified engineers who can intervene quickly in case of malfunctions.

4 The technology needs to be user friendly, adaptable, and easy to manage.

Comment: Ideally, all technology would be like that, but all too often, technology is designed primarily to provide data, to enforce standard procedures, and to enforce a predictable life with no room for spontaneity. This has been a problem for a long time, and it is typical of the conflicts between stakeholders (people with dementia, who want to live their own lives as they used to; carers; social care; health care; policy).

5 The technology needs to interest and engage the family carer. It is likely that the family carer will be the one to look after the technology, keep it up to date, make sure it works, and use the more complex functionalities. If they like it, and if it engages them, it is more likely to be used. A case in point was the digital calendar, which proved very popular with the carers, less so with the people with dementia.

Comment: Family carers are sometimes overlooked in the work on supporting people with dementia to live in the community, as there is a strong emphasis on helping those who do not have family live independently – those are the people who require more social care time. Carers are also often assumed to be the person’s children, while spouses are assumed to be technophobic because they are older. However, all of the people with dementia in this study were cared for by their spouse, who was in a similar age group. This means that we need to make sure technology for people with dementia is also accessible to older people without cognitive impairment.

Conclusion

While much of what Arntzen, Holthe, and Jentoft found will not be new to people who work in the field, I still think that their paper is a salutary reminder of just how important adaptable, flexible technology solutions are. Fixed packages of standard technology may be easier to maintain and to prescribe, but will they pay for themselves in actual daily use? If Arntzen, Holthe, and Jentoft are right, this is highly unlikely.

Arntzen, C., Holthe, T., & Jentoft, R. (2016). Tracing the successful incorporation of assistive technology into everyday life for younger people with dementia and family carers. Dementia, 15(4), 646–662. http://doi.org/10.1177/1471301214532263

November 27, 2015

Designing Reminders that Older People Can Remember #BAAConf

Posted in research at 12:22 pm by mariawolters

Reminders only work if you can hear them – as I found out to my cost this morning. I had been looking forward to a scrumptious Yorkshire breakfast, served from 7am to 10am, only to wake up at 10.17am.

Why did I sleep through my trusty phone alarm? Because my phone hadn’t been charging; I had forgotten to switch on the socket into which I had plugged it. (In the UK, we need to switch on sockets before they will provide electricity).

Now imagine that you can no longer hear the alarms you set not because you failed to charge your phone, but because your hearing is going. What do you do?

This is what my talk at this year’s Conference of the British Academy of Audiology is all about. (The slides are on Slideshare, as always.)

I discuss a few strategies that I have discovered when working with older people as part of my research into human-computer interaction.

All of these ideas are inspired by what older people have told me and my colleagues, or by what we have seen them do. This is perhaps the most important point of my talk. People are experts in what works for them. Very often, all it takes is a bit of active listening to uncover a solution that builds on their existing habits, their routines, and the layout of the spaces and places where they live.

This is really the most important trick – make the action to be remembered as natural and habitual as possible.

Once you have ensured that, the rest is icing on the cake:

  • ensure that people choose reminders that they will actually hear. (That includes reminders which are so irritating that you just have to get out of bed to silence them.)
  • ensure that people can understand what the reminder is all about. Again, you can take advantage of associations people already have. For example, people may choose a snippet from their favorite love song to remind them to take their heart medications.
  • ensure that the reminders are not stigmatizing. It can be hard to admit that one’s memory is going, that one is no longer coping. Having one’s style cramped is even harder.

If you would like personalized advice or want to talk further, please do not hesitate to contact me via email (maria dot wolters at ed dot ac dot uk) or on Twitter (@mariawolters).

I also provide tailored consulting and training packages at ehealth-tech-doctor.com.


November 21, 2015

People Assess How Technology Can Help Them Remember #psynom15

Posted in research at 11:55 pm by mariawolters

Just as writing was thought to be the death of memory back before the Common Era, when Real Poets memorized their work, technology is now deemed to be the death of memory, because people can have information at their fingertips and don’t need to remember it anymore.

But actually, people appear to use this new ability strategically and judiciously, based on their assessment of their own memory (or metamemory, as it’s called in the psychological literature).

In this post, I want to highlight two relevant papers I heard at the Annual Meeting of the Psychonomic Society in Chicago, one about remembering information (retrospective memory), and one about remembering to do something (prospective memory).

Saving some information frees capacity to remember

The retrospective memory study is by Benjamin Storm and Sam Stone from the University of California, Santa Cruz, and it was published in 2014 in Psychological Science.

When we save information in a file on a computer, we’re more likely to forget it. But this forgetting has a function – it frees resources for remembering other information. Storm and Stone asked people to type a set of words into a file, which they then saved or did not save. Next, they were asked to memorize a second set of words, and finally, they were asked to recall the first set and the second set. If they had saved the first set, they were able to study those words again before they had to recall them.

If people had been able to save the first set of words, they were much better at remembering the second set than when they hadn’t been able to save the first set and had to keep both in memory.

Next, Storm and Stone repeated the study with a twist: for half the participants, saving worked every time; for the other half, it was unreliable. The people who couldn’t rely on the first set being saved started to keep it in memory, too – so the effect of saving disappeared.

So what happened was that saving the first set of words for later study helped people use their memory more efficiently.

Whether people set reminders is determined by how they rate their own memory

Another aspect of metamemory is how confident you are in your ability to remember. In two elegant studies, Sam Gilbert of University College London showed that two factors influence whether people will set a reminder:

  1. how complex the task is that they need to remember
  2. their own confidence in their abilities (regardless of task difficulty)

People were asked to remember to do two separate tasks, one simple and one more complicated, while performing a background task (moving numbers across a screen). When participants were able to set reminders (by arranging the numbers to hint at what needed to be done with them), they performed well; when they were unable to do so, performance plummeted, in particular on the complex task.

The second study involved a task that could be adjusted so that it was equally difficult for all participants. In that case, participants who had less confidence in their memory set more reminders than those who were more confident.

Metacognition matters

These studies show that memory is not automatic. People make judgements and assess tradeoffs – they harness technology (and external memory aids) to support them whenever they feel they need the support.

We need to bear this in mind when we design systems that help people remember – if they feel they don’t need these reminder systems, providing one will jar painfully with their own assessment of their abilities. Depending on how they react to such challenges to their self perception, this might lead them to be more despondent and dependent, instead of more independent.


Psychonomics 2015 #psynom15 – What an Experience!

Posted in research at 12:40 am by mariawolters

At the moment, I am at the Annual Conference of the Psychonomic Society. Psychonomics is a conference that encompasses all aspects of psychology, in particular cognition and language. And to be there as a computer scientist / linguist / human factors specialist is hugely inspiring. I keep spotting research that has direct implications for the kind of work I do with older people, designing reminders, creating environments that help people thrive, writing messages that people can understand.

In the next few days, I will post a few impressions from the oral and poster sessions. I livetweeted 1.5 oral sessions, one on statistics and one on autobiographical memory, but haven’t talked about the posters yet.

What is so special about Psychonomics is that it’s not archival, so many people will use it to present more or less fully formed work that is being written up as a paper or is in the process of being published in a journal. Sometimes, it is like a technicolor advance table of contents, with lots of juicy research results to look forward to. I hope to share a few of them with you in the coming weeks.

June 1, 2015

Skype for Video Consultations – A Personal Perspective

Posted in research at 10:00 am by mariawolters

I was motivated to write this short piece by looking through the material for the Remote Consulting unit of the Telehealth and Telemedicine course for the Edinburgh MSc in Global eHealth.

Helen Atherton, an active researcher in email consulting, created a fascinating set of resources on the topic for the students of that course, which I co-organise with Brian McKinstry (Brian provides the wisdom of (sometimes bitter) experience; I implement, and add my two cents from a Human-Computer Interaction point of view).

One of the topics that came up was the use of Skype for remote consultation. Skype is a good alternative to traditional phone consultations because

  • everybody can sign up for free
  • in situations where you need video, it is easy to switch on
  • it can be used by people who do not have a landline or access to a landline phone
  • it can be used anywhere with WiFi access, which means that people do not have to use or pay for call minutes

But from my own experience, there are two important issues here that make me question whether Skype is suitable for video consultations.

1) Is Skype stable?

Not really, especially not if you use the video facility. I am typically online via fast WiFi at work, courtesy of eduroam (yes, University of Edinburgh eduroam works well!), and I never have any trouble uploading or downloading big papers, large data sets, or Apple system updates. But when I’m asked to take part in a Skype meeting, I will never switch on video unless the other party insists, because that is a recipe for disaster.

I haven’t systematically kept track of the number of times a multi-party Skype call failed because one of the participants had switched on video and then worked well once the video had been switched off, but I’d guess this has happened in about half the Skype conferences (with video) that I have been involved in.

2) Is Skype safe?

I am not going to start discussing privacy features and whether conversations can be overheard by third parties here – that’s a whole other topic which is best discussed by somebody with expertise in the area.

What I mean is safety from unsavoury contacts. While my Skype handle is gender neutral (mkwolters), I have my full name associated with it, and my name is searchable, so that collaborators who wish to add me can easily find me on Skype. I also have a portrait photo with my own face, which clearly marks me as female.

This means that every week or so, I get a contact request from a random account pretending to be a man. Half of these use an icon that suggests they are a member of the US Army, looking to talk to somebody while on active duty. The only time I was accosted by an account that pretended to be a woman, the person was recruiting for the webcam version of phone sex, which only became clear after a longer exchange. (I like to see what’s behind those scammers. I’m nosy like that.)

A good friend of mine (male) who has locked down his own Skype profile gets so many contact requests from women that he now refuses to leave his Skype open.

On one level, this is the Skype equivalent of the good old Nigerian scam or phishing email. On another level, I can see how this might make people highly uncomfortable. (It makes me extremely uncomfortable, and I’ve been on the Internet since 1994.)

It wasn’t always like that. Before the recent wave of scammers hit, I was on Skype for years with nary an incident. But the climate has changed, and I regard Skype as fundamentally unsafe.

*** *** ***

So, if I were a health care practitioner, offering telehealth consultations to older patients at home, would I be keen to introduce Skype video consulting?

Short answer: No.

Long answer: Not unless they already have a Skype account, are comfortable with using the service, are experts at fending off unwanted online attention, and have good experiences with one to one video calls.

I would not advise or expect older people to invest in Skype just to be able to access their health care from home – just as I wouldn’t advise them to spice up their social life by chatting to that nice man who has come by their door with an unbeatable offer for triple glazed windows.

May 31, 2015

“Why Should We Ask Users? Steve Jobs Didn’t!”

Posted in research tagged , , , , at 3:50 pm by mariawolters

In other words, if Apple designs beautiful hardware and software without “asking users what they would like”, we don’t need to ask users what they would like, either.

This way of thinking is a fallacy for two reasons.

Reason 1: It’s not about asking users what they would like, it’s about finding out what users need.

If you don’t give users what they actually need, but what you think they need, then in the best case, nobody buys your product; in the worst case, people die. (See my previous blog post on how a new system for ordering medications in a children’s hospital Intensive Care Unit led to more (entirely preventable) deaths.)

Finding out what users need is hard. You can’t do it by letting your imagination run wild; you need to go into the field, look at the context in which your solutions will be used, at how people work right now, and at how your proposed solutions might change the way people work, for better or for worse.

Often, this also involves talking to people, that’s true. But when you talk to people, it’s not so much about what they think should be done, or about what they like or dislike. Rather, likes, dislikes, and suggested solutions are important clues to what users actually need.

Reason 2: You are not Steve Jobs, and neither are you Jony Ive or Tim Cook.1

Apple succeeds because they create tools that make some people’s lives better, and that give some people what they need. The genius of people like Ive and Jobs lies in their ability to discern what needs to be done – and then they work until they’ve got it right.

*** *** ***

1 I am assuming that the probability of the real Jony Ive or Tim Cook reading this post is close to zero.

May 29, 2015

Blogging ICPhS

Posted in research tagged , , , , at 2:18 pm by mariawolters

As those of you who follow me on Twitter or are Facebook friends with me will know, I’ve been part of the local programme committee of the International Congress of Phonetic Sciences (ICPhS) 2015 in Glasgow, and my role was to draft the oral programme, with steadfast support from Glasgow phonetician Rachel Smith.

In the following weeks, I will give you an insight into the way the programme was put together and explain some of the constraints we faced, the tools we used, and the decisions we made.

As ICPhS draws ever closer, I will start to highlight interesting sessions and feature phonetics bloggers and tweeters.

Kicking off, the next post (to be posted in two hours) is a plea for help from fellow Social Media junkies. If you have any comments, or ideas for what you would like to see featured in future posts, please leave a comment or tweet me (@mariawolters).

May 17, 2015

The Craft of Usable eHealth

Posted in research tagged , , , at 6:19 pm by mariawolters

On the surface, usability is simple. “If the user can’t use it, then it doesn’t work at all”, as Susan Dray likes to say. But what does that mean in practice?

In health care, you have a large number of patients; a very small, finite number of health care practitioners; the cost of looking after these patients and providing them with the medications and therapy they need; and an empty purse.

And the demand for care is growing ever stronger. Thanks to the wonders of modern medicine, prevention, sanitation, and vaccinations, more people live longer, more people survive illnesses that would have otherwise killed them, and more people survive lifestyle choices that would have killed or crippled them fifty years ago.

eHealth promises to help. When the demand for skilled labour far outstrips its availability, technology can close the gap.

But eHealth technology will only work if people use it, and people will only use it if it works for them.

What does it mean for an eHealth system to be usable? In this post, I want to look at a somewhat iconoclastic discussion of the term usability by Gilbert Cockton, because it questions what I believe to be a dangerous myth in eHealth advocacy: the myth that people are the biggest barrier to successful implementation of telehealth.

They are not a barrier – they are the key.

Cockton summarises the standard view of usability thus:

  1. “Usability is an inherent measurable property of all interactive digital technologies

  2. Human-Computer Interaction researchers and Interaction Design professionals have developed evaluation methods that determine whether or not an interactive system or device is usable.

  3. Where a system or device is usable, usability evaluation methods also determine the extent of its usability, through the use of robust, objective and reliable metrics

  4. Evaluation methods and metrics are thoroughly documented in the Human-Computer Interaction research and practitioner literature. People wishing to develop expertise in usability measurement and evaluation can read about these methods, learn how to apply them, and become proficient in determining whether or not an interactive system or device is usable, and if so, to what extent.”

Vendors of eHealth systems who subscribe to this definition of usability will therefore (ideally) do the following:

A. Define a set of metrics that characterises the usability of their system

B. Conduct studies, using appropriate methods, with the people who will use the system, in order to establish the usability of the system in terms of the specified metrics

The problem is that this is only the beginning. eHealth systems are used by people in specific contexts. Many of these contexts have features that cannot be foreseen by the original developers. People will adapt their use of those systems to the context and their own needs, a process that is known as appropriation in Human Computer Interaction.

Take for example a videoconferencing system that links people with their health care providers from the comfort of their own homes. The system has passed all objective and subjective metrics with flying colours, is easy to use, and has a mobile version, but requires a fast broadband connection.

User Jane McHipster lives on the waterfront in a loft with high ceilings. She has excellent broadband, so her GP can always see her clearly, but the sound is another matter. When the conversation turns to Jane’s mental health, the GP can barely hear her properly. But Jane is too ill to leave her house and come to the practice.

User June McHuckster, on the other hand, lives on a remote croft. Her Internet access comes through her smartphone contract, with the only provider who has good coverage of her home village. Her GPs used to call her regularly, but switched to the video system so they could see her, too. The picture quality is bad, and conversations often stop and start. June is so frustrated with the system that she will often tell the GP she’s fine just to cut the conversation short. This also leaves more of June’s limited broadband capacity for Skyping with her family, who live thousands of miles away.

Jim McSweeney is June’s next door neighbour. He also has family a thousand miles away, and the same smartphone contract. He has the same issues with conversations stopping and restarting, but for him, they don’t matter. He enjoys the banter with his GP when the connection breaks down yet again, loves being able to show instead of having to tell, and thanks the system for saving him from many a long and boring trip to the GP surgery.

*** *** ***

After thorough discussion of the literature on usability and usability evaluation, Cockton concludes in Section 15.5.3 that

  1. “There are fundamental differences on the nature of usability, i.e., it is either an inherent property of interactive systems, or an emergent property of usage. There is no single definitive answer to what usability ‘is’. […]

  2. There are no universal measures of usability, and no fixed thresholds above or below which all interactive systems are or are not usable. […]

  3. Usability work is too complex and project-specific to admit generalisable methods. What are called ‘methods’ are more realistically ‘approaches’ that provide loose sets of resources that need to be adapted and configured on a project by project basis.”

Jane, June, and Jim have shown how usability emerges from the context in which the system is being used. In Jane’s case, the system works fine, but there are unexpected difficulties due to her living space. In June’s case, the system is hard to use, and it’s not worth it for her. In Jim’s case, the system is his salvation.

But if there is no one clear usability metric, then what are practitioners to do?

The first step is to genuinely listen to people’s concerns. Next steps and solutions will again vary by context.

For example, Jane could order a headset online, which would make her much easier to understand. June could shut off the video component of the consultation software, which consumes bandwidth and leads to most crashes, and only switch it back on again if the GP really needs to see her.

No rarely means never – in most cases, it means not specifically this, not right now, not right here. It is up to us to decipher it, and to design the interaction between human and eHealth system so we can get from no to yes.

The Promise and Perils of Computerised Prescription Systems

Posted in research at 2:00 pm by mariawolters


Prescribing medications to sick people is a difficult task. The person prescribing needs to choose the right medication, choose the right dose, choose the right timing for delivering those doses, and check whether the medication will interact with any other medications that the patient might already be on.

Clearly, computerised prescription order entry (CPOE) systems have vast potential benefits here. Computers are much better than humans at storing masses of information. In principle, computer systems allow much faster and better access to all kinds of records, which means no more rustling through paper records distributed across several locations.

What’s more, CPOE also allows better stock management. Once medication has been ordered, the system knows exactly how much is needed, how much is still in stock, and can create valuable data sets that can be used to optimise stock management and anticipate demands.

CPOE also generates a data stream that can make it easy to audit prescription patterns and compare those patterns to best practice and evidence-based guidelines.

In short, CPOE is a win-win proposition, and if there is a module that fits with an existing medical record system, there’s no reason why it should not be implemented quickly and efficiently.

That’s what one children’s hospital thought. They were linked to a University Hospital System and treated many children who required urgent access to top specialist medical care. So they rolled out CPOE.

And then, more children died.

In the words of Han and coauthors:

“Univariate analysis revealed that mortality rate significantly increased from 2.80% (39 of 1394) before CPOE implementation to 6.57% (36 of 548) after CPOE implementation. Multivariate analysis revealed that CPOE remained independently associated with increased odds of mortality (odds ratio: 3.28; 95% confidence interval: 1.94–5.55) after adjustment for other mortality covariables.” (from the abstract)
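
To unpack those numbers: the mortality rates follow directly from the raw counts, while the 3.28 odds ratio is the output of a multivariate model and cannot be reproduced from the four counts alone. Here is a minimal Python sketch of the unadjusted calculation, using the figures from the quoted abstract:

```python
# Raw 2x2 table from Han et al. (2005), as quoted above.
deaths_before, total_before = 39, 1394   # pre-CPOE
deaths_after, total_after = 36, 548      # post-CPOE

odds_before = deaths_before / (total_before - deaths_before)
odds_after = deaths_after / (total_after - deaths_after)

print(f"mortality before CPOE: {deaths_before / total_before:.2%}")  # 2.80%
print(f"mortality after CPOE:  {deaths_after / total_after:.2%}")    # 6.57%
print(f"unadjusted odds ratio: {odds_after / odds_before:.2f}")      # ~2.44

# The paper's odds ratio of 3.28 (95% CI 1.94-5.55) is larger than the
# unadjusted 2.44 because it is adjusted for other mortality covariables
# in a multivariate model.
```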

The authors looked at the data first. They surveyed all children who were transferred to their hospital’s Intensive Care Unit from other hospitals within a time span of 18 months, 12 before and 6 after CPOE introduction. Then, they looked for the reasons.

These children were a special case. They needed the correct treatment, fast. Over the years, the hospital ICU team had evolved procedures that enabled them to be as fast as possible. They were as finely tuned as the team changing the wheels on a Formula 1 racing car.

The new system destroyed these processes, because it was slow. Before, doctors would pass quick written notes to nurses, who were always on the lookout for new instructions. Now, it took up to ten clicks to enter a medication order. Low bandwidth then added another delay until the order was transmitted to the pharmacists. Before, everybody was free to help tend to the patient, if needed. Now, one member of staff had to be at the computer, tending to the CPOE system. Before, staff could just grab what they needed to stabilise the patient. Now, everything went through central ordering.

With hindsight, it is easy to criticise the hospital team for what seems to be a rushed introduction of a system that was not ready for prime time. But if you look at the hype surrounding much of telehealth and telemedicine (“Act now! We know it works! You OWE it to your PATIENTS! (And to the taxpayers …)“), it is easy to see how this might have happened.

You will often hear telemedicine and eHealth evangelists say that the world could be so much better and brighter if it weren’t for those pesky practitioners who are clinging on to the old way of doing things.

In this case, the old way of getting medication to very sick children on arrival in the hospital ICU was actually working very well. Speed, and having as many hands as possible on deck, were essential.

The new way, with its ten clicks to achieve a single order, was more suitable for a situation where prescriptions were not urgent, where safety was paramount, and where spare personnel were available to focus on data entry.

In short, the new way was not usable.

Usability is far more than “do people like it?”. At the very minimum, per the ISO 9241 definition, a usable system has to do what it is designed to do (effectiveness), and it has to do so with appropriate speed (efficiency). If the users like it, that’s nice (user satisfaction), but it’s far from the whole story.

The key point where the CPOE system that Han and colleagues describe fell down was efficiency, which made it unsuitable for the task.
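
To make those three components concrete, here is a toy sketch – my own framing, not something taken from Han and colleagues’ paper or from the ISO 9241 standard itself – that records how the CPOE system fares on each dimension:

```python
from dataclasses import dataclass

# A toy structure for the three ISO 9241 components of usability.
# The assessments below are my own summary of the Han et al. case,
# not measurements reported in the paper.
@dataclass
class UsabilityAssessment:
    effectiveness: str  # does the system do what it is designed to do?
    efficiency: str     # does it do so with appropriate speed?
    satisfaction: str   # do the users like it?

cpoe_in_icu = UsabilityAssessment(
    effectiveness="orders do reach the pharmacy, so the task completes",
    efficiency="up to ten clicks per order, plus transmission delays "
               "over low bandwidth - far too slow for urgent prescribing",
    satisfaction="one member of staff tied to the computer instead of "
                 "tending to the patient",
)

# A system can pass on effectiveness and still fail overall:
print(cpoe_in_icu.efficiency)
```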

In theory, CPOE is a great idea, but it has to be usable in practice. Otherwise, it just won’t work.

Han, Y., et al. (2005). Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics, 116(6), 1506–1512. http://doi.org/10.1542/peds.2005-1287
