November 27, 2015

Designing Reminders that Older People Can Remember #BAAConf

Posted in research at 12:22 pm by mariawolters

Reminders only work if you can hear them – as I found out to my cost this morning. I had been looking forward to a scrumptious Yorkshire breakfast, served from 7am to 10am, only to wake up at 10.17am.

Why did I sleep through my trusty phone alarm? Because my phone hadn’t been charging; I had forgotten to switch on the socket into which I had plugged it. (In the UK, we need to switch on sockets before they will provide electricity).

Now imagine that you can no longer hear the alarms you set, not because you failed to charge your phone, but because your hearing is going. What do you do?

This is what my talk at this year’s Conference of the British Academy of Audiology is all about. (The slides are on Slideshare, as always.)

I discuss a few strategies that I have discovered when working with older people as part of my research into human-computer interaction.

All of these ideas are inspired by what older people have told me and my colleagues, or by what we have seen them do.  This is perhaps the most important point of my talk. People are experts in what works for them. Very often, all it takes is a bit of active listening to uncover a solution that builds on their existing habits, their routines, and the layout of the spaces and places where they live.

This is really the most important trick – make the action to be remembered as natural and habitual as possible.

Once you have ensured that, the rest is icing on the cake:

  • ensure that people choose reminders that they will actually hear. (That includes reminders which are so irritating that you just have to get out of bed to silence them.)
  • ensure that people can understand what the reminder is all about. Again, you can take advantage of associations people already have. For example, people may choose a snippet from their favourite love song to remind them to take their heart medications.
  • ensure that the reminders are not stigmatizing. It can be hard to admit that one’s memory is going, that one is no longer coping. Having one’s style cramped is even harder.

If you would like personalized advice or to talk further, please do not hesitate to contact me via email (maria dot wolters at ed dot ac dot uk) or on Twitter (@mariawolters).

I also provide tailored consulting and training packages at ehealth-tech-doctor.com.

 


May 17, 2015

The Craft of Usable eHealth

Posted in research at 6:19 pm by mariawolters

On the surface, usability is simple. “If the user can’t use it, then it doesn’t work at all”, as Susan Dray likes to say. But what does that mean in practice?

In health care, you have a large number of patients, a very small, finite number of health care practitioners, the cost of looking after these patients and providing them with the medications and therapy they need, and an empty purse.

And the demand for care is growing ever stronger. Thanks to the wonders of modern medicine, prevention, sanitation, and vaccinations, more people live longer, more people survive illnesses that would have otherwise killed them, and more people survive lifestyle choices that would have killed or crippled them fifty years ago.

eHealth promises to help. When the demand for skilled labour far outstrips its availability, technology can close the gap.

But eHealth technology will only work if people use it, and people will only use it if it works for them.

What does it mean for an eHealth system to be usable? In this post, I want to look at a somewhat iconoclastic discussion of the term usability by Gilbert Cockton, because it questions what I believe to be a dangerous myth in eHealth advocacy, the myth that people are the biggest barrier to successful implementation of telehealth.

They are not a barrier – they are the key.

Cockton summarises the standard view of usability thus:

  1. “Usability is an inherent measurable property of all interactive digital technologies

  2. Human-Computer Interaction researchers and Interaction Design professionals have developed evaluation methods that determine whether or not an interactive system or device is usable.

  3. Where a system or device is usable, usability evaluation methods also determine the extent of its usability, through the use of robust, objective and reliable metrics

  4. Evaluation methods and metrics are thoroughly documented in the Human-Computer Interaction research and practitioner literature. People wishing to develop expertise in usability measurement and evaluation can read about these methods, learn how to apply them, and become proficient in determining whether or not an interactive system or device is usable, and if so, to what extent.”

Vendors of eHealth systems who subscribe to this definition of usability will therefore (ideally) do the following:

A. Define a set of metrics that characterises the usability of their system

B. Conduct studies with all people who will use the system using appropriate methods in order to establish the usability of their system in terms of the specified metrics

The problem is that this is only the beginning. eHealth systems are used by people in specific contexts. Many of these contexts have features that cannot be foreseen by the original developers. People will adapt their use of those systems to the context and their own needs, a process that is known as appropriation in Human Computer Interaction.

Take for example a videoconferencing system that links people with their health care providers from the comfort of their own homes. The system has passed all objective and subjective metrics with flying colours, is easy to use, and has a mobile version, but requires a fast broadband connection.

User Jane McHipster lives on the waterfront in a loft with high ceilings. She has excellent broadband, so her GP can always see her clearly, but the sound is another matter. When the conversation turns to Jane’s mental health, the GP can barely hear her properly. But Jane is too ill to leave her house and come to the practice.

User June McHuckster, on the other hand, lives on a remote croft. Her Internet access comes through her smartphone contract, with the only provider who has good coverage of her home village. Her GPs used to call her regularly, but switched to the video system so they could see her, too. The picture quality is bad, and conversations often stop and start. June is so frustrated with the system that she will often tell the GP she’s fine just to cut the conversation short. This also leaves more of June’s limited broadband capacity for Skyping with her family, who live thousands of miles away.

Jim McSweeney is June’s next door neighbour. He also has family a thousand miles away, and the same smartphone contract. He has the same issues with conversations stopping and restarting, but for him, they don’t matter. He enjoys the banter with his GP when the connection breaks down yet again, loves being able to show instead of having to tell, and thanks the system for saving him from many a long and boring trip to the GP surgery.

*** *** ***

After thorough discussion of the literature on usability and usability evaluation, Cockton concludes in Section 15.5.3 that

  1. “There are fundamental differences on the nature of usability, i.e., it is either an inherent property of interactive systems, or an emergent property of usage. There is no single definitive answer to what usability ‘is’. […]

  2. There are no universal measures of usability, and no fixed thresholds above or below which all interactive systems are or are not usable. […]

  3. Usability work is too complex and project-specific to admit generalisable methods. What are called ‘methods’ are more realistically ‘approaches’ that provide loose sets of resources that need to be adapted and configured on a project by project basis.”

Jane, June, and Jim have shown how usability emerges from the context in which the system is being used. In Jane’s case, the system works fine, but there are unexpected difficulties due to her living space. In June’s case, the system is hard to use, and it’s not worth it for her. In Jim’s case, the system is his salvation.

But if there is no one clear usability metric, then what are practitioners to do?

The first step is to genuinely listen to people’s concerns. Next steps and solutions will again vary by context.

For example, Jane could order a headset online, which would make her much easier to understand. June could shut off the video component of the consultation software, which consumes bandwidth and leads to most crashes, and only switch it back on again if the GP really needs to see her.

No rarely means never – in most cases, it means not specifically this, not right now, not right here. It is up to us to decipher it, and to design the interaction between human and eHealth system so we can get from no to yes.

The Promise and Perils of Computerised Prescription Systems

Posted in research at 2:00 pm by mariawolters


Prescribing medications to sick people is a difficult task. The person prescribing needs to choose the right medication, choose the right dose, choose the right timing for delivering those doses, and check whether the medication will interact with any other medications that the patient might already be on.

Clearly, computerised prescription order entry (CPOE) systems have vast potential benefits here. Computers are much better than humans at storing masses of information. In principle, computer systems allow much faster and better access to all kinds of records, which means no more rustling through paper records distributed across several locations.

What’s more, CPOE also allows better stock management. Once medication has been ordered, the system knows exactly how much is needed, how much is still in stock, and can create valuable data sets that can be used to optimise stock management and anticipate demands.

CPOE also generates a data stream that can make it easy to audit prescription patterns and compare those patterns to best practice and evidence-based guidelines.

In short, CPOE is a win-win proposition, and if there is a module that fits with an existing medical record system, there’s no reason why it should not be implemented quickly and efficiently.

That’s what one children’s hospital thought. They were linked to a University Hospital System and treated many children who required urgent access to top specialist medical care. So they rolled out CPOE.

And then, more children died.

In the words of Han and coauthors:

“Univariate analysis revealed that mortality rate significantly increased from 2.80% (39 of 1394) before CPOE implementation to 6.57% (36 of 548) after CPOE implementation. Multivariate analysis revealed that CPOE remained independently associated with increased odds of mortality (odds ratio: 3.28; 95% confidence interval: 1.94–5.55) after adjustment for other mortality covariables.” (from the abstract)

The authors looked at the data first. They surveyed all children who were transferred to their hospital’s Intensive Care Unit from other hospitals within a time span of 18 months, 12 before and 6 after CPOE introduction. Then, they looked for the reasons.
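As a sanity check, the raw counts quoted from the abstract are enough to reproduce the unadjusted odds ratio. The sketch below (plain Python, no stats library) does just that; note that it yields roughly 2.44, while the 3.28 reported in the abstract is the multivariate estimate, i.e. after adjustment for other mortality covariables.

```python
# Unadjusted odds ratio from the counts reported in Han et al. (2005):
# 39 deaths among 1394 children before CPOE, 36 among 548 after.
deaths_before, n_before = 39, 1394
deaths_after, n_after = 36, 548

odds_before = deaths_before / (n_before - deaths_before)  # 39 / 1355
odds_after = deaths_after / (n_after - deaths_after)      # 36 / 512

odds_ratio = odds_after / odds_before

print(f"Mortality before CPOE: {100 * deaths_before / n_before:.2f}%")  # 2.80%
print(f"Mortality after CPOE:  {100 * deaths_after / n_after:.2f}%")    # 6.57%
print(f"Unadjusted odds ratio: {odds_ratio:.2f}")                       # 2.44
```

The gap between the unadjusted 2.44 and the adjusted 3.28 is exactly why the authors ran a multivariate analysis: the children arriving after CPOE introduction were not guaranteed to be comparably ill to those arriving before.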

These children were a special case. They needed the correct treatment, fast. Over the years, the hospital ICU team had evolved procedures that enabled them to be as fast as possible. They were as finely tuned as the team changing the wheels on a Formula 1 racing car.

The new system destroyed these processes, because it was slow. Before, doctors would pass quick written notes to nurses, who were always on the lookout for new instructions. Now, it took up to ten clicks to enter a medication order. Low bandwidth then added another delay until the order was transmitted to the pharmacists. Before, everybody was free to help tend to the patient, if needed. Now, one member of staff had to be at the computer, tending to the CPOE system. Before, staff could just grab what they needed to stabilise the patient. Now, everything went through central ordering.

With hindsight, it is easy to criticise the hospital team for what seems to be a rushed introduction of a system that was not ready for prime time. But if you look at the hype surrounding much of telehealth and telemedicine (“Act now! We know it works! You OWE it to your PATIENTS! (And to the taxpayers …)”), it is easy to see how this might have happened.

You will often hear telemedicine and eHealth evangelists say that the world could be so much better and brighter if it weren’t for those pesky practitioners who are clinging on to the old way of doing things.

In this case, the old way of getting medication to very sick children on arrival in the hospital ICU was actually working very well. Speed, and having as many hands as possible on deck, were essential.

The new way, with its ten clicks to achieve a single order, was more suitable for a situation where prescriptions were not urgent, where safety was paramount, and where there was spare personnel to focus on data entry.

In short, the new way was not usable.

Usability is far more than “do people like it?”. At the very minimum, per the ISO 9241 definition, a usable system has to do what it is designed to do (effectiveness), and it has to do so with an appropriate speed (efficiency). If the users like it, that’s nice (user satisfaction), but it’s far from the whole story.
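To make the three facets concrete, here is a minimal sketch of how a usability test might report them. The Session record and summarise helper are hypothetical, invented for illustration; they are not part of ISO 9241 itself.

```python
# Illustrative only: the three ISO 9241 facets computed from
# hypothetical usability-test sessions.
from dataclasses import dataclass

@dataclass
class Session:
    completed: bool    # did the user finish the task? (effectiveness)
    seconds: float     # time on task (efficiency)
    satisfaction: int  # post-task rating, e.g. on a 1-5 scale

def summarise(sessions):
    done = [s for s in sessions if s.completed]
    return {
        # share of users who completed the task
        "effectiveness": len(done) / len(sessions),
        # mean time on task, over completed tasks only
        "efficiency": sum(s.seconds for s in done) / len(done),
        # mean rating across all users
        "satisfaction": sum(s.satisfaction for s in sessions) / len(sessions),
    }

sessions = [Session(True, 40, 4), Session(True, 65, 3), Session(False, 120, 2)]
print(summarise(sessions))  # two of three completed, so effectiveness is 2/3
```

Even this toy version shows why satisfaction alone is not enough: the third user could rate the system highly and still have failed the task.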

The key point where the CPOE system that Han and colleagues describe fell down was efficiency, which made it unsuitable for the task.

In theory, CPOE is a great idea, but it has to be usable in practice. Otherwise, it just won’t work.

Han, Y., et al. (2005). Unexpected Increased Mortality After Implementation of a Commercially Sold Computerized Physician Order Entry System. Pediatrics, 116(6), 1506-1512. DOI: 10.1542/peds.2005-1287