June 1, 2015

Skype for Video Consultations – A Personal Perspective

Posted in research at 10:00 am by mariawolters

I was motivated to write this short piece by looking through the material for the Remote Consulting unit of the Telehealth and Telemedicine course for the Edinburgh MSc in Global eHealth.

Helen Atherton, an active researcher in email consultations, created a fascinating set of resources on the topic for the students of that course, which I co-organise with Brian McKinstry (Brian provides the wisdom of (sometimes bitter) experience; I implement and add my two cents from a Human-Computer Interaction point of view).

One of the topics that came up was the use of Skype for remote consultation. Skype is a good alternative to traditional phone consultations because

  • everybody can sign up for free
  • in situations where you need video, it is easy to switch on
  • it can be used by people who do not have a landline or access to a landline phone
  • it can be used anywhere with WiFi access, which means that people do not have to use or pay for call minutes

But from my own experience, there are two important issues here that make me question whether Skype is suitable for video consultations.

1) Is Skype stable?

Not really, especially not if you use the video facility. I am typically online via fast WiFi at work courtesy of eduroam (yes, University of Edinburgh eduroam works well!), and I never have any trouble uploading or downloading big papers, large data sets, or Apple system updates. But when I’m asked to take part in a Skype meeting, I will never switch on video unless the other party insists, because that is a recipe for disaster.

I haven’t systematically kept track of how many multi-party Skype calls failed because one participant had switched on video, only to work well once the video was switched off, but I’d guess this has happened in about half the Skype conferences (with video) that I have been involved in.

2) Is Skype safe?

I am not going to start discussing privacy features and whether conversations can be overheard by third parties here – that’s a whole other topic which is best discussed by somebody with expertise in the area.

What I mean is safety from unsavoury contacts. While my Skype handle is gender-neutral (mkwolters), I have my full name associated with it, and my name is searchable, so that collaborators who wish to add me can easily find me on Skype. I also have a portrait photo with my own face, which clearly marks me as a woman.

This means that every week or so, I get a contact request from a random account pretending to be a man. Half of these use an icon suggesting that they are a member of the US Army, looking to talk to somebody while on active duty. The only time I was accosted by an account that pretended to be a woman, the person was recruiting for the webcam version of phone sex, which only became clear after a longer exchange. (I like to see what’s behind those scammers. I’m nosy like that.)

A good friend of mine (male) who has locked down his own Skype profile gets so many contact requests from women that he now refuses to leave his Skype open.

On one level, this is the Skype equivalent of the good old Nigerian scam or phishing email. On another level, I can see how this might make people highly uncomfortable. (It makes me extremely uncomfortable, and I’ve been on the Internet since 1994.)

It wasn’t always like that. Before the recent wave of scammers hit, I was on Skype for years with nary an incident. But the climate has changed, and I regard Skype as fundamentally unsafe.

*** *** ***

So, if I were a health care practitioner, offering telehealth consultations to older patients at home, would I be keen to introduce Skype video consulting?

Short answer: No.

Long answer: Not unless they already have a Skype account, are comfortable with using the service, are experts at fending off unwanted online attention, and have good experiences with one-to-one video calls.

I would not advise or expect older people to invest in Skype just to be able to access their health care from home – just as I wouldn’t advise them to spice up their social life by chatting to that nice man who has come by their door with an unbeatable offer for triple-glazed windows.

May 31, 2015

“Why Should We Ask Users? Steve Jobs Didn’t!”

Posted in research at 3:50 pm by mariawolters

In other words, if Apple designs beautiful hardware and software without “asking users what they would like”, we don’t need to ask users what they would like, either.

This way of thinking is a fallacy for two reasons.

Reason 1: It’s not about asking users what they would like, it’s about finding out what users need.

If you don’t give users what they actually need, but what you think they need, then in the best case nobody buys your product; in the worst case, people die. (See my previous blog post on how a new system for ordering medications in a children’s hospital intensive care unit led to more (entirely preventable) deaths.)

Finding out what users need is hard. You can’t do it by letting your imagination run wild; you need to go into the field, look at the context in which your solutions will be used, how people work right now, and how your proposed solutions might change the way people work for the better or for the worse.

Often, this also involves talking to people; that’s true. But when you talk to people, it’s not so much about what they think should be done, or about what they like or dislike. Rather, likes, dislikes, and suggested solutions are important clues to what users actually need.

Reason 2: You are not Steve Jobs, and neither are you Jony Ive or Tim Cook.1

Apple succeeds because they create tools that make some people’s lives better, and that give some people what they need. The genius of people like Ive and Jobs lies in their ability to discern what needs to be done – and then they work until they’ve got it right.

*** *** ***

1 I am assuming that the probability of the real Jony Ive or Tim Cook reading this post is close to zero.

May 29, 2015

Blogging ICPhS

Posted in research at 2:18 pm by mariawolters

As those of you who follow me on Twitter or are Facebook friends with me will know, I’ve been part of the local programme committee of the International Congress of Phonetic Sciences 2015 in Glasgow, and my role was to draft the oral programme, with steadfast support from Glasgow phonetician Rachel Smith.

In the following weeks, I will give you an insight into the way the programme was put together and explain some of the constraints we faced, the tools we used, and the decisions we made.

As ICPhS draws ever closer, I will start to highlight interesting sessions and feature phonetics bloggers and tweeters.

To kick things off, the next post (to be posted in two hours) is a plea for help from fellow social media junkies. If you have any comments, or ideas for what you would like to see featured in future posts, please leave a comment or tweet me (@mariawolters).

Active on Social Media? ICPhS Needs You!

Posted in icphs at 12:00 am by mariawolters

I swear – the first person to develop instantaneous human cloning will be a frustrated attendee of the International Congress of Phonetic Sciences (ICPhS).

ICPhS is the biggest gathering in phonetics. Every four years, phoneticians and speech scientists from all over the world (except Antarctica) meet for five days of phonetics, phonetics, and yet more phonetics.

The programme is usually packed. This year, we will have fifteen time slots for oral presentations, with up to eight parallel sessions. Around 380 papers will be presented orally, and the same number again as posters.

This year, we have a new feature, organised by Bert Remijsen and Pavel Iosad – ten discussant sessions, where eminent phoneticians pick four particularly interesting papers and discuss them in a thematic session. For reasons I will explain in a later post, these sessions are organised in two blocks of five parallel sessions.

All of this is a surefire recipe for many, many frustrated phoneticians. One way of mitigating at least some of the frustration is social media.

I know from ICPhS 2011 in Hong Kong that many people are already prepared to tweet the sessions they attend, but I wonder what we could do if we were a bit more organised this time around.

Specifically, I am wondering whether people would be happy to commit in advance to reporting specific sessions on social media. This could be through live tweets, a blog post, a LinkedIn entry, a Facebook summary, a MySpace song … you get the idea.

What do you think? Could you help?

May 17, 2015

The Craft of Usable eHealth

Posted in research at 6:19 pm by mariawolters

On the surface, usability is simple. “If the user can’t use it, then it doesn’t work at all”, as Susan Dray likes to say. But what does that mean in practice?

In health care, you have a large number of patients, a very small, finite number of health care practitioners, the cost of looking after these patients and providing them with the medications and therapy they need, and an empty purse.

And the demand for care is growing ever stronger. Thanks to the wonders of modern medicine, prevention, sanitation, and vaccinations, more people live longer, more people survive illnesses that would have otherwise killed them, and more people survive lifestyle choices that would have killed or crippled them fifty years ago.

eHealth promises to help. When the demand for skilled labour far outstrips its availability, technology can close the gap.

But eHealth technology will only work if people use it, and people will only use it if it works for them.

What does it mean for an eHealth system to be usable? In this post, I want to look at a somewhat iconoclastic discussion of the term usability by Gilbert Cockton, because it questions what I believe to be a dangerous myth in eHealth advocacy, the myth that people are the biggest barrier to successful implementation of telehealth.

They are not a barrier – they are the key.

Cockton summarises the standard view of usability thus:

  1. “Usability is an inherent measurable property of all interactive digital technologies

  2. Human-Computer Interaction researchers and Interaction Design professionals have developed evaluation methods that determine whether or not an interactive system or device is usable.

  3. Where a system or device is usable, usability evaluation methods also determine the extent of its usability, through the use of robust, objective and reliable metrics

  4. Evaluation methods and metrics are thoroughly documented in the Human-Computer Interaction research and practitioner literature. People wishing to develop expertise in usability measurement and evaluation can read about these methods, learn how to apply them, and become proficient in determining whether or not an interactive system or device is usable, and if so, to what extent.”

Vendors of eHealth systems who subscribe to this definition of usability will therefore (ideally) do the following:

A. Define a set of metrics that characterises the usability of their system

B. Conduct studies, using appropriate methods, with all the people who will use the system, in order to establish its usability in terms of the specified metrics

The problem is that this is only the beginning. eHealth systems are used by people in specific contexts. Many of these contexts have features that cannot be foreseen by the original developers. People will adapt their use of those systems to the context and their own needs, a process that is known as appropriation in Human-Computer Interaction.

Take for example a videoconferencing system that links people with their health care providers from the comfort of their own homes. The system has passed all objective and subjective metrics with flying colours, is easy to use, and has a mobile version, but requires a fast broadband connection.

User Jane McHipster lives on the waterfront in a loft with high ceilings. She has excellent broadband, so her GP can always see her clearly, but the sound is another matter. When the conversation turns to Jane’s mental health, the GP can barely hear her. But Jane is too ill to leave her house and come to the practice.

User June McHuckster, on the other hand, lives on a remote croft. Her Internet access comes through her smartphone contract, with the only provider who has good coverage of her home village. Her GPs used to call her regularly, but switched to the video system so they could see her, too. The picture quality is bad, and conversations often stop and start. June is so frustrated with the system that she will often tell the GP she’s fine just to cut the conversation short. This also leaves more of June’s limited broadband capacity for Skyping with her family, who live thousands of miles away.

Jim McSweeney is June’s next door neighbour. He also has family thousands of miles away, and the same smartphone contract. He has the same issues with conversations stopping and restarting, but for him, they don’t matter. He enjoys the banter with his GP when the connection breaks down yet again, loves being able to show instead of having to tell, and thanks the system for saving him from many a long and boring trip to the GP surgery.

*** *** ***

After a thorough discussion of the literature on usability and usability evaluation, Cockton concludes in Section 15.5.3 that

  1. “There are fundamental differences on the nature of usability, i.e., it is either an inherent property of interactive systems, or an emergent property of usage. There is no single definitive answer to what usability ‘is’. […]

  2. There are no universal measures of usability, and no fixed thresholds above or below which all interactive systems are or are not usable. […]

  3. Usability work is too complex and project-specific to admit generalisable methods. What are called ‘methods’ are more realistically ‘approaches’ that provide loose sets of resources that need to be adapted and configured on a project by project basis.”

Jane, June, and Jim have shown how usability emerges from the context in which the system is being used. In Jane’s case, the system works fine, but there are unexpected difficulties due to her living space. In June’s case, the system is hard to use, and it’s not worth it for her. In Jim’s case, the system is his salvation.

But if there is no one clear usability metric, then what are practitioners to do?

The first step is to genuinely listen to people’s concerns. Next steps and solutions will again vary by context.

For example, Jane could order a headset online, which would make her much easier to understand. June could shut off the video component of the consultation software, which consumes bandwidth and causes most of the crashes, and only switch it back on again if the GP really needs to see her.

No rarely means never – in most cases, it means not specifically this, not right now, not right here. It is up to us to decipher it, and to design the interaction between human and eHealth system so we can get from no to yes.

The Promise and Perils of Computerised Prescription Systems

Posted in research at 2:00 pm by mariawolters

Prescribing medications to sick people is a difficult task. The person prescribing needs to choose the right medication, choose the right dose, choose the right timing for delivering those doses, and check whether the medication will interact with any other medications that the patient might already be on.

Clearly, computerised prescription order entry (CPOE) systems have vast potential benefits here. Computers are much better than humans at storing masses of information. In principle, computer systems allow much faster and better access to all kinds of records, which means no more rustling through paper records distributed across several locations.
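
To make those checks a little more concrete, here is a toy sketch of the kind of validation a computerised order entry system might run before accepting an order. It is entirely hypothetical – the drug data, dose limits, and function names are made up for illustration and have nothing to do with the system discussed below.

    # Toy sketch of the checks a CPOE system might run on a medication order.
    # Hypothetical formulary data for illustration only.

    from dataclasses import dataclass

    # Made-up formulary: drug -> (maximum daily dose in mg, known interactions)
    FORMULARY = {
        "warfarin": (10, {"aspirin"}),
        "aspirin": (4000, {"warfarin"}),
        "amoxicillin": (3000, set()),
    }

    @dataclass
    class Order:
        drug: str
        daily_dose_mg: float

    def check_order(order, current_drugs):
        """Return a list of warnings; an empty list means the order passes."""
        max_dose, interactions = FORMULARY[order.drug]
        warnings = []
        if order.daily_dose_mg > max_dose:
            warnings.append(f"{order.drug}: {order.daily_dose_mg} mg/day exceeds {max_dose} mg/day")
        for other in current_drugs & interactions:
            warnings.append(f"{order.drug} interacts with {other}")
        return warnings

    # A too-high warfarin dose for a patient already on aspirin triggers both checks.
    print(check_order(Order("warfarin", 15), {"aspirin"}))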

What’s more, CPOE also allows better stock management. Once medication has been ordered, the system knows exactly how much is needed and how much is still in stock, and it can create valuable data sets that can be used to optimise stock management and anticipate demand.

CPOE also generates a data stream that can make it easy to audit prescription patterns and compare those patterns to best practice and evidence-based guidelines.

In short, CPOE is a win-win proposition, and if there is a module that fits with an existing medical record system, there’s no reason why it should not be implemented quickly and efficiently.

That’s what one children’s hospital thought. They were linked to a University Hospital System and treated many children who required urgent access to top specialist medical care. So they rolled out CPOE.

And then, more children died.

In the words of Han and coauthors:

“Univariate analysis revealed that mortality rate significantly increased from 2.80% (39 of 1394) before CPOE implementation to 6.57% (36 of 548) after CPOE implementation. Multivariate analysis revealed that CPOE remained independently associated with increased odds of mortality (odds ratio: 3.28; 95% confidence interval: 1.94–5.55) after adjustment for other mortality covariables.” (from the abstract)
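
As a quick back-of-the-envelope check (mine, not the paper’s analysis), the raw counts in the quote already tell the story: the unadjusted odds ratio works out to roughly 2.4, and the 3.28 reported by Han and colleagues is the estimate after adjusting for the other mortality covariables.

    # Back-of-the-envelope check of the numbers quoted above.
    # The unadjusted odds ratio comes out around 2.4; the paper's 3.28 is
    # the covariate-adjusted estimate from the multivariate analysis.

    deaths_before, total_before = 39, 1394   # before CPOE implementation
    deaths_after, total_after = 36, 548      # after CPOE implementation

    odds_before = deaths_before / (total_before - deaths_before)
    odds_after = deaths_after / (total_after - deaths_after)

    print(f"Mortality before: {100 * deaths_before / total_before:.2f}%")  # 2.80%
    print(f"Mortality after:  {100 * deaths_after / total_after:.2f}%")    # 6.57%
    print(f"Unadjusted odds ratio: {odds_after / odds_before:.2f}")        # 2.44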

The authors looked at the data first. They surveyed all children who were transferred to their hospital’s Intensive Care Unit from other hospitals within a time span of 18 months, 12 before and 6 after CPOE introduction. Then, they looked for the reasons.

These children were a special case. They needed the correct treatment, fast. Over the years, the hospital ICU team had evolved procedures that enabled them to be as fast as possible. They were as finely tuned as the team changing the wheels on a Formula 1 racing car.

The new system destroyed these processes, because it was slow. Before, doctors would pass quick written notes to nurses, who were always on the lookout for new instructions. Now, it took up to ten clicks to enter a medication order. Low bandwidth then added another delay until the order was transmitted to the pharmacists. Before, everybody was free to help tend to the patient, if needed. Now, one member of staff had to be at the computer, tending to the CPOE system. Before, staff could just grab what they needed to stabilise the patient. Now, everything went through central ordering.

With hindsight, it is easy to criticise the hospital team for what seems to be a rushed introduction of a system that was not ready for prime time. But if you look at the hype surrounding much of telehealth and telemedicine (“Act now! We know it works! You OWE it to your PATIENTS! (And to the taxpayers …)”), it is easy to see how this might have happened.

You will often hear telemedicine and eHealth evangelists say that the world could be so much better and brighter if it weren’t for those pesky practitioners who are clinging on to the old way of doing things.

In this case, the old way of getting medication to very sick children on arrival in the hospital ICU was actually working very well. Speed, and having as many hands as possible on deck, were essential.

The new way, with its ten clicks to achieve a single order, was more suitable for a situation where prescriptions were not urgent, where safety was paramount, and where there were spare staff to focus on data entry.

In short, the new way was not usable.

Usability is far more than “do people like it?”. At the very minimum, per the ISO 9241 definition, a usable system has to do what it is designed to do (effectiveness), and it has to do so at an appropriate speed (efficiency). If the users like it, that’s nice (user satisfaction), but it’s far from the whole story.
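
To illustrate how these three components are often operationalised in practice – this is my own sketch, not something taken from the ISO standard or from the Han paper – effectiveness can be read off as task completion rate, efficiency as time on successfully completed tasks, and satisfaction as a questionnaire score, all computed from simple task logs.

    # Minimal sketch: deriving the three ISO 9241-11 usability components
    # from hypothetical task logs (made-up data for illustration).

    from statistics import mean

    # One record per attempted task: (completed?, seconds taken, satisfaction 1-5)
    task_logs = [
        (True, 42.0, 4), (True, 95.0, 3), (False, 180.0, 2),
        (True, 51.0, 5), (False, 200.0, 1), (True, 60.0, 4),
    ]

    effectiveness = mean(1.0 if done else 0.0 for done, _, _ in task_logs)
    efficiency = mean(secs for done, secs, _ in task_logs if done)  # time on successful tasks
    satisfaction = mean(score for _, _, score in task_logs)

    print(f"Effectiveness (completion rate): {effectiveness:.0%}")   # 67%
    print(f"Efficiency (mean time on task):  {efficiency:.0f} s")    # 62 s
    print(f"Satisfaction (mean rating /5):   {satisfaction:.1f}")    # 3.2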

The key point where the CPOE system that Han and colleagues describe fell down was efficiency, which made it unsuitable for the task.

In theory, CPOE is a great idea, but it has to be usable in practice. Otherwise, it just won’t work.

Han, Y., et al. (2005). Unexpected Increased Mortality After Implementation of a Commercially Sold Computerized Physician Order Entry System. Pediatrics, 116(6), 1506–1512. DOI: 10.1542/peds.2005-1287

May 16, 2015

So, What is Psy Like as a Keynote Speaker?

Posted in Uncategorized at 7:56 pm by mariawolters

At the recent CHI 2015 conference on Human-Computer Interaction in the heart of Gangnam District, Seoul, South Korea, we had a somewhat unexpected keynote speaker: Park Jae-Sang, also known as Psy, probably most famous for a certain song called Gangnam Style.

Why did I say Park Jae-Sang spoke? Because that’s who came. No glasses, no kooky clothes, no props, no silly dance. Just a man in a business suit and a microphone.

Park Jae-Sang is an ambitious man who is very conscious of the image he projects. At the same time, he tries to be himself. His constant jokes about his weight made it clear that he chose to stay his chubby self, despite K-Pop pressures to be slender and beautiful. His constant jokes about his English (which was excellent) showed his insistence on his Korean roots.

The tale he told was not of a Social Media Ninja, but of a shrewd businessman whose associates told him about this YouTube thing – they convinced him, so he decided to explore it.

Then, he explored fame, and what it means to have a global hit. He concluded that Gangnam Style was a one-off, and the way forward for him is to be who he is – Park Jae-Sang, composer, businessman, artist, Korean, uniquely himself.

싸이.

April 29, 2014

Medication Reminders – What is the Minimal Effective Dose of Technology?

Posted in Uncategorized at 2:14 pm by mariawolters

As I type this, I am sitting on a park bench in Toronto’s Roundhouse Park, a sanctuary for old steam and diesel engines in front of the state-of-the-art Metro Toronto Convention Centre.
I’m here with my daughter, who is currently busy repurposing a steam train playground installation as a boat. Behind me, traffic crawls along the Gardiner Expressway which cuts off the Waterfront and its condos and building sites from Downtown.

I am at the 2014 CHI conference on Computer-Human Interaction, which is one of the largest conferences on making technology usable.

On Monday morning, the conference opened with a very thought-provoking keynote by Margaret Atwood on robots, technology, and humans. One of the many points she made was about unexpected perspectives on familiar technologies. Just as my daughter converted a train into a boat, technology is invented for one purpose, but can then serve many others. The true potential of a thing is an unknown unknown, in Donald Rumsfeld’s words. It’s a wide open space, limited only by creativity and serendipity.

At alt.chi (on Tuesday before lunch), I’m going to argue that what helps us remember to take our medications is not shiny new purpose-built apps – rather, we need to delve into the unknowns and be creative, so that remembering medications is as little work as possible.

(The mathematically inclined readers among you can now imagine applying your favourite minimisation technique.)

As Juliet Corbin and Anselm Strauss argued in a series of seminal papers, there are three layers of work associated with illness. First, there’s the illness work proper – taking medications, doing prescribed exercises. Then, there’s everyday work – roughly, getting on with your life while being ill. Finally, there’s biographical work – work on your own identity and values. Not to mention that being ill means that you are drained and, by definition, not able to function at your best.

Illnesses create additional everyday and biographical work. Take people with diabetes. They need to schedule regular checkups with their health care providers, take prescribed pills, and remember to refill their prescriptions on time. They may need to overhaul the way they eat. This can mean spending more time preparing and sourcing foods that won’t aggravate their illness – so more everyday work. Finally, they need to come to terms with their diagnosis. Often, they will need medication for the rest of their lives. They need to cut back radically on cakes and sweets. What’s worse, in public discourse, people with diabetes are often stigmatised as fat slobs who ate themselves sick.

So, assume they forget their pills. Let’s just install a smartphone app, shall we? But what about people who struggle to work their phones (and they’re not all elderly)? What about people who rarely use their phones (again, they’re not all elderly)? There are many reasons why smartphone apps can and will fail – and a common denominator of many of them is that using those apps (indeed, using a smartphone) is too much work.

Work on top of work, while the person who has to do the work is not at their best.

Let’s be honest – how do you remember to take your medication? Do you use technology? If it works for you, great.

But what if it doesn’t?

What if the most effective dose of technology is not one app, but none?

People, we need to talk.

Reference:

1. Corbin, J., & Strauss, A. (1985). Managing chronic illness at home: Three lines of work. Qualitative Sociology, 8(3), 224–247.

Wolters, M. (2014). The Minimally Effective Dose of Reminder Technology. Proceedings of CHI 2014 – alt.chi. DOI: 10.1145/2559206.2578878

March 29, 2013

Archiving without the Clutter – We Need Your Help!

Posted in research at 3:02 pm by mariawolters

Archiving our digital life without clutter is a dream for many of us, as we accumulate a steady stream of data, pictures, books, and videos and have to rely on intelligent searching and serendipity to find what we need.

Can we do better than this? A group of European researchers and companies hopes that we can.

I’m looking to interview people via email as part of an EU project, Forget IT (very preliminary home page), that aims to develop intelligent archiving solutions. The premise of the project is that forgetting, far from being a scourge of humankind, is actually useful, because it allows us to remember what’s relevant and filter out what is not so relevant. When we forget something, that doesn’t mean that all traces of it are wiped from our brain. Old memories can resurface in unexpected moments given the right cue.

One of the scenarios we are looking to explore is managing digital photo collections. With the advent of digital cameras and camera phones, the practice of photography has changed. People take more photos in more situations, but how do they store, archive, and access the mass of images? Can we learn tricks from human memory that allow us to ensure that relevant photos are easy to find, while irrelevant photos can be safely forgotten and eventually deleted?
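
One way to picture this – a toy sketch of my own, not the Forget IT project's actual model – is a memory-inspired relevance score: each photo's score decays over time, but viewing the photo again refreshes it, so frequently revisited photos stay easy to find while untouched ones gradually drift towards the "safe to archive or forget" pile.

    # Toy illustration of memory-inspired "forgetting" for a photo archive.
    # My own sketch; all names and numbers are made up for illustration.

    import math

    def relevance(days_since_last_viewed, times_viewed, half_life_days=90.0):
        """Score decays exponentially since the last viewing; each additional
        viewing slows the decay, mimicking how rehearsal strengthens memories."""
        effective_half_life = half_life_days * (1 + math.log1p(times_viewed))
        return math.exp(-math.log(2) * days_since_last_viewed / effective_half_life)

    # A photo viewed often and recently stays close to 1.0 ...
    print(round(relevance(days_since_last_viewed=10, times_viewed=12), 2))   # ~0.98
    # ... while one untouched for years drifts towards 0 and could be archived.
    print(round(relevance(days_since_last_viewed=900, times_viewed=1), 2))   # ~0.02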

Although there is a lot of scientific research on this topic, some of which I will blog here in the following months, nothing beats hearing from people directly. What should a photo archive that allows intelligent forgetting look like? How would you like to access these photos? What about privacy issues?

If you are interested in taking part in the study, contact me at mariaDOTwoltersATedDOTacDOTuk (replace AT with @, DOT with .). The interview should not take more than an hour of your time all in all. You will be able to respond as and when you like. Participants will receive a “Thank You” eCard for their participation. I am happy to answer any questions about this piece of research either in the comments or via email.

And if you’d like to hear more about the project and the research we’re exploring, watch this space! I may even attempt to explain the difference between back ups and archives.

The Small Print: The interview will take place via my university email account, which is held in the UK. Interview emails will be deleted from my account as soon as the interview is completed. All email text will be fully anonymised and stored on a secure drive on University of Edinburgh servers before being made accessible to other researchers in the project. The only parts of the original email headers that will be kept are the date and time the emails were sent. You are free to end the interview at any time and to withdraw your contributions at any time after the interview has finished.

This study has Ethical Approval from the Psychology Research Ethics Committee, University of Edinburgh, Reference No 145-1213/3. The local principal investigator is Prof Robert Logie, my postdoctoral accomplice is Dr Elaine Niven, and we’re all part of the Human Cognitive Neurosciences group in the School of Philosophy, Psychology, and Language Sciences at the University of Edinburgh. (I’m only partially assimilated; most of my time is still spent at the School of Informatics.)

Edited to fix link and Robert Logie’s first name.

June 27, 2012

Building a Network Means Knowing Who You Talk To

Posted in Uncategorized at 9:33 am by mariawolters

Why do people follow others on Twitter, only to require them to validate with a service like TrueTwit when they wish to follow back?

When I follow somebody, I make that decision based on interacting with them, having a look at their timeline, or finding them in follower lists of people I rate. I will also follow back everybody who follows me who is not a bot and who is not purely a professional social media marketer. (I don’t even care whether I speak their language. I’m a linguist, I love seeing Babel in my timeline.) I’m not online to promote my research or my business, I’m online to build relationships with people. My timeline is a wonderfully diverse crowd, from Tories to Greens, from strict Catholics to adamant atheists.

So if you take a considered decision to follow somebody, then why would you require the new addition to your timeline to confirm that they are not a bot? The only situation I can see is people adding followers automatically through online services. But if you add people automatically, that’s usually a sign that you don’t interact. And broadcasting is not what social media is about. It’s an interaction, as Robert Fondalo, perhaps the only marketer I have ever followed back, explains succinctly in this post.

Looking at somebody’s timeline is also important because this, to me, is the primary context in which tweets need to be interpreted. Tweets are 140-character messages, so much of the meaning and nuance needs to be implied. It’s usually quite clear how to read a tweet after having seen the person tweet for a couple of weeks. Failing to take such context into account is one of the major sources of fights and hissy fits on Twitter. At its worst, it can even lead to a two-and-a-half-year fight with the courts, when a tweet that went out to 600+ followers, a tweet whose author regularly jokes and banters with the people on his timeline, and a tweet for which the contextually appropriate reading is “(bad) joke”, is taken completely out of context.

Social media is not broadcasting. It’s talking to people. Know your audience, and get to know people a little before you follow. It’s that simple.
