March 29, 2013
Archiving our digital life without clutter is a dream for many of us. We accumulate a steady stream of data, pictures, books, and videos, and have to rely on intelligent searching and serendipity to find what we need.
Can we do better than this? A group of European researchers and companies hopes that we can.
I’m looking to interview people via email as part of an EU project, Forget IT (very preliminary home page), that aims to develop intelligent archiving solutions. The premise of the project is that forgetting, far from being a scourge of humankind, is actually useful, because it allows us to remember what’s relevant and filter out what is not so relevant. When we forget something, that doesn’t mean that all traces of it are wiped from our brain. Old memories can resurface in unexpected moments given the right cue.
One of the scenarios we are looking to explore is managing digital photo collections. With the advent of digital cameras and camera phones, the practice of photography has changed. People take more photos in more situations, but how do they store, archive, and access the mass of images? Can we learn tricks from human memory that allow us to ensure that relevant photos are easy to find, while irrelevant photos can be safely forgotten and eventually deleted?
Although there is a lot of scientific research on this topic, some of which I will blog here in the following months, nothing beats hearing from people directly. What should a photo archive that allows intelligent forgetting look like? How would you like to access these photos? What about privacy issues?
If you are interested in taking part in the study, contact me at mariaDOTwoltersATedDOTacDOTuk (replace AT with @, DOT with .). The interview should not take more than an hour of your time in total, and you will be able to respond as and when you like. Participants will receive a “Thank You” eCard for their participation. I am happy to answer any questions about this piece of research either in the comments or via email.
And if you’d like to hear more about the project and the research we’re exploring, watch this space! I may even attempt to explain the difference between backups and archives.
The Small Print: The interview will take place via my university email account, which is held in the UK. Interview emails will be deleted from my account as soon as the interview is completed. All email text will be fully anonymised and stored on a secure drive on University of Edinburgh servers before being made accessible to other researchers in the project. The only parts of the original email headers that will be kept are the date and time the emails were sent. You are free to end the interview at any time, and to withdraw your contributions at any time after the interview has finished.
This study has Ethical Approval from the Psychology Research Ethics Committee, University of Edinburgh, Reference No 145-1213/3. The local principal investigator is Prof Robert Logie, my postdoctoral accomplice is Dr Elaine Niven, and we’re all part of the Human Cognitive Neurosciences group in the School of Philosophy, Psychology, and Language Sciences at the University of Edinburgh. (I’m only partially assimilated; most of my time is still spent at the School of Informatics)
Edited to fix link and Robert Logie’s first name.
June 27, 2012
Why do people follow others on Twitter, only to require them to validate with a service like TrueTwit when they wish to follow back?
When I follow somebody, I make that decision based on interacting with them, having a look at their timeline, or finding them in follower lists of people I rate. I will also follow back everybody who follows me who is not a bot and who is not purely a professional social media marketer. (I don’t even care whether I speak their language. I’m a linguist, I love seeing Babel in my timeline.) I’m not online to promote my research or my business, I’m online to build relationships with people. My timeline is a wonderfully diverse crowd, from Tories to Greens, from strict Catholics to adamant atheists.
So if you take a considered decision to follow somebody, why would you require the new addition to your timeline to confirm that they are not a bot? The only situation where I can see this making sense is when people add followers automatically through online services. But if you add people automatically, that’s usually a sign that you don’t interact. And broadcasting is not what social media is about. It’s an interaction, as Robert Fondalo, perhaps the only marketer I have ever followed back, expresses succinctly in this post.
Looking at somebody’s timeline is also important because this, to me, is the primary context in which tweets need to be interpreted. Tweets are 140-character messages, so much of the meaning and nuance has to be implied. It’s usually quite clear how to read a tweet after you have seen the person tweet for a couple of weeks. Failing to take such context into account is one of the major sources of fights and hissy fits on Twitter. At its worst, it can even lead to a two-and-a-half-year fight with the courts, when a tweet that went out to 600+ followers – a tweet whose author regularly jokes and banters with the people on his timeline, and for which the contextually appropriate reading is “(bad) joke” – is taken completely out of context.
Social media is not broadcasting. It’s talking to people. Know your audience, and get to know people a little before you follow. It’s that simple.
June 9, 2012
Except that it showed no such thing, and the authors acknowledge this openly in the conclusions of their paper.
(Martin Robbins traces the Chinese Whispers succinctly at the Guardian.)
So What Was That Study About?
Many people report that exercise helps them keep the black dog of depression under control. If your depression is so severe that you can’t even get out of bed, exercise coaching does not make much sense, but for people who can still be active, exercise might work.
How do you implement exercise programmes? Do you shackle each participant to a treadmill for thirty minutes a day? Not if you would like people to take up regular exercise for the rest of their lives. What you do instead is provide advice and coaching. There’s an old saying: “If you want to feed a person for a day, give them a fish. If you want to feed them for the rest of their lives, teach them to fish.” This is how the intervention was designed.
What keeps people from exercising? What motivates them to get out and get active? What kind of exercise can people best fit into their own lives? The TREAD coaching programme covered all of this and more. People who went through TREAD coaching were able to work out a plan that allowed them to be more active. Roughly half the people in the study received TREAD coaching in addition to their usual care (i.e., antidepressants, counselling, or other exercise programmes, if available), the other half received usual care.
Did TREAD increase activity levels? It did – even after controlling for antidepressant use, baseline physical activity levels, and depression severity. No matter how severely depressed participants were initially, they got more active, and the increase in activity often lasted for a year after treatment had finished. That’s impressive.
Did this increase in activity help with depression? Not really; the difference between the groups was small. After four and twelve months, people in the usual care group and people who got TREAD coaching got better slowly, at a similar rate. The main measure was the Beck Depression Inventory, which goes from 0 to 63. Scores were 16.1 for the TREAD group versus 16.9 for the usual care group at four months, and 13.0 vs 13.5 at twelve months. A score from 0 to 13 is considered minimal; scores from 14 to 19 point to mild depression.
That could be a fluke due to a badly designed study, right?
How Good is the Study?
The study itself is sound – there’s no doubt about that. The physical activity coaching programme was carefully designed to incorporate effective strategies for changing people’s habits. It was brief and combined self-help elements with one-to-one coaching, which would have made it easy to implement within the NHS. People were followed up for much longer than usual (a year instead of four months). The statistical analysis is sound. TREAD worked very well in getting people to be more active.
For a more detailed, accessible evaluation of the study, see the summary prepared by the ever-brilliant NHS Choices.
So, Does That Mean Exercise Doesn’t Work?
Well, first of all, we are talking about physical activity – whatever that may mean for people. As the researchers found, definitions vary greatly, from getting out of bed to running. Most physical activity was self-reported, because objective measures of how active people are in their daily lives are very difficult to obtain. The variety of activities people engaged in is actually a strength of this study, because different types of exercise suit different people.
The researchers also looked in detail at what physical activity means to people, and explored their experiences of TREAD in an interview study. Some people chose walking, others started rock climbing; some took up gardening, others mentioned running. Many participants in the qualitative study also talked about the way in which activity helped them, and again, this varied a lot. For some, it distracts from negative thoughts; for others, it builds self-esteem.
Why didn’t exercise “work”, then? There are several possible explanations. One is that it may only work for a certain type of person, who can obtain mental benefits from the activity they have chosen. Another is the dose – maybe people need more intense activity more often. Finally, it’s worth emphasising again that TREAD was designed to persuade people to be more active per se – whatever that might mean for each person – not to take up regular sport or vigorous exercise that provides a good aerobic or anaerobic workout.
Don’t Forget – TREAD Itself Works!
In all the excitement about the exercise and depression misreporting, don’t forget one fantastic finding that is bound to be overlooked in the hoo-haa – TREAD itself works. People became more active, and they remained more active after one year. In addition, if there’s anything that will work systematically in the health system, it will be a coaching intervention like TREAD, because there is no one-size-fits-all solution; everybody needs to find out for themselves what works for them.
So, if you’re a GP and want your patients to move more, look at the TREAD material; and if you have depression and would like to try and get more active, have a look, as well. It’s all in this lengthy report – the manual is at the end, but I would also recommend reading the report of the interview study.
The Limits of the RCT Paradigm
The study has raised the ire of many of us with depression who use exercise effectively (we think) to regulate our moods. But maybe it’s not physical activity as such that helps. Rather, activities such as walking, running, rock climbing or weightlifting might be embedded in a whole self-care package of things that work for us. What’s more, in order to exercise regularly, you need to prioritise it, which means that you need to care about yourself and your health. What if the key is to do not just any old activity, but to systematically train one or two activities that reliably shut down your racing thoughts and calm you down?
So let’s step away from the systematic reviews and formal trials for a moment. Let’s go back to the qualitative data, to the interviews and anecdotes, the observations and self-reports, and figure out where exercise sits within self-care, how often people who say exercise helps them are active, and what exactly it is that they do. In short, let’s revisit our hypotheses and look again.
An Anecdote (or Case Study)
I’ve had low moods for most of my life. I was diagnosed with depression at age 35. I was 37 when I discovered the right exercise prescription (weightlifting) and the right dose (2-4 times a week for at least an hour). Weightlifting works for me because
- it is an activity I can do – my gross motor dyspraxia prevents me from playing team sports, because any team that contains me will lose.
- it prevents me from ruminating, which means that the weight room becomes a safe space. This is something that no other physical activity can achieve.
- I can draw on gaining physical strength to replenish mental strength.
- body image, something I have a major problem with, is not something lifters emphasise. What matters is how much you can lift, and the way you look will be a function of how and where you put on muscle.
- if you take lifting seriously, i.e. if you train, you also change other aspects of your life – you make sure to get enough rest and adequate nutrition, so that the musculoskeletal system can respond to the stimulus and grow strength.
Chalder, M., Wiles, N., Campbell, J., Hollinghurst, S., Haase, A., Taylor, A., Fox, K., Costelloe, C., Searle, A., Baxter, H., Winder, R., Wright, C., Turner, K., Calnan, M., Lawlor, D., Peters, T., Sharp, D., Montgomery, A., & Lewis, G. (2012). Facilitated physical activity as a treatment for depressed adults: randomised controlled trial. BMJ, 344, e2758. DOI: 10.1136/bmj.e2758
May 26, 2012
I agree with her. Although I’m a proud geekette, there are many paths to science, and many ways of living as a scientist. You don’t have to conform to anybody’s stereotype; all you need is a passion for finding out how stuff works.
But I would widen that approach to all genders and youth subcultures.
Turned off because boys who are into science are weak dweebs? Look at this geologist, and this computer scientist (double amputee and expert climber).
Think that female scientists are colourless and boring? Look at this computer scientist.
Do you feel disdain for people with a scientific bent because they don’t get the humanities? Look at this feminist and exercise scholar and think again.
Whoever you are, whatever you are, there will be a scientist out there who is just like you.
May 22, 2012
Neuroskeptic pointed out one instance where plagiarism may be forgivable – when authors whose first language is not English copy small passages from papers to put together their literature review.
Now, if authors are clever enough to write an academic paper, their English should be good enough to summarise complex papers in a few words, right?
Wrong. Writing in a foreign language is very difficult, and writing a complex text such as an academic paper in a foreign language is a highly specialised skill. Native speakers of English, most of whom would not be able to write a paper in another language, don’t realise just how difficult this is. My own English is passable – but then, I won a Second Prize in the Federal German Foreign Language Contest (there were several first and several second prizes) with English as my primary competition language, and I have been living in Scotland for over ten years. And I still make mistakes.
Writing skills also need to be maintained. I entered the Contest as a 17-year-old with French as my second language. For my Abitur (A-levels), I wrote a long essay about the writer and philosopher Albert Camus in French – today, I can hardly string a blog post together in the language, even though I can still read and understand Camus just fine. In fact, English is the only foreign language in which I can write papers; I would be utterly out of my league if I had to write in Spanish or French.
Now imagine that you don’t spend a lot of time writing English. All of a sudden, you need to put together a paper in the language. Writing well in one’s mother tongue is hard; finding the right turns of phrase in a foreign language is even harder, especially when there are strict page limits and your field does not have very rigid structures for academic papers. What do you do?
And, most importantly, as a reviewer, how do you help authors who struggle with their writing?
My own strategy represents a trade-off between time required to review the papers I’ve taken on and diligence. I point out major errors, in particular where terms are used in ways that prevent an English speaker from understanding what is meant, but I let most of the small things slide, in particular when my verdict is “revise and resubmit”. I then provide detailed feedback on the resubmission.
Ideally, journals would have mentors or specialised, paid editors that can help people who struggle with writing English; in the absence of such resources, I often recommend that authors have their papers proofread by a native speaker of English. I know that this can come across as condescending, especially if the authors have worked very hard to write an acceptable paper.
So, what can we do to address this problem as a community? Turn a blind eye to small instances of plagiarism? There are a couple of other options that are relatively inexpensive:
- Develop clear language standards, and enforce them when reviewers who are native speakers of English expect literary masterpieces.
- Put together links on field-specific English for Academic Purposes that authors can access.
- Provide guidance on rewording results and findings from papers for the literature review, helping authors negotiate the line between reporting and plagiarism.
- Provide reviewers with the option of submitting annotated PDFs of the paper together with their review – it’s very cumbersome to make a long list of page, line, and paragraph numbers, copy the bad wording, and type out the correction, especially if line numbers do not line up properly with the lines in the text (or when there are no line numbers at all).
What are your suggestions?
May 15, 2012
In circumstances where it can take clinicians and therapists a long time to reach the patients who need them, or where patients need to travel long distances in order to see a specialist, telemedicine comes into its own. Telecardiology or teleradiology allow specialists to receive and assess data from remote locations, giving instant feedback if the communication infrastructure is in place. Thus, telemedicine bridges a gap in knowledge and expertise.
This can be extended to medical education, as Mark Barr from Intel showed at the recent Med-e-Tel conference in Luxembourg. In his presentation, he drew up a knowledge pyramid, where specialists have the highest level of knowledge, followed by generalists, medical nurses, and health workers. Medical education is one way of bridging this knowledge gap.
However, I think that there are really two knowledge pyramids – one of medical knowledge required to help the patient, and one of implementation knowledge required to make sure the patient can get the help they need and implement the required measures. Fitting them both together leads to a continuum where one source of knowledge increases and the other decreases.
For example, if the specialist recommends regular exercise, such as brisk walking for thirty minutes a day, the local health worker can tell people about good routes and point them to local walking groups, if they exist. If the specialist recommends a healthier diet, the health worker can help with suggestions of cheap, nutritious meals, local sources of good ingredients, or cookery classes.
The specialists do not need the local knowledge, just as the health workers don’t need the specialist knowledge, but both ends of the continuum need to work together for best results. A top-down conceptualisation of telemedicine, where education only flows along the medical knowledge path, but not back along the implementation one, is – to my mind at least – deeply flawed.
May 8, 2012
There are many low and high tech solutions for tracking one’s moods and feelings, from the humble notebook to the shiny app, from detailed, free-form diaries to ticking a couple of boxes on a form. Many people track their mood informally using social media, letting their online support network know how they feel through blog posts, tweets, and status updates.
Moodscope is a web application that allows users to log and share their mood. Users rate how they are feeling using 20 mood and emotion adjectives, such as scared, guilty, ashamed, proud, or alert. For each adjective, users specify whether they feel like this very slightly or not at all (0), a little (1), quite a bit (2), or extremely (3). This can be done as often as the user wishes.
The interface design is… unique. For each adjective, the user sees a card. On one side are the numbers 3 and 2, on the other side 0 and 1. The user then turns and spins the card until the number that corresponds to the current intensity of that particular emotion is on top. To log that number, users click on the verbal description. At the end, scores are converted into percentages, where a high percentage reflects good mood and a low percentage bad mood. After the first three recorded values, a summary is added that describes trends verbally and comments on how frequently Moodscope is used. Daily emails with short, motivating texts serve as reminders; these are sent whether or not the user has already completed a mood log that day.
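Moodscope doesn’t publish its exact formula, but the conversion from twenty 0–3 ratings to a single percentage can be sketched along the lines of PANAS-style scales: reverse-score the negative adjectives, sum everything, and express the total as a fraction of the maximum. The adjective lists and equal weighting below are my own assumptions for illustration, not Moodscope’s actual algorithm.

```python
# Hypothetical sketch of a mood percentage from adjective ratings.
# Adjective lists and equal weighting are illustrative assumptions.

POSITIVE = ["proud", "alert", "inspired", "determined", "attentive"]
NEGATIVE = ["scared", "guilty", "ashamed", "upset", "nervous"]

def mood_percentage(ratings):
    """ratings maps each adjective to an intensity from
    0 ('very slightly or not at all') to 3 ('extremely')."""
    # Negative adjectives are reverse-scored: feeling 'scared'
    # not at all (0) contributes the full 3 points towards good mood.
    score = sum(ratings[a] for a in POSITIVE)
    score += sum(3 - ratings[a] for a in NEGATIVE)
    max_score = 3 * (len(POSITIVE) + len(NEGATIVE))
    return round(100 * score / max_score)
```

On this scheme, rating every positive adjective “extremely” and every negative one “not at all” yields 100%, and the reverse yields 0%.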
The social aspect is very discreet. Users can nominate buddies who will be sent their Moodscope scores. Buddies need to confirm that they are willing to receive the email updates. All the buddy sees is the current percentage, but there’s an opportunity to discuss scores privately on the Moodscope web site.
One User’s Experience
I started using Moodscope a few weeks ago; I have only used the free version, and my comments may not apply to the paid version.
What first struck me was the pared-down functionality. You confirm today’s date, and then it’s straight on to rating the twenty adjectives. Operating the rating interface is slow and cumbersome. You need good eyesight to make out the letters against the background (cards are either reddish or blueish), and the writing can be difficult to read. You also need to plan the flips and turns required to get to the number you want to enter. This is a strain, and it definitely deters me from logging my mood more than once a day. I am not sure how well a person would cope who is easily discouraged, or whose mood makes focusing very difficult.
The summary is not very helpful. My logs show large swings, but these mostly reflect the normal variation in my mood throughout the day. Moodscope as it stands misses this zig-zag, because the summary only compares my scores to the all-time maximum, the all-time minimum, and the average. If there is a large dip, the summary suggests that specific events may have happened to change my mood, when this is nothing but my normal fluctuation.
Fortunately, Moodscope also has a great graphic display.
The graphical display of the change in mood over time is very useful – it allows you to zoom in and out, clearly shows the range of mood that has been registered over time, and can be adjusted to show shorter or longer time periods.
An important limitation is that Moodscope – for good reason – does not ask people whether they are thinking about suicide or self-harm. If it did, there would be unpleasant implications. First of all, it would place unreasonable strain on the buddies who receive the reports. What if people indicate that they are ready to kill themselves, but this is not reported to the buddies? And if it is reported, what if the buddies discover the warning after the fact? However, this sensible limitation means that Moodscope can miss a significant improvement in cases where people still feel low, but are no longer suicidal.
The main reason I would keep using Moodscope is the buddy function. It means that I can let others know how I am feeling through the relatively private medium of email. Otherwise, the interface would be too much of an annoyance, as registering one answer can mean up to three clicks, which needs to be done twenty times. There are thousands of applications diligently monitoring all public social media activity. Somehow, mood is a bit more private than that – it’s good that there’s an application which keeps it private.
May 3, 2012
On Twitter, Rebekah Higgit reacted to David Willetts’ speech on Open Access to research results by asking – well, what about arts and humanities?
In this short post, I would like to outline a couple of practices that I am familiar with from linguistics and psychology that do not require publisher action, but that can be implemented today, by any researcher, unless they’ve signed a particularly stifling publication contract.
- Directly linking to copies of their work on their own web page. This is the most basic version.
- Circulating drafts: Plenty of papers do the rounds in draft form before they are even submitted for peer review. This happened to a paper I worked on which ended up being published in the prestigious journal Language. If you look at the Google Scholar citation page, quite a few citations predate the actual publication. These come from circulated previous drafts.
- E-mail request systems: Many groups working in psychology have an ingenious mechanism for getting around publishers’ restrictions. They set up a central email address whereby you can request a paper, and either a bot or a person monitoring that address will send you a version – either a pre-print as submitted to the publisher for typesetting, or the authors’ copy of the typeset version.
- Pure online journals: I’m aware of a few well-regarded online journals in the humanities and social sciences, such as The Qualitative Report and Forum: Qualitative Research.
So, there are a number of ways of enabling Open Access now – let’s hope Willetts’ team manages to negotiate a model that allows more genuine open access for all branches of learning.
May 2, 2012
Last week, journalist Martha Gill published a column on the New Statesman’s Current Account blog about the way in which our own biases affect how we process information. She based her argument not just on her own observations, but on a scientific study.
Unfortunately the actual results of the study are not nearly as neat as Gill would like them to be.
Before we start, let me be clear: studies can be distorted at any step on the way from the researchers’ fertile minds to the page or blog post, with the most common culprit being (misreported) press releases. I don’t know at which point the study Gill cites got twisted beyond its results; my analysis is based purely on the way she reported it.
Gill versus Nyhan/Reifler
> A study published in the journal Political Behaviour shows just how reluctant people are to engage with facts that don’t support their world-view.
Notice that the reference is not given; for fact fans, it’s Nyhan, Brendan and Reifler, Jason (2010). When Corrections Fail: The Persistence of Political Misconceptions. Political Behavior, 32, 303–330. The paper is freely available online, as far as I can see. Ben Goldacre explains much better than I can why this matters: it allows readers to check whether conclusions were reported correctly.
> In the experiment, conducted in 2005, participants were given fake news stories.
There were two experiments, one in 2005 and one in 2006, that consisted of a total of four studies. This is very important – we will see later why.
> These news stories were embedded with false facts: that tax cuts under the Bush administration increased government revenues, that weapons of mass destruction had been found in Iraq and that Bush had banned stem-cell research (he only limited some government funding).
The first experiment, in autumn 2005, looked at correcting people’s impression that Iraq had weapons of mass destruction; the second, in spring 2006, repeated the earlier study and extended it to two more topics, tax cuts and stem cell research. In the first experiment, the story with the “false facts” came from the Associated Press. In the second experiment, half the participants were shown stories that purported to be from the New York Times, a notoriously liberal paper, and half saw exactly the same story, but this time it supposedly came from Fox News, a notoriously conservative news source.
> After each statement, the researchers put in an unambiguous correction – and then tested the participants to see if they picked this up.
The supposedly unambiguous correction was in fact a paragraph in the same story that reported findings from a relatively objective source which contradicted the key statement in the first paragraph. So what we have here is not statement/counterstatement, but rather a classic “he said/she said” structure, where journalists present both views.
Here’s the original text from Experiment 1, together with the correction.
> Wilkes-Barre, PA, October 7, 2004 (AP)—President Bush delivered a hard-hitting speech here today that made his strategy for the remainder of the campaign crystal clear: a rousing, no-retreat defense of the Iraq war. Bush maintained Wednesday that the war in Iraq was the right thing to do and that Iraq stood out as a place where terrorists might get weapons of mass destruction. ‘‘There was a risk, a real risk, that Saddam Hussein would pass weapons or materials or information to terrorist networks, and in the world after September the 11th, that was a risk we could not afford to take,’’ Bush said.
>
> While Bush was making campaign stops in Pennsylvania, the Central Intelligence Agency released a report that concludes that Saddam Hussein did not possess stockpiles of illicit weapons at the time of the U.S. invasion in March 2003, nor was any program to produce them under way at the time. The report, authored by Charles Duelfer, who advises the director of central intelligence on Iraqi weapons, says Saddam made a decision sometime in the 1990s to destroy known stockpiles of chemical weapons. Duelfer also said that inspectors destroyed the nuclear program sometime after 1991.
>
> The President travels to Ohio tomorrow for more campaign stops.
(Nyhan & Reifler, 2010, p. 324f.)
Can you see how easy it is to reframe this as a piece that just presents two different points of view? Your evaluation of the correction will depend largely on your view of the CIA, Big Government, and bureaucrats who write reports.
> They didn’t. Participants who identified themselves as liberal ignored the correction on stem-cell regulations and continued to believe Bush had issued a total ban. Conservatives not only ignored the corrections on Iraq and the tax cuts but clung even more tenaciously to the false information. Facts had made things even worse.
Well, what actually happened?
First of all, this was never about people actually changing their opinion; the researchers are clear that this is future work. Instead, this was a between-subjects design. Half the participants read the story with the correction before they answered the question, half read the story without the correction.
The question participants answered was designed to measure shift in opinion:
Question to participants:
Immediately before the U.S. invasion, Iraq had an active weapons of mass destruction program, the ability to produce these weapons, and large stockpiles of WMD, but Saddam Hussein was able to hide or destroy these weapons right before U.S. forces arrived.
- Strongly disagree 
- Somewhat disagree 
- Neither agree nor disagree 
- Somewhat agree 
- Strongly agree 
(Nyhan & Reifler, 2010, p. 325)
The authors looked at four predictors of opinion: whether participants had seen the correction, their ideology (on a scale from liberal to conservative), how much they knew about politics, and whether their ideology affected their reaction to the correction. When they used all of these predictors to model participants’ answers to the question, by far the largest effect was that of political knowledge. In the first study, they also found a clear effect of ideology. The correction was more likely to work the more liberal participants were; it backfired for conservatives. Conservatives who read the corrected text were more likely to believe that Iraq indeed had weapons of mass destruction than conservatives who didn’t.
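The structure of such a model is easy to sketch: the “backfire” question is simply whether the interaction term between correction and ideology is positive. The following is not the authors’ exact specification (they fitted more sophisticated models to survey data); it is a toy least-squares fit on synthetic data, with all variable names, scales, and effect sizes invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Invented participants: saw the correction (0/1), ideology from
# liberal (-3) to conservative (+3), political knowledge (0-10).
correction = rng.integers(0, 2, n).astype(float)
ideology = rng.integers(-3, 4, n).astype(float)
knowledge = rng.integers(0, 11, n).astype(float)

# Invented data-generating process: the correction reduces agreement
# with the misperception for liberals but backfires for conservatives,
# i.e. a positive correction x ideology interaction.
agreement = (3.0 + 0.2 * ideology - 0.1 * knowledge
             + correction * (-0.4 + 0.3 * ideology)
             + rng.normal(0.0, 0.5, n))

# Design matrix: intercept, three main effects, and the interaction.
X = np.column_stack([np.ones(n), correction, ideology, knowledge,
                     correction * ideology])
beta, *_ = np.linalg.lstsq(X, agreement, rcond=None)

for name, b in zip(["intercept", "correction", "ideology",
                    "knowledge", "correction x ideology"], beta):
    print(f"{name:22s} {b:+.2f}")
```

The fitted interaction coefficient comes out close to the 0.3 we baked in: the correction helps at the liberal end of the scale and hurts at the conservative end, which is the pattern Nyhan and Reifler describe.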
In the second experiment in Spring 2006, where they looked at the Weapons of Mass Destruction issue again, that backfire effect for conservatives was gone. The only way they could replicate it was to look at a small subset of their participants who had stated that Iraq was most important for them, but that’s a post-hoc analysis (in other words, fishing for results). There are many possible explanations for this, but the authors argue that the main reason was the change in public opinion between Autumn 2005 and Spring 2006. So, far from showing that journalists are powerless, the results actually suggest that persistent corrections across a range of media might be far more powerful than a single story in a point/counterpoint scenario.
When the researchers used a text about tax cuts, however, they saw the same “backfire effect” in conservatives that they had observed in the Autumn 2005 study. In effect, strong conservatives who read the text with the correction were more likely to believe the incorrect fact than strong conservatives who didn’t see the correction. When Nyhan and Reifler looked at liberals’ reactions to a text about stem-cell regulations, there was no such backfire effect: liberals tended to ignore the correction, but they didn’t show an increased conviction that the opposite was true.
What about people who are neither strongly liberal nor strongly conservative? Well, in all four of the studies Nyhan and Reifler conducted, the effect of ideology was gradual, i.e., centrists were far less likely to be influenced by ideology than people with either strong liberal or strong conservative views.
(By the way, all participants in this study were undergraduates at a Catholic university; the researchers say that their study needs to be repeated with a more representative sample of the population.)
So, What Are We To Make Of This?
People’s perceptions can change, but they don’t change based on reading a single contradictory story, even if it comes from supposedly trustworthy sources. (In fact, in the second experiment, the purported source of the story, New York Times versus Fox News, made no difference to the results at all.)
What we need to do is understand why people persist in their beliefs despite contrary evidence. In particular, we need to look at
- how much people already know (which explained a lot of the variation in the data)
- how persistent their beliefs are
- what motivates their reasoning
So what can journalists do? Quite a bit, as it turns out.
First of all, once a belief has been formed, it tends to persevere. So, those journalists and news media who break stories have a unique opportunity to affect people’s beliefs about the issues they report, because first impressions are likely to stick (cf. Ross and Lepper’s (1980) work on belief perseverance, cited after Nyhan & Reifler 2010).
Secondly, and most importantly, if misinformation persists, keep correcting it, and seek as many allies as possible to spread the correct information. People who are “confronted with information of sufficient quantity or clarity… should eventually acquiesce to a preference-inconsistent conclusion.” (Ditto & Lopez, 1992, p. 570).
So if you are a journalist or an activist, and Gill’s column discouraged you, take heart. Change happens, but it takes patience and persistence. Keep going!
Ditto, P. H., & Lopez, D. F. (1992). Motivated skepticism: use of differential decision criteria for preferred and nonpreferred conclusions. Journal of Personality and Social Psychology, 63(4), 568–584.
Ross, L., & Lepper, M. R. (1980). The perseverance of beliefs: Empirical and normative considerations. In R. A. Shweder (Ed.), Fallible judgment in behavioral research: New directions for methodology of social and behavioral science (Vol. 4, pp. 17–36). San Francisco: Jossey-Bass.
April 19, 2012
Both Jo Brodie and Alice Bell have questioned the wider implications of the “Academic Spring” of open access research papers. Providing easy access to research papers is all well and good, but what about helping people make sense of them?
Jo raises an issue that is particularly important: patients and informal carers researching their own condition on the internet. How are they going to cope with medical journal articles, which are written to tight word counts using rigid structures and highly technical vocabulary? Even doctors need training in those skills!
Alice and Jo suggest that journal papers themselves should be written in a more accessible style. While that would be ideal, there is a fundamental problem with this from a communication-theoretic perspective. Text structures, conventions of language use, and technical vocabulary emerge because they allow people who share similar background knowledge to communicate efficiently. Sometimes, the most efficient code is graphical; sometimes, it is mathematical.
To be clear, I am not defending badly written papers. Good, clear, engaging writing is still very important, as anybody who has reviewed a badly written paper at 10pm in a desperate bid to make a review deadline will be able to attest. Good writers use technical terms to make their point succinctly, to indicate their theoretical or methodological background, and to link into a wider body of work on that area.
While it is often possible to explain relevant concepts to lay people, this requires changes to the structure of the text, to ensure concepts are properly defined and anchored before they are used. Another problem is making a judgement call about the level of detail that needs to be explained. Let’s take significance testing. How do we communicate what a p-value of 0.05 actually means in practice? And are p-values the right thing to report? What other background should be given?
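To illustrate the kind of background that needs unpacking: a significance threshold of p < 0.05 means that, even when there is no real effect at all, roughly one test in twenty will still come out “significant” purely by chance. A quick simulation makes this concrete (an illustrative sketch only, using a simple two-sided z-test with known variance):

```python
# Illustrative only: under a true null hypothesis (both groups drawn from
# the same distribution), about 5% of tests are "significant" at p < 0.05.
import math
import random

def z_test_p(sample_a, sample_b, sigma=1.0):
    """Two-sided p-value for a difference in means, known sigma (z-test)."""
    n = len(sample_a)
    z = (sum(sample_a) / n - sum(sample_b) / n) / (sigma * math.sqrt(2.0 / n))
    # p-value from the standard normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

random.seed(42)
trials = 2000
false_positives = 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]  # same distribution: no real effect
    if z_test_p(a, b) < 0.05:
        false_positives += 1

rate = false_positives / trials
print(rate)  # roughly 0.05
```

Explaining even this much, let alone the follow-up questions about what should be reported instead, is exactly the judgement call a lay-facing rewrite of a paper would have to make over and over.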
Fortunately, there is a way out of this conundrum which should be familiar to most researchers who have written or reported on a research grant – the lay summary. Lay summaries are not an easy way out. In order to write a good summary, you need to boil down your research to its essential goal and its main results. This requires a deep understanding of your own results. What have we done? What does it actually mean? What more do we need to know?
Traditionally, abstracts and keywords were deemed to be enough to help readers situate the research reported in a paper. Many journals go even further than this. Elsevier journals encourage graphical abstracts and one-sentence research highlights. Many medical journals require a box that summarises the main contribution of the paper to the state-of-the-art; some even use prompts to help authors structure such a text. Some journals have (or had) English abstracts translated into Spanish, French, or German, to help researchers for whom English is not their native language.
From these traditions and innovations, it is but a small step to short summaries written specifically for a non-specialist audience. Funders have shown the way in developing a standard structure, issuing writing guidelines, and providing examples. Unfortunately, what is still missing is a culture of good editing, which costs time and money, but is crucial, especially for lay summaries. The SPARC initiative led the way here, with professionally crafted texts (here’s an example of a project on auditory reminders that I led), but so far, none of the funders I have written summaries for have followed that shining example.
But who then helps non-specialist readers to critically assess what they are reading? Here is where science blogging comes into its own and where science communicators like Ben Goldacre, Petra Boynton, Ed Yong and Kate Clancy shine, to name but a tiny handful. Bewildered by the sheer variety of science blogs? Try Science Seeker or Research Blogging; from there, go to the big and small networks. By now, there is a critical mass of science bloggers who have been honing their craft for years. Add to that well-crafted lay summaries, and we might be onto a science communication winner.
Edited to add (April 20): Stuart Cantrill alerted me to the fact that graphical abstracts have a long history in chemistry. Here is a link to an editorial on graphical abstracts that he has written for Nature Chemistry (requires registration).