September 18, 2010
I made the mistake of reading the comment thread of Ben Goldacre’s recent Grauniad column – I was scanning for David Colquhoun’s contribution – when I happened on this gem, posted by munci76 @ 4.17pm, 18/9/10
I used to work for a medical comms agency and medical writing was a big part of our business (I believe it still is).
I think you’d be surprised to learn that pretty much all papers in scientific journals are substantially written by agency staff, and as Ben says, the named authors only see the articles when they are at the latter stages. They usually get two edits.
We’d also write all the posters and powerpoint presentations for leading doctors (or Key Opinion Leaders, as we called them) when they presented at symposia at medical conferences. These were clearly sponsored by pharma companies, so it was more upfront.
And they do get paid. We called them ‘honoriaria’. They were cheques paid to the authors’ departments, rather than to them directly. A symposium appearance usually paid about a grand in my experience, though this would obviously vary.
I’m not sure what the payment arrangements were for journal articles, I didn’t really get so involved in that side of things.
There are two fallacies in this comment which riled me sufficiently to reactivate my Grauniad login. This is an expanded version of the reply I left there.
The first fallacy is something that irks me about a lot of the discussion of “science”, and that is equating “science” with the tiny little corner of knowledge procurement that one specialises in. Munci76 probably didn’t want to smear all of computer science, geology, palaeontology, zoology (insert your favourite science here), but s/he might well be quote-mined that way.
The second fallacy is assuming that the whole is like the parts. If YOU and YOUR agency get plenty of business writing ghosted articles for SOME of the THOUSANDS of medical journals out there, this does NOT mean that ALL people do it.
I would even bet that there are many, many subfields of medical science out there where what Munci76 describes is the exception rather than the norm. For example, in the field of speech therapy, which I am somewhat familiar with, there are no pills, and there’s no money to be made there. Sure, there are people who promote certain analysis methods or therapy approaches, but firstly, people in the field know who they are and what their biases are likely to be, and secondly, they have graduate students and postdocs who do the writing for them.
Don’t get me wrong – I don’t doubt that the practices Munci76 describes exist. And I don’t doubt that they are very expressly frowned upon by the International Committee of Medical Journal Editors (ICMJE) or the IEEE, one of the main engineering societies. Some journals explicitly require all authors to sign a statement that they have read the paper they are coauthors on, agree with the content, and fulfil the criteria of authorship. Such precautions and guidelines are an explicit reaction to the practices alluded to by e.g. @schroedinger99 on Twitter, of senior scientists putting their names on papers that they have never read.
For what it’s worth, the post of a “medical writer with authorship” that is described a bit further down in the comments on Ben’s article is probably a reaction to the ICMJE guidelines – this would be somebody who helps with the hard graft of paper writing and production, and who now by convention needs to be named as an author.
What I DO doubt is that paying academics to front ghosted papers is pervasive in all of medical writing, much less in all of science writing.
Not all of medicine is pharma, not all pharma is corrupt, not all therapy is pills, and not all science is biomedical.
September 13, 2010
There have been several discussions in the blogosphere that I’ve been itching to contribute to from my perspective as a researcher into human-computer interaction. I’ll start with something that’s immediately relevant to papers I am currently writing, the discussion about upstream and downstream science communication.
Alice Bell recently argued that scientists should engage with the public while doing science, not after the results are in and have been written up. She drew some critical comments. Most of her commenters argued that it makes sense for scientists to determine their own research programme, and that it would be wrong to have our research agendas dictated entirely by the public. This was echoed by other bloggers such as Neuromancy.
Now, I don’t think upstream science communication is a Good Thing for All Science. (This, incidentally, is something that deeply irritates me about the corners of the Science blogosphere that I lurk in – the notion that Science is this Monolith with One True Method to Rule Them All. But I digress – that would be another post.) But it is vital for the part of computer science that I specialise in, Human-Computer Interaction.
In Human-Computer Interaction, we’re at a fascinating crossroads between basic research, which gives us the tools to build better interfaces, and working with users of computer systems, who are uniquely placed to show and tell us what is going wrong with the interfaces they have. A large part of research in Human-Computer Interaction revolves precisely around upstream user engagement. A lady in her seventies with a hearing aid tells me that she’d like to get a call when she’s forgotten to take her medication? That means I need to find a way to design speech messages that are easy to understand for people who are hard of hearing, even when those messages are transmitted over the worst that BT landlines have to offer. (And, as many of you will know, that can be really, truly, bad.)
So now I’ve engaged with end users to define part of my research agenda. (The other part is defined based on the science required to address the problem, with some basic research snuck in at the back door. Yes, one has to be crafty like that, especially in the current economic climate.) That’s me done with Joe Public until dissemination, right?
Not so fast.
The next step could be testing a few algorithms and message designs to see what works best. And who better to test them on than – you guessed it – the potential end users, older people. After they’ve tried some of our designs, we are going to ask them about their views of spoken reminders. Do they like them? If yes, why; if not, why not? What should they sound like? Basically, having experienced what state-of-the-art technology sounds like, people are then invited to comment on it. This is science communication “mid-stream”, if you like, but restricted to those members of the public who volunteered to help with the study.
Or the next step could be designing a questionnaire, such as the one available via the MultiMemoHome web site, with plenty of open questions where people can share their views. We are currently writing up a snapshot of these results for publication, but the questionnaires are still open, as is the feedback form. The results of the questionnaire (and other user studies we’ve been doing) will feed right into the design and analysis of the more formal experiments we are preparing to do this year. Again, this is “mid-stream” communication, but this time with a hopefully wider reach.
Of course, the formal experiments will also include a qualitative component consisting of short post-study interviews that will be transcribed and formally analysed.
And on and on it goes, in an iterative cycle where lay feedback informs and shapes – but does not completely determine – what Jane Researcher does next.
Until we’re done with the project, and the next cycle begins.