
STICKY WICKET

Replicant II – Tears in the rain

Mole

I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhauser gate. All those moments will be lost in time… like tears in rain…

And then Batty says, “Time to die” and he dies. In the pouring rain. Oh man. For those of you just joining us, I’ve been watching Blade Runner, again. And we’ve been talking about replicants. Not the artificial humans in the movie, but published results that can’t be replicated. Replicants. Get it?

There is a growing perception that many, or even most, of the claims made in the literature cannot be replicated. While this isn’t a new problem (the Greeks complained about this, I think, or they should have – I mean, putting golden crowns in bathtubs isn’t something you can try to repeat in your hut), the tried and true approach of letting the community work things out over time just isn’t fast enough in our modern, I-need-it-yesterday world. We need to know, up front, what is worth putting our time into and what isn’t, so that we can move forward, and now.

Correspondence for Mole and his friends can be sent to [email protected], and may be published in forthcoming issues.

An occasional column in which Mole and other characters share their views on various aspects of life-science research.

© 2014. Published by The Company of Biologists Ltd | Journal of Cell Science (2014) 127, 2123–2125 doi:10.1242/jcs.155598



Very serious people have been thinking about this seriously. Seriously they have. Does that sound a bit frivolous? Perhaps it should. In general, whenever I’ve noticed an old problem (such as this) suddenly coming to the fore as urgent, there is a motivation that boils down to cold, hard cash behind it. And that’s exactly what’s going on here: as our governments embrace austerity (so that we’ll have enough money to bail out the ultra-rich the next time they fumble the ball, again), the amount of money available for research is shrinking (or at least not growing). So of course we point out how, well, stupid this is, since modern, first world economies are largely fueled by scientific discoveries. This leads to push-back, noting that, well, no, modern, first world economies are largely fueled by moving money (in the form of electrons) around, and besides, the ‘science’ we do doesn’t lead to discoveries anyway, because the findings are mostly replicants.

Maybe I’m being paranoid. (That said, having grown up in the days of hippie-dom, I am reminded that just because you’re paranoid doesn’t mean that they’re not out to get you. Illegitimi non carborundum!) But it may be a useful exercise to examine the quite serious recommendations of these serious folks, as some of them are pretty reasonable. But as we will see, some are not. Let’s look at them one by one.

1. We need to train our people better. See, right off the bat I completely agree with this. Once upon a time, some, or even most, of the students who entered a Ph.D. program did not come out the other side as Ph.D.s. And once upon a time this was viewed as a good thing, because there weren’t very many jobs for Ph.D. scientists and those who were allowed to continue were those deemed most likely to succeed. But there are a lot of reasons why this practice hasn’t continued. One of them is that programs competed for money and a metric was needed to determine which ones were most successful, and of course the simplest metric was the number of entering students who completed the program. So naturally everyone had to succeed. But more than that, the whole reason to have students was to have hands in the lab, and spending time training and testing students is now considered more or less a waste of invaluable bench time. An important reason for this comes next. Personally, I think it is essential to drill into trainees the idea that what they publish has to stand the tests of time if they want to ultimately be successful. If they have doubts, the work should not be published. Yeh, I know, good luck with that. So while I agree that we need to train people much more rigorously, the system itself would have to change. This is because:

2. We need to remove the rewards for publishing results regardless of their validity. OMG, I’m agreeing again! That almost never happens. Yes, it would be lovely if we could find a way to do just that, reward work that is validated and moves the field forward, and not just anything that makes its way into the literature. But how do we do this? The approach we have generally taken is to use a metric: How often has a finding been cited? If it is cited a lot, it is important. And if it has been cited a lot, it may even be a ‘landmark’. As we saw last time, though, many ‘landmark’ findings of this sort apparently cannot be replicated – they are replicants – and this invalidates the entire approach. So we should hold off on the decision of the validity of a finding until it has been deemed useful, right? And how do we do this? One idea is:

3. We need to make it easier to publish negative results. Holey Moley! Again I agree! This could turn out to be a record. Well, I agree for the most part. There are really two sorts of negative result. The first sort is a controlled negative result. It may not be absolutely, positively, negative, but it is a result that convincingly shows that the conclusions of a study are not validly supported by another study. We really should know about such results. For example, a clinical trial is conducted that shows that a particular drug has no efficacy in a patient population – often a company that conducts such a study feels no obligation to make this finding public (I suppose it could hurt their stock value, or they might be afraid that it would). It would be pretty easy, though, for regulatory agencies to insist that results of all trials be made public as a condition of filing with the agency in the first place, and this would be invaluable for other researchers (corporate or academic) in furthering the efforts. Or as another example: attempts by another lab to rigorously test the conclusions of a study produce clear-cut results that cast doubt on the validity of a result. Generally, journals find such studies, however rigorous, not very interesting (indeed, the journals that published the original finding generally feel that to publish the counter study would cast them in a bad light, and often demur). Sure, we can say that the journal is obligated to publish solid evidence that another paper’s conclusions in the same journal may be wrong, but journals don’t actually have to do anything we say. So, in general, negative results are either not published, or are relegated to much lower impact journals. But there really is something we can do about this: cite them. The more we cite counter-examples in our own papers (even focusing our citation on the one we think is the right one), the more the value of publishing negative results will rise, and such increased citation will increase the demand for such publications. So rather than saying that Bee et al. say that it’s so but Wasp et al. say it isn’t, we could just say that Wasp et al. tested an idea but found it lacking (why give Bee et al. credit for saying something we think is wrong?). All useful to think about. But as serious minds have noted, we just don’t have time to check out every little thing before we proceed to develop the research further. And we certainly cannot hold up promotions and grants and publications (sorry, stop, reverse that, as Willy Wonka would say) awaiting validation of important results. While it is a lovely idea to untether decisions regarding scientific achievement and its rewards from the status of the publication, we have no such mechanism. So, the pundits suggest:

4. We need to devise ways to punish publication of invalid results. Oh boy. I can see how this might be appealing to some. Currently, the system rewards publication itself, with no penalty for publishing something that is simply wrong. If, on the other hand, publishing something that others cannot replicate carried with it certain penalties (loss of funding, loss of prestige, being forced to wear a big loser ‘L’ on your head), we would make super-sure that what we published was right, right, right. Right? To some extent, many of us already link our personal value as scientists to the notion that what we report is correct, and we’ll do anything to show others that we have it right. Whole careers have been built on working to support an idea, taking years to convince the community of its validity. When Darwin reported his observations that supported the concept of natural selection, he was assailed with counter-examples that might discredit the idea. Arguably (and this is an argument made, not by me, but by Stephen Jay Gould), Darwin spent the rest of his career building the case for natural selection as the basis for the diversity of life. So, I put it to you, esteemed reader (sorry, when I think of such things, I go all nineteenth century), should we have cut Darwin off at the first sign that his idea might not be correct? Yeh, that would have been bitchin’. (Okay, back to the twenty-first century – way better.) Clearly, the problem here is this: who gets to decide that something is correct, or not? Which leads to:

5. Take this out of the hands of the scientific community, who routinely publish replicants. Now we’re getting seriously serious. Indeed, this has not only been suggested, but pushed – there should be impartial groups that have the mandate to test important findings to determine if they can be replicated. Perhaps these can even be companies that do this for a living. Surprise! At least one such company already exists – and bigger surprise!! – the company’s CEO is one of the people pushing for such validation as a requirement for obtaining future funds. How could anything go wrong? I really, really hope that you find this idea as utterly awful as I do. If you don’t, let’s talk about it more. If you do, well, stick with me anyway, because you already agree and I need all the help I can get.

Fortunately, a proposed plan to require independent validation of all preliminary results as a prerequisite for government support of research died an early death. But the idea persists. And this is, in part, because we just don’t know what to do. Hey, this is Mole here. Of course, I have some more ideas. But my reason for wanting to do something is not based on the need to rid the literature of things I don’t agree with. It’s based on what Batty said. Because if we cannot publish what we have seen, even if someone else hasn’t seen the same thing, then something that may be genuinely valuable may simply be gone. Lost in time. Like tears in the rain. And I don’t want to see that happen.

Oh, now I’ve gotten all weepy again. I’m going to go watch Who Framed Roger Rabbit.
