Handling the Literature Prof Carole Goble [email protected] COMP80122 28 January 2015.
New models for evaluation for research and researchers Beyond the PDF 2 Panel 19-20 March 2013...
New models for evaluation for research and researchers
Beyond the PDF 2 Panel, 19-20 March 2013
Carole Goble
Research value
Gate-keep
Rank
Impact
Why evaluate proposed research?
Novel?
Valid & reliable?
Useful?
Defend
Review, Test, Verify
Transfer
Contribution
Why evaluate published research?
Repeatable?
Reproducible?
Novel?
Reusable?
Good?
Comparable?
47 of 53 “landmark” publications could not be replicated
Inadequate cell lines and animal models
Nature, 483, 2012
http://www.reuters.com/article/2012/03/28/us-science-cancer-idUSBRE82R12P20120328
Preparing for & Supporting
Reproducibility is HARD
“Blue Collar” Burden
Constraints
Stealth & Graft
José Enrique Ruiz (IAA-CSIC)
Galaxy Luminosity Profiling
35 different kinds of annotations; 5 Main Workflows, 14 Nested Workflows, 25 Scripts, 11 Configuration files, 10 Software dependencies, 1 Web Service; Dataset: 90 galaxies observed in 3 bands
Contribution Activity – Review Correlation
Accountability & Coercion
What are we
evaluating?
• Article?
• Interpretation? Argument? Ideas?
• Instrument? Cell lines? Antibodies?
• Data? Software? Method?
• Metadata?
• Access? Availability?
• Blog? Review?
• Citizenship?
10 January 2013 | Vol 493 | Nature | 159
• Recognise contribution – to the whole of scholarly life
• Track/Measure Quality & Impact – ideas, results, funds, value for money, dissemination
• Discriminate: rank and filter – individual, institution, country
• Country Club -> Sweatshop
Why evaluate researchers?
Reputation
Productivity
How do we evaluate?
Peer review
Best effort: Re-produce / Re-peat / Re-*
Rigour
Popularity contests
Rigour vs Relevance
Panelists + Roles
Carole Goble (Manchester): Chair
Steve Pettifer (Utopia): Academic, S/w innovator
Scott Edmunds (GigaScience): Publisher
Jan Reichelt (Mendeley): New Scholarship vendor
Christine Borgman (UCLA): Digital librarian/Scholar
Victoria Stodden (Columbia): Policymaker, funder
Phil Bourne (PLoS, UCSD): Institution dean
All are researchers and reviewers
Disclaimer
The views presented may not be those genuinely held by the person espousing them.
Panel Question
What evaluation means to you
What evaluation would be effective and fair
What responsibility do you bear?
Notes
We didn’t have to use any of the following slides, as the audience asked all the questions or the chair prompted them.
Reproduce mandate
Infrastructure
Another panelist
Qualitative and Quantitative
Faculty promotion
Right time
Convince policy makers
Who
Johan Bollen
$10K Challenge
Open Solves It
Conflicting Evaluation
A Funding Council / Top Journal decrees (without additional resources) all research objects published must be “reproducible”.
How? Is it possible? Necessary? How do we “evaluate” reproducibility?
Preparing data sets. Time.
Wet science, Observation Science, Computational (Data) Science, Social Science.
In a new promotion review, researchers have to show that at least one of their research objects has been used by someone else.
Maybe cited. Preferably Used.
How will you help?
Do we have the technical infrastructure to reproduce research?
Is research platform linked to communication platform?
Or the incentives?
What is the one thing someone else on the panel could do to support a new model of evaluation?
And the one thing they should stop doing?
Should research be evaluated on rigour, reproducibility, discoverability or popularity?
Qualitative and Quantitative
When is the right time to evaluate research?
During execution? At peer review time? 5 years later?
Should we bother to evaluate “grey” scholarship?
• What will convince the policy makers / funders / publishers to widen their focus from the impact factor to other researcher metrics and other scholarly units?
• How will the digital librarian / academic convince them?
Who should evaluate research?
And who should not?
• Johan Bollen, Indiana University, suggests in a new study of NSF-funded research that we might as well abandon grant peer review and simply give everyone a budget, with the provision that recipients must contribute some of their budget to someone they nominate.
• Why don’t we do that?
If you had $10K what would you spend it on?
Make Everything Open.
That solves evaluation, right?
Joined up evaluation
across the scholarly lifecycle?
or Conflict?
Strategy vs Operation