Page 1

New models for evaluation for research and researchers

Beyond the PDF 2 Panel, 19-20 March 2013

Carole Goble

Page 2

Research value

Gate-keep

Rank

Impact

Why evaluate proposed research?

Novel?

Valid & reliable?

Useful?

Page 3

Defend

Review, Test, Verify

Transfer

Contribution

Why evaluate published research?

Repeatable?

Reproducible?

Novel?

Reusable?

Good?

Comparable?

Page 4

47 of 53 “landmark” publications could not be replicated

Inadequate cell lines and animal models

Nature, 483, 2012

http://www.reuters.com/article/2012/03/28/us-science-cancer-idUSBRE82R12P20120328

Preparing for & supporting reproducibility is HARD

“Blue Collar” Burden, Constraints

Stealth & Graft

Page 5

José Enrique Ruiz (IAA-CSIC)

Galaxy Luminosity Profiling

35 different kinds of annotations; 5 Main Workflows, 14 Nested Workflows, 25 Scripts, 11 Configuration files, 10 Software dependencies, 1 Web Service. Dataset: 90 galaxies observed in 3 bands.

Page 6

Contribution Activity – Review Correlation

Accountability & Coercion

Page 7

What are we evaluating?

• Article?

• Interpretation? Argument? Ideas?

• Instrument? Cell lines? Antibodies?

• Data? Software? Method?

• Metadata?

• Access? Availability?

• Blog? Review?

• Citizenship?

10 January 2013 | Vol 493 | Nature | 159

Page 8

• Recognise contribution – to the whole of scholarly life

• Track/Measure Quality & Impact – Ideas, Results, Funds, Value for money, Dissemination

• Discriminate: Rank and filter – Individual, Institution, Country

• Country Club -> Sweatshop

Why evaluate researchers?

Reputation

Productivity

Page 9

How do we evaluate?

Peer review

Best effort

Re-produce / Re-peat / Re-*

Rigour

Popularity contests

Rigour vs Relevance

Page 10

Panelists + Roles

Carole Goble (Manchester): Chair

Steve Pettifer (Utopia): Academic, software innovator

Scott Edmunds (GigaScience): Publisher

Jan Reichelt (Mendeley): New Scholarship vendor

Christine Borgman (UCLA): Digital librarian/Scholar

Victoria Stodden (Columbia): Policymaker, funder

Phil Bourne (PLoS, UCSD): Institution deans

All are researchers and reviewers

Page 11

Disclaimer

The views presented may not be those genuinely held by the person espousing them.

Page 12

Panel Questions

What does evaluation mean to you?

What evaluation would be effective and fair?

What responsibility do you bear?

Page 13

Notes

We didn’t have to use any of the following slides, as the audience asked all the questions or the chair prompted them.

Page 14

Reproduce mandate

Infrastructure

Another panelist

Qualitative and Quantitative

Faculty promotion

Right time

Convince policy makers

Who

Johan Bollen

$10K Challenge

Open Solves It

Conflicting Evaluation

Page 15

A Funding Council / Top Journal decrees (without additional resources) that all published research objects must be “reproducible”.

How? Is it possible? Necessary? How do we “evaluate” reproducibility?

Preparing data sets. Time.

Wet science, Observation Science, Computational (Data) Science, Social Science.

Page 16

In a new promotion review, researchers have to show that at least one of their research objects has been used by someone else.

Maybe cited. Preferably used.

How will you help?

Page 17

Do we have the technical infrastructure to reproduce research?

Is the research platform linked to the communication platform?

Or the incentives?

Page 18

What is the one thing someone else on the panel could do to support a new model of evaluation?

And the one thing they should stop doing?

Page 19

Should research be evaluated on rigour, reproducibility, discoverability or popularity?

Qualitative and Quantitative

Page 20

When is the right time to evaluate research?

During execution? Peer review time? 5 years later?

Should we bother to evaluate “grey” scholarship?

Page 21

• What will convince the policy makers / funders / publishers to widen their focus from the impact factor to other researcher metrics and other scholarly units?

• How will the digital librarian / academic convince them?

Page 22

Who should evaluate research?

And who should not?

Page 23

• Johan Bollen (Indiana University) suggests, in a new study of NSF-funded research, that we might as well abandon grant peer evaluation and just give everyone a budget, with the provision that recipients must contribute some of their budget to someone they nominate.

• Why don’t we do that?

Page 24

If you had $10K what would you spend it on?

Page 25

Make Everything Open.

That solves evaluation, right?

Page 26

Joined-up evaluation across the scholarly lifecycle?

Or conflict?

Strategy vs Operation