Alternative metrics for book impact assessment


Transcript of Alternative metrics for book impact assessment

Page 1: Alternative metrics for book impact assessment

Alternative Metrics for Book Impact Assessment: Can Choice Reviews be a Useful Source?

Kayvan Kousha and Mike Thelwall
Statistical Cybermetrics Research Group
School of Mathematics and Computing
University of Wolverhampton

Page 2: Alternative metrics for book impact assessment

Citation Metrics

• Web of Science (WoS) and Scopus: cited reference searches.
– Problem: time-consuming, and they miss many citations from books to books.

• Book Citation Index (BKCI) and Scopus: BKCI covers about 60,000 books and monographs; Scopus about 100,000.
– Problem: mostly English-language books from US and UK publishers, and no aggregated citation counts for edited volumes (Leydesdorff & Felt, 2012; Gorraiz, Purnell, & Glänzel, 2013; Torres-Salinas et al., 2014).

• Google Books (GB): GB is not a citation index, but an automatic method can be used to capture book citations (Kousha & Thelwall, 2014). GB citations are more plentiful than those in citation databases in the humanities and in some social sciences.
– Problem: the automatic method has about 95% accuracy, and coverage is incomplete.

Page 3: Alternative metrics for book impact assessment

Non-Citation Metrics

• Expert peer review: seems to be the best method (e.g., the REF).
– Problem: can be time-consuming and expensive (as in the REF), is more subjective for books than for articles, and makes the teaching or cultural impact of books difficult to assess.

• Libcitations: library holdings statistics as an indication of usage, interest or cultural impact (Torres-Salinas & Moed, 2009; White et al., 2009; Zuccala & Guns, 2013).
– Problem: automatic searching is not available without permission.

• Publisher prestige: reputational surveys (Torres-Salinas et al., 2012) and citation indicators (Zuccala, Guns, Cornacchia & Bod, 2014) have been used to rank academic book publishers.
– Problem: cannot be used directly to assess the impact of individual books, and covering all publishers is time-consuming.

• Teaching impact: academic syllabus mentions for textbooks or teaching monographs. An automatic method can capture mentions in online academic syllabi (Kousha & Thelwall, 2015).
– Problem: limited to online syllabi on university websites.

Page 4: Alternative metrics for book impact assessment

Book Reviews

• Evidence: the number of reviews in the Book Review Index and the number of library holdings correlate (r = 0.620) for 200 novels (Shaw, 1991).

• Evidence: sociology monographs (n = 420) with positive reviews attract considerably more citations than monographs with negative reviews (Nicolaisen, 2002).

• Evidence: low but statistically significant correlations between Amazon metrics and both citation metrics (BKCI and GB) and libcitations for 2,739 academic monographs (Kousha & Thelwall, 2015).

• Evidence: a weak but significant correlation between Goodreads ratings, citations and library holdings (n = 8,538) (Zuccala et al., 2015).

• Summary: books and monographs with more citations or library holdings tend to receive more reviews, higher ratings and more positive reviews.

Page 5: Alternative metrics for book impact assessment

Research questions

• Can academic reviews in Choice: Current Reviews for Academic Libraries be systematically used as indicators of the scholarly impact or educational value of academic books?

• Do Choice book ratings correlate with citation metrics or with other non-citation metrics for books?

• Do books recommended for undergraduates have more syllabus mentions than books recommended for researchers?

Page 6: Alternative metrics for book impact assessment

Method (1)

• Choice has published reviews of academic books by editors, experts and librarians in different fields for about 50 years, with about 7,000 reviews each year.

• The recommendations in 451 book reviews were automatically extracted from a free sample of Choice Reviews Online and converted into a number, from 1 for ‘Not recommended’ to 5 for ‘Essential’ (a minimal mapping sketch follows the definitions below).

• Essential: A publication of exceptional quality for academic audiences and a core title for academic libraries supporting programs in relevant disciplines.

• Highly recommended: A publication of high quality and relevance for academic audiences.

• Recommended: A publication containing good content and coverage and suitable for academic audiences.

• Optional: A publication that, due to limited value or deficiencies, is marginal for academic audiences.

• Not recommended: A poor quality publication or one not suitable for academic audiences.
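
As an illustration of the rating conversion described above, here is a minimal sketch (not the authors' code) that maps the five Choice recommendation labels onto the 1-5 ordinal scale:

```python
# Minimal sketch: map Choice recommendation labels to the 1-5 scale used in the study.
CHOICE_SCORES = {
    "Not recommended": 1,
    "Optional": 2,
    "Recommended": 3,
    "Highly recommended": 4,
    "Essential": 5,
}

def recommendation_to_score(label):
    """Return the ordinal score for a Choice recommendation label.

    Raises KeyError for labels outside the five-point scale.
    """
    return CHOICE_SCORES[label.strip()]

print(recommendation_to_score("Highly recommended"))  # 4
```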

Page 7: Alternative metrics for book impact assessment
Page 8: Alternative metrics for book impact assessment

Method (2)

• We also used extra information in the reviews about the usefulness of books for different academic audiences, such as undergraduates, researchers, faculty members and professionals.

Page 9: Alternative metrics for book impact assessment

Automatic Capturing of Reviews (with permission of Choice)

Page 10: Alternative metrics for book impact assessment

Automatic Capturing of Reviews (with permission of Choice)

Page 11: Alternative metrics for book impact assessment

An example

Empire of Humanity: A History of Humanitarianism by Michael Barnett

Page 12: Alternative metrics for book impact assessment
Page 13: Alternative metrics for book impact assessment

Google Books Citations (GB)

• GB is the largest database of digitised books, and the GB API was used to extract citations from books to books.

• Kousha, K. & Thelwall, M. (2014). An automatic method for extracting citations from Google Books. Journal of the Association for Information Science and Technology.

• Manual searching is possible but gives many false matches. An automatic method was developed and tested using the Google Books API in Webometric Analyst.
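
To make the automatic approach concrete, here is a hedged sketch (not the Webometric Analyst implementation) that queries the public Google Books volumes endpoint for a quoted book title plus the author's surname; the returned match count is only a rough proxy for citing books, and the published method applies further filtering to remove false matches.

```python
import requests

GOOGLE_BOOKS_URL = "https://www.googleapis.com/books/v1/volumes"

def google_books_hits(title, author_surname, max_results=40):
    """Count Google Books volumes matching a quoted title plus an author surname.

    A rough proxy for citing books only; it does not check where or how the
    book is mentioned inside each matching volume.
    """
    query = f'"{title}" {author_surname}'
    resp = requests.get(GOOGLE_BOOKS_URL,
                        params={"q": query, "maxResults": max_results},
                        timeout=30)
    resp.raise_for_status()
    return resp.json().get("totalItems", 0)

# Hypothetical usage with the example book from the earlier slide:
# print(google_books_hits("Empire of Humanity", "Barnett"))
```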

Page 14: Alternative metrics for book impact assessment

Academic syllabus mentions

• The teaching value of the 451 books was assessed through online academic course syllabi.

• Kousha, K. & Thelwall, M. (2015). An automatic method for assessing the teaching impact of books from online academic syllabi. Journal of the Association for Information Science and Technology.

• An automatic method was developed and tested using the Bing API in Webometric Analyst to identify academic syllabus mentions.
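
The sketch below shows the general idea of building syllabus-mention search queries; the exact query syntax used in the study is not reproduced here, so the search terms and the site restriction are illustrative assumptions.

```python
# Illustrative query construction for finding syllabus mentions of a book.
# The terms and domain restriction below are assumptions, not the study's
# exact queries, which were run through the Bing API in Webometric Analyst.
SYLLABUS_TERMS = ['"syllabus"', '"course outline"', '"reading list"']

def syllabus_queries(title, author_surname, site=None):
    """Yield one search query per syllabus-related term."""
    base = f'"{title}" {author_surname}'
    for term in SYLLABUS_TERMS:
        query = f"{base} {term}"
        if site:  # e.g. "edu" or "ac.uk" to restrict to academic domains
            query += f" site:{site}"
        yield query

for q in syllabus_queries("Empire of Humanity", "Barnett", site="edu"):
    print(q)
```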

Page 15: Alternative metrics for book impact assessment

Libcitations: library holdings data from WorldCat, which covers 72,000 libraries in 170 countries.

Page 16: Alternative metrics for book impact assessment

Amazon.com Reviews

• The numbers of customer reviews were automatically extracted from the main Amazon.com URL for each of the 451 books.

• Kousha, K. & Thelwall, M. (2015). Can Amazon.com reviews help to assess the wider impacts of books? Journal of the Association for Information Science and Technology.
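
For illustration only, a fragile sketch of pulling a customer-review count from an Amazon.com product page; the element id used below is an assumption that changes over time, and this is not the harvester used in the study.

```python
import re
import requests

def amazon_review_count(product_url):
    """Return the customer-review count scraped from an Amazon product page, or None.

    Assumes the count appears inside an element whose id contains
    'acrCustomerReviewText'; this markup is not guaranteed to be stable.
    """
    html = requests.get(product_url,
                        headers={"User-Agent": "Mozilla/5.0"},
                        timeout=30).text
    match = re.search(r'acrCustomerReviewText[^>]*>\s*([\d,]+)', html)
    return int(match.group(1).replace(",", "")) if match else None
```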

Page 17: Alternative metrics for book impact assessment

Mendeley bookmarks

• Bookmark counts were collected with the Mendeley API in Webometric Analyst.

• However, Mendeley bookmarks were only available for a minority of the books, as confirmed by two other studies of books (5%-7%).

• Hammarfelt, B. (2014). Using altmetrics for assessing research impact in the humanities. Scientometrics.

• Kousha, K. & Thelwall, M. (in press, 2015). Can Amazon.com reviews help to assess the wider impacts of books? Journal of the Association for Information Science and Technology.
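
A hedged sketch of looking up reader (bookmark) counts through the Mendeley API; the endpoint and parameter names below are assumptions based on the public REST API, an OAuth2 access token is required, and the study itself used the API via Webometric Analyst.

```python
import requests

def mendeley_reader_count(title, access_token):
    """Return the reader count of the best-matching Mendeley catalog record, or 0.

    The /search/catalog endpoint, the 'view=stats' parameter and the
    'reader_count' field are assumptions about the Mendeley REST API.
    """
    resp = requests.get(
        "https://api.mendeley.com/search/catalog",
        params={"title": title, "view": "stats", "limit": 1},
        headers={"Authorization": f"Bearer {access_token}",
                 "Accept": "application/vnd.mendeley-document.1+json"},
        timeout=30,
    )
    resp.raise_for_status()
    results = resp.json()
    return results[0].get("reader_count", 0) if results else 0
```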

Page 18: Alternative metrics for book impact assessment

Results (1)

• About three-quarters of the books with Choice reviews had at least one GB citation (Table 2), and this proportion was higher in the social sciences (80%, median: 3) than in science (68%, median: 2).

Page 19: Alternative metrics for book impact assessment

Results (2)

• Highly rated books (‘Essential’ and ‘Highly recommended’) in Choice received more GB citations.

Page 20: Alternative metrics for book impact assessment

Results (3) - All

• Books highly recommended for teaching (for undergraduates) in Choice received more academic syllabus mentions.

• Books highly recommended for professionals in Choice received more GB citations.

Page 21: Alternative metrics for book impact assessment

Results (4)

Page 22: Alternative metrics for book impact assessment

Results (5) - by field

Page 23: Alternative metrics for book impact assessment

Conclusions

• RQ1: books that were highly rated in Choice received more GB citations, academic syllabus mentions, libcitations and Amazon reviews than did lower rated books.

• RQ2: books recommended for undergraduates (e.g., textbooks) received more academic syllabus mentions, and books recommended for researchers and professionals received more citations than did books recommended for undergraduates, suggesting that Choice reviews distinguish between the different audiences for books.

• RQ3: the low (but significant) Spearman correlations between Choice ratings and citation and non-citation metrics suggest that Choice reviews are either somewhat subjective or (more likely) do not reflect exactly the same aspects of the value of a book (e.g., teaching, research, cultural or social impacts) as any of the other indicators.
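
To show the kind of analysis behind RQ3, here is a minimal sketch of a Spearman rank correlation between Choice ratings and another book metric; the values are invented placeholders, not the study's data.

```python
from scipy.stats import spearmanr

# Hypothetical data: Choice ratings (1-5) and GB citation counts for eight books.
choice_ratings = [5, 4, 4, 3, 2, 5, 3, 4]
gb_citations = [12, 7, 9, 3, 1, 15, 2, 6]

rho, p_value = spearmanr(choice_ratings, gb_citations)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```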

Page 24: Alternative metrics for book impact assessment

References

• Gorraiz, J., Purnell, P. J., & Glänzel, W. (2013). Opportunities for and limitations of the Book Citation Index. Journal of the American Society for Information Science and Technology, 64(7), 1388-1398.

• Kousha, K., & Thelwall, M. (2014). An automatic method for extracting citations from Google Books. Journal of the Association for Information Science and Technology. doi: 10.1002/asi.23170. http://www.scit.wlv.ac.uk/~cm1993/papers/AutomaticGoogleBooksCitationsPreprint.pdf

• Kousha, K., & Thelwall, M. (in press). Can Amazon.com reviews help to assess the wider impacts of books? Journal of the Association for Information Science and Technology. http://www.koosha.tripod.com/AmazonReviewstoAssessBooks-Preprint.pdf

• Kousha, K., Thelwall, M., & Rezaie, S. (2011). Assessing the citation impact of books: The role of Google Books, Google Scholar, and Scopus. Journal of the American Society for Information Science and Technology, 62(11), 2147-2164.

• Leydesdorff, L., & Felt, U. (2012). Edited volumes, monographs and book chapters in the Book Citation Index (BKCI) and Science Citation Index (SCI, SoSCI, A&HCI). Journal of Scientometric Research, 1(1), 28–34.

• Nicolaisen, J. (2002). The scholarliness of published peer reviews: A bibliometric study of book reviews in selected social science fields. Research Evaluation, 11(3), 129-140.

• Shaw, D. (1991). An analysis of the relationship between book reviews and fiction holdings in OCLC. Library and Information Science Research, 13(2), 147-154.

• Torres-Salinas, D., & Moed, H. F. (2009). Library catalog analysis as a tool in studies of social sciences and humanities: An exploratory study of published book titles in economics. Journal of Informetrics, 3(1), 9-26.

• Torres-Salinas, D., Robinson-García, N., Campanario, J. M., & López-Cózar, E. D. (2014). Coverage, field specialisation and the impact of scientific publishers indexed in the book citation index. Online Information Review, 38(1), 24-42.

• White, H.D., Boell, S.K., Yu, H., Davis, M., Wilson, C.S., & Cole, F.T. (2009). Libcitations: A measure for comparative assessment of book publications in the humanities and social sciences. Journal of the American Society for Information Science and Technology, 60(6), 1083-1096.

• Zuccala, A. A., Verleysen, F. T., Cornacchia, R., & Engels, T. C. E. (2015). Altmetrics for the humanities: Comparing Goodreads reader ratings with citations to history books. Aslib Journal of Information Management, 67(3), 320-336.

• Zuccala, A., Guns, R., Cornacchia, R., & Bod, R. (in press, 2014). Can we rank scholarly book publishers? A bibliometric experiment with the field of history. Journal of the Association for Information Science and Technology. http://www.illc.uva.nl/evaluating-humanities/RankingPublishers(Preprint_2014).pdf