Principles of citation-based evaluation

This session is by James Pringle from Thomson Scientific (now Thomson Reuters, a.k.a. ISI).

I have to admit to some slight cynicism about Thomson’s involvement: they clearly have one of the richest sets of citation-based data, so I feel their approach is innately biased. However, James has said he is speaking (as far as he can) not as a vendor in this context (but he’s hardly likely to say bibliometrics aren’t very helpful, is he?)

James is indicating that Thomson believe citation data need to be used wisely – to inform discussion rather than to replace it.

James is describing citations as the “user-generated content” of formal scholarly communication (OK, but this is a buzzword – all the published papers are ‘user-generated’ as well. I would think it more accurate to regard the citation as the ‘link’ of scholarly communication, which makes using it to generate bibliometrics akin to Google’s PageRank).
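
To make the analogy concrete, here is a minimal sketch of PageRank run over a toy citation graph: a paper cited by well-cited papers scores higher than one judged by raw citation counts alone. The paper names and links are invented for illustration; this is the standard power-iteration formulation, not anything Thomson described.

```python
# Minimal PageRank sketch over a toy citation graph.
# Paper names and citation links below are made up for illustration.

def pagerank(cites, damping=0.85, iterations=50):
    """cites maps each paper to the list of papers it cites."""
    papers = list(cites)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in papers}
        for p, targets in cites.items():
            if targets:
                # A paper shares its rank equally among the papers it cites.
                share = damping * rank[p] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # A paper citing nothing spreads its rank evenly (dangling node).
                for t in papers:
                    new[t] += damping * rank[p] / n
        rank = new
    return rank

citations = {
    "paper_a": ["paper_c"],
    "paper_b": ["paper_c"],
    "paper_c": [],
}
ranks = pagerank(citations)
# paper_c, cited by both other papers, ends up with the highest rank.
```

The point of the analogy: treating a citation as a link lets the *source* of a citation carry weight, rather than every citation counting equally.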

However, although citation-based evaluation is fundamental to effective evaluation and management, says James, it is frequently misused and misunderstood in an era of democratisation – the sector needs ‘best practices’ to avoid this (apparently “Using bibliometrics in evaluating research” by David Pendlebury lists 10 rules for using citation-based evaluation). The availability of citation information leads researchers to make their own assessments – which may not be accurate, or at the least not uniform. It has also led to researchers submitting ‘corrections’ to Thomson, which challenges them to respond quickly and provide more accurate information.
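
As an example of the kind of self-assessment that raw citation data enables, here is a sketch of the h-index – a metric not mentioned in the talk, but one researchers commonly compute for themselves once citation counts are at hand. The citation counts below are invented.

```python
# Sketch of the h-index: the largest h such that h of a researcher's
# papers each have at least h citations. The counts here are invented,
# purely to illustrate a metric researchers compute for themselves.

def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3
```

The non-uniformity James mentions shows up immediately: the result depends entirely on which citation counts you feed in, and different databases report different counts for the same papers.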

James outlines how citation data are used for research evaluation at the level of individuals, university departments, university management and external entities (such as funding bodies). These different levels have different needs and requirements, so we need to produce citation data that can serve needs across this range.

Thomson have looked at Institutional Research Evaluation workflows:

  • Identification of researchers and their work
  • Data validation and metadata management
  • Populating local systems with ongoing updates
  • Enabling access to data via internal systems and repositories
  • Data and metrics to create reports for external agencies

These are not unique to the UK – they are being tackled by institutions across the globe, although the detail is affected by national policies etc.

So, how do Thomson support research evaluation? They have a number of principles:

  • First, do no harm – encourage and develop use of best practices
  • Integrate and partner – work with institutions to define and develop integrated solutions
  • Enhance content to meet emerging needs
    • Right Journal content
      • Careful increase of journals in Web of Science
    • Recognize expanding scholarly communications
      • Integrate proceedings citation content into Web of Science
    • Engage researcher community
      • ResearcherID.com and Author disambiguation
    • Engage Libraries and Research Management
      • Institutional Identifiers and Institutional Disambiguation/Unification

All this sounds OK – but I’m worried about how it joins up with activity in the sector, and by other organisations. How does ‘ResearcherID.com’ link to OCLC Identities work? It would be great to see some joined up thinking across the library/information sector on this, as otherwise we will end up with multiple methods of identification.
