Responsible Metrics

Research assessment is an essential activity undertaken across the University. It can take place at various levels, including the whole university, faculty, research institute, research centre, department, research group, individual researcher, or individual research output.

The following guide outlines best practice for research assessment and the use of quantitative indicators.

1. Purpose

This document is a guide to responsible research assessment for the SHU community. It provides a set of principles outlining good practice. These principles reinforce the key role of expert judgement and support an inclusive and transparent approach to research assessment that is respectful of researchers and of the plurality of research. The principles draw on the recommendations of the Metric Tide report [1], the Leiden Manifesto [2] and the San Francisco Declaration on Research Assessment (DORA) [3]. The University's Creating Knowledge Pillar Board has agreed that from May 2019 SHU will be a signatory of DORA, as part of a continuing commitment to the responsible use of research metrics [4].

1.1. What are quantitative indicators?

1.1.1. Citation-based metrics

Many of the quantitative indicators used in research assessment are citation-based bibliometric indicators, such as citation counts, journal impact factors (JIFs) and the h-index. These are derived from data in Web of Science, Scopus or Google Scholar. They are displayed in many commonly used sources, including publishers' journal websites and research systems such as Symplectic Elements. It is therefore important that all staff involved in research, and not just those directly involved in the assessment of research, understand these indicators and their responsible use.
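
To make the arithmetic behind one of these indicators concrete, the following is a minimal Python sketch of how a two-year journal impact factor is calculated: citations received in a given year to items published in the previous two years, divided by the number of citable items published in those two years. The figures are invented for illustration and do not describe any real journal.

    # Two-year journal impact factor, illustrated with invented numbers.
    # JIF for year Y = citations received in Y to items published in Y-1 and Y-2,
    #                  divided by the number of citable items published in Y-1 and Y-2.
    citations_in_y_to_previous_two_years = 1200   # hypothetical
    citable_items_in_previous_two_years = 400     # hypothetical
    jif = citations_in_y_to_previous_two_years / citable_items_in_previous_two_years
    print(jif)  # 3.0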

1.1.2. Altmetrics

Alternative metrics ('altmetrics') are a relatively new kind of indicator that capture attention to research outputs on social media such as Twitter, as well as counts of captures, shares, views and downloads. There are still many uncertainties and concerns about these developing metrics, including their reliability. For this reason, the UK Forum for Responsible Research Metrics [5] recommends that altmetrics should not be used in REF-style evaluations of outputs, although there may be some scope for their use in the assessment of impact.

2. Principles

2.1. Research assessment must rely on expert judgement

  • In certain subject areas, appropriate quantitative indicators can be used to support assessment, but they should never supplant qualitative expert judgement and peer review.

2.2. Diversity should be recognised and accounted for

  • Research assessment approaches should recognise the plurality of research and acknowledge that indicators will not serve all disciplines equally.
  • The diverse research missions of individual researchers and of research groups should be taken into account.
  • Different publication and citation practices across fields should be recognised. Best practice is to use subject/field-normalised indicators (see the sketch after this list).
  • The appropriateness of any indicators applied to non-English-language research should be considered.
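
As a rough illustration of what field normalisation means, the following minimal Python sketch (with hypothetical citation figures) divides an article's citation count by the average citation count of comparable publications (same field, publication year and document type), which is the general idea behind the field-weighted indicators offered by the major citation databases.

    # Hypothetical illustration of a field-normalised citation score.
    # A value of 1.0 means the article is cited at the world average for
    # comparable publications (same field, year and document type).
    def field_normalised_score(citations, field_average):
        return citations / field_average

    # The same raw citation count means different things in different fields:
    print(field_normalised_score(20, field_average=40.0))  # 0.5 -- below the field average
    print(field_normalised_score(20, field_average=5.0))   # 4.0 -- well above the field average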

2.3. Processes should be open and transparent

  • Indicators are rarely an in-house product, which can make it difficult to be open and transparent about their calculation and to verify the data from which they are generated. Nevertheless, the most reputable and robust indicators available should be used.
  • Internal assessment processes and methods should be open, transparent and documented.
  • Where the work of researchers is being assessed, they should be able to check that their outputs have been correctly identified.

2.4. Misplaced concreteness and false precision should be avoided

  • Metrics should only be used where their strengths, weaknesses and limitations are understood and where placing undue significance on quantitative differences is avoided.
  • Caveats should be included with research assessment data and reports.
  • Where quantitative measures are considered, best practice is to use multiple indicators to help provide more robust information.
  • Regular reassessment of any indicators used should be undertaken.

2.5. The systemic effects of assessment and indicators should be recognised

  • Using indicators can change the system through the incentives they establish. Any such influences should be anticipated and mitigated as far as possible.

3. Examples: how to apply these principles

3.1. Assessing individual research outputs

  • These should be assessed primarily by expert qualitative judgement of the output, for example using the REF approach to assessment based on originality, significance and rigour.
  • Citation counts should only be used if interpreted in the light of disciplinary norms and with an understanding of the factors that affect them, including paper-, journal- and author-related factors [6]. For example, an article in an English-language journal written by several authors in an international collaboration is likely to be cited more often than an article written by a single author in a journal published in a language other than English.
  • Metrics designed for the assessment of whole journals are not good measures for assessing individual outputs and should not be used for this purpose. The best-known of these metrics are the journal impact factors available from Journal Citation Reports [7] (based on Web of Science data); and SCImago Journal Rank (SJR), Source Normalised Impact per Paper (SNIP) and CiteScore (all based on Scopus [8] data). It is important to recognise that an article in a journal with a high SNIP is not necessarily excellent and an output in a journal with a low SNIP may be outstanding.
  • Journal impact factors are an inappropriate indicator of the citation impact of an individual journal article. The distribution of citations over the articles in a journal is highly skewed [9], so a journal impact factor is not representative of the citation impact of a typical article in that journal, as the sketch below illustrates.
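
As a purely illustrative example of this skew, the short Python sketch below uses invented citation counts for ten articles in one journal: a couple of highly cited articles pull the mean (the quantity an impact factor reflects) well above what a typical article in the journal actually receives.

    # Invented citation counts for ten articles published in one journal.
    from statistics import mean, median

    citations = [0, 1, 1, 2, 2, 3, 3, 4, 60, 120]
    print(mean(citations))    # 19.6 -- the average, which a JIF-style calculation reflects
    print(median(citations))  # 2.5  -- what a typical article in the journal receives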

3.2. Assessing a researcher's body of work

  • This should be assessed by expert qualitative judgement of the researcher's portfolio and with their personal research mission in mind.
  • Criteria used for academic recruitment, promotion and review should be founded in expert judgement reflecting the academic quality of outputs and the wider impact of the work.
  • The publication and citation practices in the researcher's field should be taken into account.
  • The use of Symplectic Elements as the source of output data for research assessment and management is recommended: researchers can check and maintain their own output records, and the source of the data is transparent.
  • A researcher's h-index is an indicator purporting to summarise both the citation impact and the productivity of a researcher. However, the h-index is not appropriate in disciplines where outputs are not predominantly journal articles, such as the humanities. It is also influenced by varying citation patterns between fields, by the researcher's career stage, and by the source of the data (e.g. Scopus, Web of Science or Google Scholar). The h-index should only be used when these factors and limitations are taken into consideration (a short sketch of the calculation follows this list).
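
For reference, the h-index is the largest number h such that the researcher has h outputs each cited at least h times. The minimal Python sketch below computes it from a list of citation counts; the counts shown are invented, and in practice the result will differ depending on whether the underlying data come from Scopus, Web of Science or Google Scholar.

    # Minimal h-index calculation from a list of citation counts (invented data).
    def h_index(citation_counts):
        ranked = sorted(citation_counts, reverse=True)
        h = 0
        for position, count in enumerate(ranked, start=1):
            if count >= position:
                h = position   # at least 'position' outputs have >= 'position' citations
            else:
                break
        return h

    print(h_index([25, 8, 5, 4, 3, 1, 0]))  # 4 -- four outputs each cited at least four times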

3.3. Assessing journals, for example when choosing where to publish

  • Choosing a journal in which to publish should involve a consideration of many factors, including the aim of publication and the scope, content, audience, quality, discoverability, review process and prestige of the journal. Quantitative indicators can be used to supplement this process.
  • Journal impact factors available from Journal Citation Reports are not normalised by field and should not be used to compare journals across subjects.
  • The Source Normalised Impact per Paper (SNIP) metric is subject normalised and can be used to compare journals across fields/disciplines. SNIPs are available from the Scopus database.
  • Distinguishing between journals on the basis of very small differences in journal impact factors, for example a difference only in the second or third decimal place, is not meaningful. Looking at the quartile in which a journal appears within its subject area can be a more helpful approach (a sketch follows this list). The quartile for a journal, based on its journal impact factor rank, can be found in Journal Citation Reports (based on Web of Science [10] data).
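
As an illustrative sketch only, the Python snippet below assigns a quartile from a journal's impact factor rank within its subject category, on the common convention that the top quarter of ranks is Q1; the exact boundary handling may differ slightly between databases, and the ranks used here are invented.

    import math

    # Quartile from a journal's impact factor rank within its subject category
    # (rank 1 = highest impact factor; invented example values).
    def jif_quartile(rank, journals_in_category):
        return "Q" + str(math.ceil(4 * rank / journals_in_category))

    print(jif_quartile(rank=12, journals_in_category=200))   # Q1
    print(jif_quartile(rank=160, journals_in_category=200))  # Q4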

[1] The Metric Tide: report of the independent review of the role of metrics in research assessment and management.

[2] Diana Hicks, Paul Wouters, Ludo Waltman, Sarah de Rijcke & Ismael Rafols (2015). The Leiden Manifesto for research metrics: use these 10 principles to guide research evaluation. Nature, 520, 429–431.

[3] The San Francisco Declaration on Research Assessment (DORA).

[4] The Library Research Support page on DORA at SHU

[5] Metrics in REF 2021: advice from the UK Forum for Responsible Research Metrics.

[6] Tahamtan, I., Safipour Afshar, A. & Ahamdzadeh, K. (2016). Factors affecting number of citations: a comprehensive review of the literature. Scientometrics, 107 (3), 1195-1225.

[7] Journal Citation Reports database produced by Clarivate Analytics (previously Thomson Reuters). Subscription service available from the SHU Library Gateway

[8] Scopus database produced by Elsevier. Subscription service available from the SHU Library Gateway

[9] Larivière, V., Kiermer, V., MacCallum, C. J., McNutt, M., Patterson, M., Pulverer, B., ... & Curry, S. (2016). A simple proposal for the publication of journal citation distributions. bioRxiv.

[10] Web of Science database produced by Clarivate Analytics. Subscription service available from the SHU Library Gateway