Bibliometrics & Measuring Research Output: Citation-tracking

Limitations of citation-tracking databases

This content is informed by section 3.1.1, "Limitations of citation-tracking databases," of the white paper "Measuring Research Output Through Bibliometrics".

Citation-tracking databases, such as Scopus, Web of Science, and Google Scholar, are used extensively to collect and report a range of bibliometric measures. These databases are subject to limitations arising from factors such as:

Discipline variations

  • Research output, productivity, and impact vary between and across disciplines, so the relevance and usefulness of bibliometrics depend on the discipline.1, 2
     
  • Citation-tracking databases do not capture disciplinary differences well, either in the bibliometric data they index or in their coverage.
     
  • The measure chosen should reflect the best way to capture impact in the specific discipline.

 

Publication behaviours vary across disciplines: Neurosciences and the Life Sciences tend toward a high frequency of publications, long reference lists, and many co-authors, while Mathematics, Computer Sciences, and the Arts and Humanities tend toward a low frequency of publications, short reference lists, and few co-authors.

 


1 Federation for the Humanities and Social Sciences. (2014). The Impacts of Humanities and Social Science Research: Working Paper. Retrieved from http://www.ideas-idees.ca/sites/default/files/2014-10-03-impact-project-draft-report-english-version-final2.pdf
2 Archambault, É., & Larivière, V. (2010). The limits of bibliometrics for the analysis of the social sciences and humanities literature. In World Social Science Report: Knowledge Divides (pp. 251-254). Paris: UNESCO Publishing and International Social Science Council.

Sample size

Many bibliometric measures require a large sample size to be reliable; with small sample sizes, percentiles may be a more suitable approach, depending on the context.

  • Normalized measures may be distorted by outliers, as it is difficult to normalize for outlier effects; for example, a single highly cited publication may draw the average up artificially (see the sketch after this list).
     
  • This is particularly true where research units or samples are small, and in low-citation-culture disciplines.1, 2, 3 It also holds at the institutional level and in disciplines that do not publish frequently.
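
A minimal sketch of this outlier effect, using invented citation counts for a small research unit; all values are hypothetical:

```python
# Hypothetical citation counts for a small research unit;
# one highly cited publication dominates the set.
citations = [2, 3, 1, 4, 2, 3, 250]

# The mean is pulled up sharply by the single outlier.
mean = sum(citations) / len(citations)

# The median (50th percentile) is far less sensitive to it.
ordered = sorted(citations)
median = ordered[len(ordered) // 2]

print(f"mean: {mean:.1f}")  # mean: 37.9
print(f"median: {median}")  # median: 3
```

With the outlier removed, the mean falls to 2.5, which is why percentile-based measures are often preferred for small samples.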

1 Vieira, E. S., & Gomes, J. A. N. F. (2010). A research impact indicator for institutions. Journal of Informetrics, 4(4), 581-590. doi:10.1016/j.joi.2010.06.006
2 Abramo, G., D’Angelo, C. A., & Viel, F. (2013). The suitability of h and g indexes for measuring the research performance of institutions. Scientometrics, 97(3), 555-570. doi:10.1007/s11192-013-1026-4
3 Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Leiden Manifesto for research metrics. Nature, 520, 429-431.

Scope and data accuracy

Citation-tracking databases calculate bibliometric data based on the items they index, and each database has differing coverage and a unique data universe. This means:

  • Bibliometric measures cannot offer a comprehensive indication of research productivity, as no citation-tracking database is comprehensive.
  • It is problematic to compare data over time, even within the same citation-tracking database, as the methodology used to gather the data and the data sets themselves are constantly evolving.

  • Citation-tracking databases do not have good coverage of non-English research, interdisciplinary research, or research of regional importance, and using citation-tracking databases to assess these research outputs will under-represent actual output.

  • Bibliometric measures offered by one source may not be offered by another, making it difficult to validate the data in each database.

Attributing authorship

A limitation of citation-tracking databases is the different ways authorship is attributed. Problems can stem from data errors, name ambiguity, and how multi-authored articles are attributed.

  • For example, misspellings of author names and errors in institutional attribution are sometimes found in citation-tracking databases. 
  • To reduce name ambiguity, authors should define a consistent name convention early and use that convention systematically.

  • Citation-tracking databases attribute authorship of multi-authored articles differently: authorship may be attributed to all of a publication's authors equally (full counting) or weighted by each author's relative share of the collaborative publication (fractional counting); both schemes are sketched below.
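
A minimal sketch contrasting the two counting schemes, using hypothetical publications and author names:

```python
# Hypothetical multi-authored publications.
publications = [
    ["Author A", "Author B"],
    ["Author A", "Author B", "Author C"],
    ["Author C"],
]

full = {}        # full counting: every author gets 1 credit per paper
fractional = {}  # fractional counting: each author gets 1/n of a credit

for authors in publications:
    for name in authors:
        full[name] = full.get(name, 0) + 1
        fractional[name] = fractional.get(name, 0) + 1 / len(authors)

print(full)        # {'Author A': 2, 'Author B': 2, 'Author C': 2}
print(fractional)  # Author A: ~0.83, Author B: ~0.83, Author C: ~1.33
```

Under full counting the three authors appear equally productive; fractional counting credits Author C most, because one of their publications is sole-authored.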

Gender bias

Citation-tracking databases are also susceptible to gender bias, which limits the reliability and utility of citation-based measures. Evidence shows that:

  • In countries producing the most research, "all articles with women in dominant author positions receive fewer citations than those with men in the same positions."1
     
  • Women tend to publish in predominantly domestic publications more than their male colleagues do, limiting potential international citations.1
     
  • Authors tend to cite the work of individuals of the same sex, perpetuating gender bias in male-dominated fields.2, 3
     
  • Self-citation is more common among men than among women.3
     
  • Women are particularly disadvantaged by gender-based citation bias early in their careers, a limitation which persists throughout an academic's career.2
     
  • Women experience a “co-author penalty”: women with a higher proportion of co-authored papers are less likely to receive tenure, while for men, whether a large fraction of their papers is sole- or co-authored has no impact on their tenure prospects.4

 


1 Larivière, Ni, Gingras, Cronin, & Sugimoto, 2013
2 Ferber & Brun, 2011
3 Maliniak, Powers, & Walter, 2013
4 Sarsons, 2015

Effects of time

Time affects both a researcher's impact and how their impact is understood.

  • Citations are time-dependent, and a researcher’s impact will change over time. More established researchers will naturally have higher citation counts than early-career researchers, regardless of the quality of their research or findings.
     
  • The time required for research impact to be understood also varies by discipline. Therefore, using citations to understand research impact must reflect the citation culture of the discipline(s) being assessed.
     
    • Example: A three-to-five-year window from the time of publication is recommended as the ideal choice for citations within the natural sciences and engineering,1 but in anatomy it can take fifty years or longer for a publication’s findings to be analyzed.
       
  • Citation counts should be normalized over time to address this issue (a sketch of one such normalization follows this list). However, even with field-normalization efforts, some authors suggest citations within one to two years of publication cannot be counted accurately.2
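
A minimal sketch of one common normalization approach, which divides a publication's citations by the average citations of same-field publications from the same year; the baseline figure here is hypothetical:

```python
def normalized_citation_score(citations: int, field_year_average: float) -> float:
    """Citations relative to a field-and-year baseline (1.0 = field average)."""
    return citations / field_year_average

# A hypothetical 2015 paper with 12 citations, in a field where 2015
# papers average 8 citations, sits 50% above its field baseline.
print(normalized_citation_score(12, 8.0))  # 1.5
```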
     

1 Council of Canadian Academies. Expert Panel on Science Performance and Research Funding (Ed.). (2012). Informing research choices: Indicators and judgement. Ottawa: Council of Canadian Academies.
2 Wang, J. (2013). Citation time window choice for research impact evaluation. Scientometrics, 94(3), 851-872. doi:10.1007/s11192-012-0775-9