
Bibliometrics & Measuring Research Output: Measures

Bibliometric measures

Definitions, examples, and context for appropriate and inappropriate use are offered for the following bibliometric measures.

This content is informed by section 3.2 "Bibliometric Measures" of the White Paper, "Measuring Research Output Through Bibliometrics", which defines six common bibliometric measures used by post-secondary institutions, along with limitations of their use. For more information, see a summary of the uses and possible levels of use for each bibliometric measure.

Basket of measures

A basket of measures approach can lead to a more robust understanding of elements of research productivity and impact. Beyond bibliometric measures, other measures used to capture elements of research productivity and impact include:

  • Peer review.
  • Type and amount of intellectual property, e.g., patents, licenses, and spin-offs.
  • Type and amount of research awards and research funding.
  • Highly qualified personnel developed by a researcher or group.
  • Publication acceptance rates.
  • Altmetrics, or alternative metric data based on online conversations (such as blogs, news sites, Twitter, etc.) about a published article.

Research metrics and bibliometric measures work together to form a basket of measures that can provide a broader picture than any single measure.

Relationship between measures of research productivity and impact, shown by bibliometric measures and research metrics.

Journal impact ranking

Journal impact ranking captures a journal’s relative importance using aggregate citation data from articles published in the journal. 1, 2

  • This measure captures the relative importance of a journal, not individual articles in the journal.
  • A widely known example of this measure is Clarivate's (Thomson Reuters’) Journal Impact Factor (JIF).
  • Researchers generally believe that the quality of an individual publication should be judged on its own merit. Therefore, individual, article-based citation counts, rather than journal-based citation counts, are preferred.
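The standard JIF calculation for a year Y divides citations received in Y to the journal's items from the two previous years by the number of citable items the journal published in those two years. A minimal sketch, using hypothetical figures:

```python
def journal_impact_factor(cites_to_prev_two_years, citable_items_prev_two_years):
    """JIF for year Y: citations received in Y to items the journal published
    in Y-1 and Y-2, divided by the citable items published in Y-1 and Y-2."""
    return cites_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 600 citations in 2023 to its 2021-2022 items,
# which numbered 240 citable items in total
print(journal_impact_factor(600, 240))  # 2.5
```

Note that this is a journal-level average: a JIF of 2.5 says nothing about how often any single article in the journal is cited.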

Useful For

  • Identifying relative importance of a journal.

Not Useful For

  • Identifying relative importance of individual journal articles.
  • Determining quality of individual journal articles.

1 Falagas, M. E., & Alexiou, V. G. (2008). The top-ten in journal impact factor manipulation. Archivum Immunologiae Et Therapiae Experimentalis, 56(4), 223-226. doi:10.1007/s00005-008-0024-5
2 Krauskopf, E. (2013). Deceiving the research community through manipulation of the impact factor. Journal of the American Society for Information Science & Technology, 64(11), 2403-2403. doi:10.1002/asi.22905
3 Web of Science Training. (2015). Journal citation reports: Impact factor. [Video file]. Retrieved from

Highly cited researchers

A controversial list of highly cited Sciences and Social Sciences researchers created by Clarivate (Thomson Reuters), which highlights researchers whose work ranks in the top 1% by citations in a field.

Useful For

  • Understanding an individual researcher’s impact as it relates to other papers in the subject matter in which they have published.
  • Researchers publishing in the Sciences and Social Sciences; all other research areas are excluded.

Not Useful For

  • Comparing researchers from different fields, subjects, and departments.
  • Researchers outside the Sciences or Social Sciences.

Proportion of international or industry collaborations

As a collaborative research measure, the proportion of international collaborations identifies publications having at least two different countries among the co-author affiliations.

As a separate collaborative measure, the proportion of industry collaborations highlights the proportion of publications having the organization type ‘corporate’ for one or more co-author affiliations.

Example: An article co-authored by a local researcher and an international author, or by an author with an industry affiliation.

  • How collaboration is measured depends on the discipline being examined. 
  • As an example, the Leiden Ranking offers collaborative measures based on Web of Science data, including institutional-level impact and collaboration data.
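Both proportions are simple shares over a publication set. A minimal sketch over hypothetical publication records (real data would come from a source such as Web of Science or Scopus):

```python
# Hypothetical publication records: co-author countries and organization types
publications = [
    {"countries": {"CA", "US"}, "org_types": {"academic"}},
    {"countries": {"CA"},       "org_types": {"academic", "corporate"}},
    {"countries": {"CA"},       "org_types": {"academic"}},
    {"countries": {"CA", "DE"}, "org_types": {"academic", "corporate"}},
]

def proportion_international(pubs):
    """Share of publications with co-author affiliations in 2+ countries."""
    return sum(len(p["countries"]) >= 2 for p in pubs) / len(pubs)

def proportion_industry(pubs):
    """Share of publications with at least one 'corporate' affiliation."""
    return sum("corporate" in p["org_types"] for p in pubs) / len(pubs)

print(proportion_international(publications))  # 0.5
print(proportion_industry(publications))       # 0.5
```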

Useful For

  • Capturing an author’s or institution’s ability to attract collaborations with industry or international colleagues.
  • Identifying potential for funding and partnership opportunities with industry.
  • Gaining insight into the type and extent of research collaborations in publications.

Publication counts

Publication counts refer to the total count of items identified as scholarly output by an individual or group.

Example: Author X has published 19 journal articles (or books, conference proceedings, etc.).

  • Absolute publication counts are the cumulative total number of publications produced by an entity.
  • Normalized publication counts weigh an individual or institution's publication rate against the expected performance in a specific discipline. Normalized counts offer potential for more meaningful comparisons.
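The normalization described above reduces to a ratio of actual output to the output expected in the discipline. A minimal sketch with hypothetical figures:

```python
def normalized_publication_count(actual_count, expected_count):
    """Publication output relative to the expected output in a discipline.
    A value of 1.0 means output matches the disciplinary expectation."""
    return actual_count / expected_count

# Hypothetical: an author published 19 articles in a period where
# peers in the same discipline averaged 12.5
print(round(normalized_publication_count(19, 12.5), 2))  # 1.52
```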

Useful For

  • Assessing outputs of an individual, discipline, or institution.

Not Useful For

  • Assessing quality of a work.


Citation counts

Citation counts refer to the total number of times that a given publication has been cited.

Example: Article X has been cited 11 times by documents indexed in Scopus.

  • Citation counts are not always directly correlated with positive research impact and quality. For example, a discredited paper may receive many citations before it is retracted, or a highly cited paper may later be found to contain significant errors.
  • Citation counts are sometimes normalized to reflect expected performance within a specific field or discipline.
  • Citations can also include self-citations. A self-citation is a citation from a citing article to a source article where the same author name appears on both. There are contexts, however, where self-citations are warranted. For example, an individual researcher may have published seminal work earlier in their career, and not citing that work would be ill-advised.
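The self-citation check above amounts to testing for author overlap between the source article and each citing article. A minimal sketch over hypothetical records (the names are invented for illustration):

```python
# Authors of the source article (hypothetical)
source_authors = {"Garcia, M.", "Lee, K."}

# Each citing paper lists its author names (hypothetical)
citing_papers = [
    {"authors": {"Garcia, M.", "Chen, R."}},   # shared author -> self-citation
    {"authors": {"Okafor, T."}},
    {"authors": {"Lee, K."}},                  # shared author -> self-citation
    {"authors": {"Nilsson, A.", "Patel, S."}},
]

total = len(citing_papers)
self_cites = sum(bool(p["authors"] & source_authors) for p in citing_papers)

print(total, self_cites, total - self_cites)  # 4 2 2
```

Note that matching on author name alone is a simplification; real databases disambiguate authors with identifiers such as ORCID.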

Useful For

  • Measuring an element of impact of a work or set of works.

Not Useful For

  • Understanding context of the impact (positive vs negative impacts).

1 Clarivate Analytics. (2016). Indicators glossary. Retrieved from

Normalized citation impact

Actual citation impact (cites per paper) compared to expected citation impact (cites per paper) of a subject area globally, normalized for subject area, document type, and year.

  • A value of 1.00 indicates that the publication performs at the expected global average. A value >1.00 indicates that the publication exceeds the world average.
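The measure is a ratio of actual to expected citations per paper. A minimal sketch with hypothetical figures:

```python
def normalized_citation_impact(actual_cites_per_paper, expected_cites_per_paper):
    """Actual vs. expected cites per paper, where the expected value is the
    global average for the same subject area, document type, and year.
    1.0 = performing at the world average."""
    return actual_cites_per_paper / expected_cites_per_paper

# Hypothetical: a publication set averaging 6.0 cites/paper in a subject,
# document type, and year whose global expected value is 4.0
print(normalized_citation_impact(6.0, 4.0))  # 1.5
```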

Useful For

  • Comparing between different subjects and samples.

Not Useful For

  • Small sets of publications, as a single highly cited paper can skew the data.


H-index

The H-index captures both productivity and citation impact: it is the largest number h such that a researcher has h publications, each cited at least h times. It provides a focused snapshot of an individual’s research performance.

Example: If a researcher has 15 papers, each of which has at least 15 citations, their h-index is 15.
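The definition above translates directly into a short computation over a researcher's per-paper citation counts:

```python
def h_index(citation_counts):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# 15 papers with at least 15 citations each, as in the example above
print(h_index([15] * 15))  # 15
```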

Useful For

  • Comparing researchers of similar career length.
  • Comparing researchers in a similar field, subject, or department who publish in the same journal categories.
  • Obtaining a focused snapshot of a researcher’s performance.

Not Useful For

  • Comparing researchers from different fields, disciplines, or subjects.
  • Assessing fields, departments, and subjects where research output is typically books or conference proceedings, as these formats are not well represented in the databases that provide h-indices.

Top percentiles

Top percentiles (e.g., top 1% or top 10%) typically measure the most cited documents within a subject area, document type, and year.

Example: The top 1% most cited works in a specific discipline, or the top 10% most cited works in an institution’s publication output.
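One way to sketch this measure is to find the citation threshold a paper must meet to fall in the top slice of a set; the data below are hypothetical:

```python
def top_percentile_threshold(citation_counts, percent):
    """Minimum citation count needed to fall in the top `percent`% of a set."""
    ranked = sorted(citation_counts, reverse=True)
    cutoff = max(1, round(len(ranked) * percent / 100))
    return ranked[cutoff - 1]

# 100 hypothetical papers cited 1..100 times
counts = list(range(1, 101))
print(top_percentile_threshold(counts, 10))  # 91
print(top_percentile_threshold(counts, 1))   # 100
```

With only a handful of papers the cutoff collapses to the single most cited item, which illustrates why the measure is unreliable for small sets.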

Useful For

  • “Only appropriate for large to medium size data sets as a way to measure impact by the number of works located in the top 10%.” 1 (Thomson Reuters, p. 15)

Not Useful For

  • Assessing small datasets; significant caution is needed because a few highly cited papers dominate the percentile counts.

1 Clarivate Analytics. (2018). InCites indicators handbook. Retrieved from