About bibliometrics

What are bibliometrics?

Bibliometrics measure academic impact through citation analysis of the scholarly literature.

Bibliometrics are quantitative research impact assessment tools that use citation data to indicate the likely contribution a paper or author is making to the advancement of knowledge in a particular area.

In contrast, altmetrics aim to provide qualitative analysis of research by analyzing the attention papers and other works are receiving in the broader, non-scholarly world, thereby indicating the contribution of the research to public policy, real world problems and other societal issues. Explore the ‘About Altmetrics’ section of this module to learn more.


Citation data can be located using scholarly databases such as Scopus or Web of Science, and research analysis tools such as SciVal and InCites. These databases and tools allow you to track and analyse citations by: 

  • Identifying who is citing your work.
  • Listing the number of times your publication has been cited in later publications.
  • Placing the citation count in context within a field – e.g., using measures such as Field Weighted Citation Impact (FWCI).
  • Providing other measures of your impact such as the author h-index.
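As an illustration of how one of these measures works, the h-index is defined as the largest number h such that an author has h publications with at least h citations each. The short sketch below computes it from a plain list of citation counts; the function name and the example figures are hypothetical, not drawn from any particular database:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still has enough citations for its rank
        else:
            break
    return h

# Hypothetical author with five papers cited 25, 8, 5, 3 and 3 times:
print(h_index([25, 8, 5, 3, 3]))  # → 3
```

Note that, as the "Algorithm limitations" discussion later in this section points out, an author with one highly cited paper and few other outputs would still score a low h-index under this definition.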

Watch this video to gain an overview of bibliometrics.

Bibliometrics in under 2 minutes (1:36 min) by Leeds University Library (YouTube)

Why use bibliometrics?

Bibliometrics advantages and limitations

Advantages

Simple and straightforward: Compared to altmetrics, which can be difficult to collect and analyse, bibliometrics are standardised and quick to produce.
Transparency: Results can be reproduced using the standardised methods.
Comparison: Field-normalised metrics such as the Field-Weighted Citation Impact (FWCI) used by Scopus and SciVal, or the Category Normalised Citation Impact (CNCI) used by Web of Science and InCites, can be used to measure the impact of a journal article within its research discipline.
Reliability: As a quantitative analysis tool, bibliometrics are broadly accepted as a relatively objective measure of academic impact, since they demonstrate that other researchers have read and made use of your work. Additionally, the peer-reviewed scholarly environment in which citations occur provides a degree of regulation.
Time span: Bibliometrics are well established, with data going back many years – for example, Web of Science indexes scholarly literature from 1900.
Scalable: Bibliometric analysis extends from large data sets for institutional purposes to micro-level analysis of a single researcher's output.
Interconnected: Bibliometrics work in concert. Combining an article's metrics with metric data on the author and the journal can give an indication of the likely quality of the article.
Limitations

Time lag: There are timeliness issues around demonstrating scholarly influence, since citations take a while to accumulate. Other researchers may read the paper promptly, but it takes time for them to make use of the research, write a paper, and get it published.
Bias: Bibliometrics inevitably favour older papers, which tend to be more highly cited than recent papers because they have had more time to accumulate citations. This also creates an inherent bias towards more established researchers. Field-normalised metrics (FWCI and CNCI) adjust for publication date.
Specialisation: Bibliometrics can fail to reflect high-quality work published in very specialised journals. Research that is of relevance to only one country will also not be accurately measured by the basic metrics.
Referencing patterns: Referencing patterns vary across disciplines, with some fields having a higher rate of publication and citation.

Research in STEM disciplines is typically conducted by research groups rather than individual researchers, and members of a group naturally reference the work of their colleagues. Basic research also attracts a higher rate of citation, as it is referenced both within its own field and by relevant applied research fields (e.g. both biochemistry and medicine).

Other fields, such as the humanities, social sciences, art and design, have different publishing patterns and may not attract many citations. The problem is exacerbated in very specialised fields with few researchers.

Algorithm limitations: The h-index, which requires high productivity in addition to impact, may not reflect the impact of seminal work by a key author who has relatively few outputs.
Publication format: Research outputs such as books, book chapters, reports and creative works are not well covered by scholarly/citation databases, which focus strongly on journal articles.
Purpose: Bibliometrics measure the impact of researchers and outputs on further research, but it is not easy to quantify the nature of that impact.

For example, a paper may receive many citations for negative reasons, although this is rare. More commonly, papers cite review articles and methods papers that are unlikely to be central to the exact topic under discussion.
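The field-normalised metrics discussed above (FWCI and CNCI) share a common idea: a paper's observed citation count is divided by the citation count expected for comparable publications of the same field, year and document type. The sketch below illustrates only this normalisation idea; the actual Scopus and Web of Science calculations are more involved, and the function name and figures here are hypothetical:

```python
def field_weighted_impact(citations, benchmark_citations):
    """Observed citations divided by the mean citations of comparable
    publications (same field, publication year and document type).
    A value above 1.0 means the paper is cited more than expected."""
    expected = sum(benchmark_citations) / len(benchmark_citations)
    return citations / expected

# Hypothetical paper with 12 citations, where comparable papers in the
# same field and year were cited 10, 6, 8 and 8 times (mean = 8):
print(field_weighted_impact(12, [10, 6, 8, 8]))  # → 1.5
```

This is why, as noted under "Bias" above, normalised metrics are fairer than raw counts: a recent paper is compared only against other recent papers, not against older ones that have had longer to accumulate citations.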


Research and Writing Skills for Academic and Graduate Researchers Copyright © 2022 by RMIT University is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.