One of the shared goals of researchers and publishers is to make a significant and recognized impact on the body of research--and one of the ways you can understand the breadth of your impact is how often and where your work is cited.
Impact Factor, a measurement of such influence, is crucial to journal ranking and reputation. And it swings both ways, as researchers, too, have impact factors based on their research influence. In some ways, these scores are a match-making service of sorts, with journals in pursuit of high-impact research, and researchers in pursuit of high-impact journals.
Publication in such high-impact journals can affect a researcher’s institutional evaluation and subsequent research funding. And in turn, a high Impact Factor helps a journal court prestigious research and increase subscription rates. Bottom line: Impact Factor affects the financial bottom line of all parties involved.
One of the ways in which this influence or reputation is codified is through the Journal Impact Factor, a calculation based on citations. While there are many ways in which a journal’s reputation can be measured and ranked, the Journal Impact Factor is, to date, the industry standard; it measures how frequently a journal’s articles are cited over a particular period of time.
Essentially, the Impact Factor is calculated by dividing the number of current-year citations to items published in that journal during the prior two years by the total number of citable items published in those two years (or, in the case of the five-year Impact Factor, the prior five years).
So a 2019 impact factor would be calculated in the following way:
2019 impact factor = A/B
(A = the number of times that all items published in that journal in 2017 and 2018 were cited by indexed publications during 2019, and B = the total number of citable items published by that journal in 2017 and 2018).
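The calculation above can be sketched in a few lines of code. The figures below are purely hypothetical, chosen only to illustrate the arithmetic:

```python
def impact_factor(citations_to_prior_two_years: int,
                  items_published_prior_two_years: int) -> float:
    """Journal Impact Factor: citations received this year to items
    published in the prior two years (A), divided by the number of
    citable items published in those two years (B)."""
    return citations_to_prior_two_years / items_published_prior_two_years


# Hypothetical journal: 1,200 citations in 2019 to its 2017-2018
# articles, of which 400 citable items were published.
print(impact_factor(1200, 400))  # 2019 impact factor of 3.0
```

The same function covers the five-year variant: simply pass citations to, and items from, the prior five years instead of two.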
Eugene Garfield, who defined the system in the 1960s, described it this way in 2006:
“The term ‘impact factor’ has gradually evolved to describe both journal and author impact. Journal impact factors generally involve relatively large populations of articles and citations. Individual authors generally produce smaller numbers of articles, although some have published a phenomenal number. For example, transplant surgeon Tom Starzl has co-authored more than 2000 articles, while Carl Djerassi, inventor of the modern oral contraceptive, has published more than 1300” (2006, p. 90).
Academic research and publishing seek quality content--and this content is measured largely by the number of citations.
The journal impact factor has attracted dissenters who cite inequity and the “tragedy of the commons.” Casadevall and Fang, among others, outline the ways in which the impact factor furthers inequity, narrowing research and funding to select journals and researchers. They state that “citation rate is an imperfect indicator of science quality and research,” adding that “an emphasis on citation rate as a measure of impact perversely discourages research in neglected fields that are deserving of greater study” (2014, p. 3).
Hoeffel addressed the debate in 1998, summing up the academic community’s acceptance and continued use of the impact factor:
“Impact Factor is not a perfect tool to measure the quality of articles but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation. Experience has shown that in each specialty the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high impact factor. Most of these journals existed long before the impact factor was devised. The use of impact factor as a measure of quality is widespread because it fits well with the opinion we have in each field of the best journals in our specialty” (1998, p. 1225).
Alternative scoring formats have arisen, such as Elsevier’s CiteScore, SCImago Journal Rank (SJR), and Source Normalized Impact per Paper (SNIP), all of which weigh citations heavily in their calculations. And while these alternatives illustrate incremental change, they have yet to disrupt the Journal Impact Factor.
So Impact Factor is a measurement that is likely to stay--and it is a measure that will affect the research landscape to come.
There’s a symbiotic relationship between publisher and researcher. When properly aligned in purpose and quality, the reputations of both can be improved.
So how are researchers and publishers aligned?
Publishers enforce quality, innovation, and impact through the following structure:
- Editorial selection for quality, innovation, and impact
- Peer review
- Editorial review and final selection
- Manuscript revision
- Final review
- Publication
Researchers experience the following arc as they publish:
- Selection of publishers for submission
- Writing and formatting to meet journal specifications
- Submission
- Revision
- Acceptance
- Publication
One can break down the selection of publishers further. In their research entitled “Selecting an Appropriate Publication Outlet: A Comprehensive Model of Journal Selection Criteria for Researchers in a Broad Range of Academic Disciplines,” Knight and Steinbach (2008) outline the variables researchers use to decide where to submit their work. They highlight three categories:
- “Likelihood of timely acceptance
  - Likelihood of acceptance
  - Timeline from submission to publication
- Potential impact of the article
  - Journal reputation (credibility and prestige)
  - Journal visibility
- Philosophical and ethical issues
  - Open access repositories
  - Library issues
  - Intellectual property/copyright issues” (p. 71).
The variables and components in selection for researchers and publishers range from research content to timeline to ethical issues. However, reputation is the one selection factor that holds a prominent position for both the researcher and publisher. We hope this helps you in your publishing journey.