Rethinking Academic Gatekeepers: The Collapse of Journals, Metrics, and the Future of University Knowledge

I. Introduction: The Changing Landscape of Academic Knowledge

Academic knowledge is undergoing a profound transformation. Long-standing pillars of scholarly communication—prestigious journals, subscription paywalls, and citation-based metrics—are cracking under the weight of digital innovation and a push for openness. The way research is created, evaluated, and shared is shifting from a closed, gatekept model to a more open and dynamic ecosystem. This changing landscape is visible everywhere: in the growth of open-access publications, the rise of preprint servers and academic social networks, and the widespread questioning of metrics like the impact factor and h-index. The “publish or perish” paradigm that dominated late 20th-century academia is now being re-examined in light of 21st-century values of transparency, accessibility, and equity.

This article explores the collapse of traditional academic structures and analyzes the ramifications for universities. We will journey through the historical rise of academic gatekeeping and then examine how these once-sturdy structures are eroding. In their place, new models—open access publishing, alternative metrics, and novel incentives—are emerging.

The tone is reflective and forward-looking: the goal is not to herald chaos, but to understand how dismantling old gatekeepers might lead toward a more equitable knowledge ecosystem. Throughout, we will consider the structural shifts as well as the human dimensions: the professors, librarians, students, and administrators navigating this academic upheaval. Using metaphors and systems-level thinking, we frame the scholarly world as an ecosystem in flux, where change brings both challenges and opportunities.


II. Historical Context: The Rise of Academic Gatekeeping

To appreciate the current upheaval, we must first understand how traditional academic gatekeeping arose. For over three centuries, scholarly journals have been the fortresses of knowledge, controlling what research is validated and disseminated. The first scientific journals of the 17th century (such as the Philosophical Transactions of the Royal Society of London, founded in 1665) established the template: experts (editors and peer reviewers) would decide which findings merited publication.

Over time, this gatekeeping function became formalized as peer review, turning journals into the arbiters of scientific quality and credibility. By the mid-20th century, publishing in respected journals had become the sine qua non of academic success. Universities and funding bodies increasingly used publication records as a proxy for a researcher’s contributions.

In parallel, the subscription model took hold. Academic journals, once modest society publications, evolved into a big business dominated by a few commercial publishers. Companies like Elsevier, Springer, Wiley, and Taylor & Francis built profitable empires by selling bundled journal subscriptions to university libraries. Through the late 20th century, journal prices rose steeply, outpacing inflation and straining library budgets.



Yet because publishing in these journals was critical for academic careers, universities felt compelled to pay: a captive market. By the 2010s, the five largest publishers controlled over half of all published papers and enjoyed profit margins in the 30-40% range. For example, Elsevier’s scientific publishing division routinely reported profits around 37%, margins on par with or higher than those of tech giants like Apple. This oligopoly of publishers created what critics call an affordability crisis, locking up knowledge behind expensive paywalls that only well-funded institutions could afford.

Another pillar of academic gatekeeping has been the use of bibliometric indices to measure impact. In the 1960s, information scientist Eugene Garfield introduced the journal impact factor primarily as a tool for librarians to identify important journals. However, this metric—measuring average citation counts per article in a journal—soon took on outsize importance as a shorthand for journal prestige.

Faculty hiring and promotion committees began to equate publishing in high-impact factor journals with research excellence. Later, in 2005, physicist Jorge Hirsch proposed the h-index as a simple way to quantify an individual scholar’s impact by balancing productivity with citation count. These measures were intended to be helpful heuristics. Yet over time they turned into gatekeeping mechanisms of their own, governing academic careers and institutional reputations.
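For concreteness: a scholar’s h-index is the largest number h such that h of their papers have each been cited at least h times. A minimal sketch of the computation, using hypothetical citation counts:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    # Sort citation counts in descending order, then find the last
    # position where the count still meets or exceeds its 1-based rank.
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited [10, 8, 5, 4, 3] times yield h = 4
# (four papers have at least four citations each).
print(h_index([10, 8, 5, 4, 3]))  # 4
```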

A high h-index or publications in top journals became tickets to jobs, grants, and tenure, while those who fell outside the metrics could be sidelined. This metric-driven culture reinforced the power of established journals and perpetuated a cycle: researchers felt pressure to conform to what journals and metrics rewarded, potentially at the expense of creativity or risk-taking.

By the turn of the millennium, academia’s knowledge ecosystem was firmly structured around these gatekeepers. Journals controlled what was published and who could read it; subscription fees controlled who could access knowledge; and citation metrics controlled how careers advanced.

This system did produce a vast expansion of scientific output and provided some measure of order and quality control. But it also centralized power in ways that would sow the seeds of discontent. Knowledge had become enclosed—confined within paywalled journals and distilled into numerical scores. The stage was set for disruption.

III. The Erosion of Traditional Structures

In recent years, the cracks in these traditional structures have widened into clear fractures. A convergence of factors—technological, economic, and cultural—has led many to conclude that the old model is no longer sustainable. Critics now argue that the scholarly journal system is “fundamentally broken” and in need of disruptive change. What exactly is eroding, and why?        


Journals and Paywalls Under Siege: The economic strain of journal subscriptions reached a tipping point in the 2010s. University libraries faced unsustainable costs as publishers kept raising prices (often far above inflation) for bundled subscriptions. This led to high-profile showdowns: for example, in 2019 the University of California system (which produces nearly 10% of U.S. research output) made headlines by cancelling its Elsevier subscriptions in protest. UC demanded that publicly funded research be made freely accessible, and it balked at Elsevier’s proposal to charge both subscription fees and separate open-access publishing fees, a double payout that would “result in much greater cost” and higher profits for the publisher. “Knowledge should not be accessible only to those who can pay,” declared Robert May, chair of UC’s Academic Senate, underscoring that open access is essential to the university’s mission.

Similar battles played out globally: library consortia in Germany, Sweden, and elsewhere cancelled big deals with publishers, while national funders threw weight behind open-access mandates (as we will see in the next section). These confrontations signaled that the subscription model’s legitimacy was waning. When even elite universities refuse to renew journal contracts, it suggests the paywall paradigm is collapsing under moral and financial pressure.

The erosion is not only about money; it is also about credibility and function. Cost, functionality, and reliability problems interact to undermine the traditional scholarly publishing system, and three interlinked crises afflict the old model:

  • First, an affordability crisis: profit-maximizing journals overcharge libraries and drain resources, “overcharg[ing] institutions by a factor of up to tenfold” compared to what a fair market might require. This leaves universities with little budget to invest in modern scholarly infrastructure.
  • Second, a functionality crisis: despite huge profits, journals have been slow to innovate. Peer review remains labor-intensive and error-prone, article submission pipelines are often cumbersome, and crucial tools for reproducibility (sharing data, code, and peer review transparency) are lacking. Researchers end up wasting time on antiquated processes, and the system offers “missing support for scrutiny,” meaning it’s hard to thoroughly vet and reuse research findings.
  • Third, a replicability crisis: mounting evidence shows that many published results, even in top journals, cannot be reproduced or are less reliable than expected. Paradoxically, the prestige race (driven by journals and metrics) encourages journals to favor flashy, novel results over solid, confirmatory science, thereby undermining reliability.

These three crises fuel each other in a vicious cycle: exorbitant costs prevent investment in better tools, poor tools contribute to unreliable science, and sensational unreliable findings feed the prestige (and pricing) of certain journals, which then further inflate costs. In short, the very structures that were supposed to ensure quality and dissemination are failing on both counts—they restrict access and sometimes compromise quality.

The Decline of the H-index and Traditional Metrics: Alongside the crumbling journal system, we are witnessing a collapse in the authority of traditional impact metrics. A growing chorus within academia points out that indices like the impact factor and h-index, once taken as gospel, are deeply flawed measures of a scholar’s worth or a work’s value. Even Jorge Hirsch, the inventor of the h-index, has warned that it can “fail spectacularly and have severe unintended negative consequences” if used in isolation.

One striking study in 2021 found that the correlation between a physicist’s h-index and their peer-recognized excellence (measured by prestigious awards) has plummeted to essentially zero in recent years. From 1990 to 2010, the h-index somewhat tracked scientific recognition (Kendall’s tau correlation ~0.33), but by 2019 that link had vanished – a result of changing authorship patterns (e.g. massive multi-author papers) and gaming of the system.
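The analysis behind this finding is straightforward to sketch. Below is a minimal illustration of the rank-correlation check using SciPy’s kendalltau; the scholar data are invented for illustration and are not drawn from the study:

```python
# Rank correlation between h-indices and a peer-recognition signal
# (e.g. counts of major awards). All numbers here are made up.
from scipy.stats import kendalltau

h_indices   = [45, 32, 78, 51, 60, 24, 39, 55]  # hypothetical scholars
recognition = [ 2,  1,  1,  3,  0,  0,  1,  2]  # hypothetical award counts

tau, p_value = kendalltau(h_indices, recognition)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.2f})")
# A tau near 0, as Koltun & Hafner found for recent years, means the
# h-index carries essentially no information about peer recognition.
```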

In other words, the h-index no longer reliably distinguishes true impact: a researcher can accumulate a high h-index through dozens of minor contributions to big team projects, while genuinely field-changing scientists might be undervalued. Similarly, the journal impact factor has been denounced as “an inappropriate measure” of individual quality.

The San Francisco Declaration on Research Assessment (DORA) in 2013 formally called for abandoning the use of impact factors and simple citation counts in hiring and promotion decisions. Many organizations and universities have since signed on. The movement for more holistic research evaluation has gained momentum: in 2021, Utrecht University in the Netherlands became one of the first to officially drop impact factor and other metrics from its faculty assessments, choosing instead to consider qualitative contributions, open science practices, and teamwork. “Impact factors don’t really reflect the quality of an individual researcher,” explained a Utrecht dean, reflecting a “strong belief that something has to change”. When a major research university abandons a metric that had been a cornerstone of academic CVs for decades, it’s a clear sign that the old metric regime is disintegrating.


In sum, the traditional system of academic publishing and evaluation is eroding on multiple fronts. The gatekeepers are losing their gatekeeping power. Paywalls are increasingly seen as barriers to be overcome, not accepted. The authority of journals is questioned both on cost and quality grounds. And the once-dominant metrics that hierarchized scholars are being unmasked as simplistic and sometimes counterproductive. These changes did not occur overnight—but a steady groundswell over the past two decades has reached a tipping point. We are witnessing what might be called an “academic paradigm shift”: the collapse (or at least the profound reform) of how scholarly knowledge is curated and valued.

IV. The Rise of Open Access and Alternative Metrics

As the old system falters, new models are rising to take its place. Open access (OA) publishing is at the forefront of this transformation, fundamentally reimagining how knowledge is shared. Alongside it, alternative metrics and evaluative frameworks are emerging to capture the impact of research in more nuanced ways than a single number. These innovations represent a more open, inclusive approach, aiming to democratize knowledge and reward a broader range of contributions.

Open Access: Toward Unrestricted Knowledge. The open access movement gained serious momentum in the early 2000s as the internet made distribution of articles essentially cost-free. The principle of OA is simple but powerful: research articles should be available to readers without paywalls, either immediately upon publication or after a short delay, and ideally with reuse rights. Over the past two decades, the growth of open access has been remarkable. By 2020, over 56% of all academic articles were available in some form of open access. This share has risen steadily each year; in 2011 it was only ~36%, and back in 2000, under 20% of articles were OA.

In fields like biomedical sciences and physics, freely accessible preprint servers (arXiv, bioRxiv, medRxiv, etc.) and repositories have normalized the idea that new findings can be shared with the world before peer-reviewed journal publication. At the same time, major research funders have pushed journals to allow or require open access.

In 2018 a coalition of European funding agencies announced Plan S, declaring that from 2021 onward, any research they fund must be published in compliant open access platforms or journals. This assertive policy sent a strong signal that closed access would no longer be tolerated for publicly funded science. Governments and funders in other regions echoed the sentiment. The result has been a surge in both fully open access journals (where all content is free to read) and hybrid models where subscription journals allow individual articles to be made open if authors or institutions pay an upfront fee.

Importantly, open access is not just an ideological stance; it is changing the incentive structure of publishing. In a subscription world, journal revenues came from readers (or libraries), but in an OA world, the revenue often comes from authors (via Article Processing Charges, or APCs) or from subsidies.

This shift has empowered some new entrants: organizations like the Public Library of Science (PLOS) and BioMed Central pioneered high-quality journals that are free to read, funded by publication fees or philanthropy. Even the legacy publishers have had to adapt—Springer Nature reported that by 2023, 50% of their research articles were published as open access. We now see consortial “read-and-publish” agreements where universities pay a lump sum that covers both subscription access for remaining paywalled content and the APCs to publish their researchers’ work openly. The overall trend is clear: open access is becoming the default expectation. Knowledge, like a public good, wants to circulate freely. And indeed, studies confirm an “open access citation advantage” – openly available articles receive about 18% more citations on average than similar closed articles, suggesting that wider accessibility boosts scholarly impact as well as public engagement.



However, the rise of open access comes with new challenges and models (which we will discuss more in Section VI). It’s not a panacea; rather, it’s part of a broader cultural shift. Along with open access to literature, we see movements for open research data, open-source analysis code, and open peer review. Collectively, this is often termed Open Science, aiming to make all aspects of research transparent and accessible.

Universities are increasingly establishing open access repositories for their faculty’s publications and adopting policies that encourage or require depositing papers in these repositories. The arc of progress points toward a future where the fruits of academic research are globally and freely available, leveling the playing field for readers from all institutions and countries. The traditional journal, once the exclusive gateway, is now one node among many in a more decentralized knowledge network.

Alternative Metrics and New Evaluation Paradigms: Alongside opening access, the scholarly community is rethinking how to measure impact and quality. A consensus is emerging that no single metric can capture the multifaceted contributions of researchers. This has led to the development of altmetrics (alternative metrics) and qualitative evaluation frameworks. Altmetrics attempt to gauge the broader ripple effects of a piece of research by looking at online indicators: how often an article is viewed, downloaded, mentioned in news or social media, bookmarked, or referenced in policy documents and blogs. These measures can capture immediate interest and societal impact in ways that citations (which accumulate slowly and only in other academic papers) cannot. For instance, an important climate change study might be shared thousands of times on Twitter or picked up by newspapers and NGOs – altmetrics would reflect this influence, whereas traditional metrics might not register it for years or at all.

Many journals and institutional repositories now display an “Altmetric score” or similar on articles, represented by a colorful donut chart, to visualize this attention. While altmetrics are still evolving and can themselves be gamed or misinterpreted, they represent an effort to broaden the notion of impact beyond the ivory tower.
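As a toy illustration of how such a composite attention score can work, consider a weighted sum over mention sources. The weights below are invented for this sketch; real providers use their own proprietary, source-specific weightings:

```python
# Toy altmetric-style attention score. The weights are assumptions made
# for illustration only, not any provider's actual scheme.
ASSUMED_WEIGHTS = {
    "news_stories": 8.0,
    "blog_posts": 5.0,
    "policy_documents": 3.0,
    "social_media_posts": 0.25,
}

def attention_score(mentions: dict[str, int]) -> float:
    """Weighted sum of online mentions across tracked source types."""
    return sum(ASSUMED_WEIGHTS.get(src, 0.0) * n for src, n in mentions.items())

# Example: a climate study picked up by 12 news outlets, 4 blogs,
# 1 policy document, and 900 social media posts.
print(attention_score({
    "news_stories": 12,
    "blog_posts": 4,
    "policy_documents": 1,
    "social_media_posts": 900,
}))  # 12*8 + 4*5 + 1*3 + 900*0.25 = 344.0
```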

Moreover, a number of universities and funding agencies are implementing narrative-based and portfolio approaches to researcher evaluation. Instead of counting papers and citations, they ask scholars to provide a narrative of their most significant contributions, which could include mentorship, software creation, community outreach, or interdisciplinary projects. There is also a push to value open science practices: sharing data and code, publishing in open forums, engaging in peer review and replication studies. For example, when Utrecht University revamped its hiring and promotion criteria, it explicitly emphasized open science and teamwork as key factors, effectively rewarding researchers for behaviors that benefit the collective enterprise of knowledge.

International initiatives are reinforcing this change. DORA (the San Francisco Declaration mentioned earlier) is one; the Leiden Manifesto of 2015 set out principles for the responsible use of metrics; and in 2022 a coalition of research organizations signed the EU Agreement on Reforming Research Assessment, which calls for ending the misuse of metrics and instead evaluating quality, impact, and engagement on a case-by-case basis. As Bernard Rentier, a leader in open science, aptly put it: “It will never be possible to harmoniously implement open science without a universal consensus on a new way of evaluating research and researchers”. In other words, open practices and evaluation reform must go hand in hand. We cannot free knowledge if we are still rewarding academics solely for hoarding prestige in closed venues. A new balance is needed.

The seeds of alternative metrics and evaluation are still growing, but we already see some tangible changes. Promotion committees increasingly consider citation context (not just counts) and seek external letters that comment on quality over quantity. Funding agencies might ask for an applicant’s top 5 most meaningful outputs rather than a full publication list.

Platforms like ResearchGate, arXiv, and even LinkedIn allow researchers to showcase engagement and reach (views, downloads, discussions) that never show up in an h-index. While no one expects a single altmetric score to replace the h-index, the mindset is shifting to pluralistic assessment: using many indicators and expert judgment in concert. This more nuanced approach reduces the risk of the perverse incentives that the old metrics created (such as salami-slicing research into least-publishable units, or chasing hot topics for citations). It aligns evaluation with the values of open, collaborative scholarship.

In summary, the rise of open access and alternative metrics represents a hopeful counterpoint to the collapse of traditional structures. They are building new channels and norms on the foundations of the old. Knowledge is flowing more freely than ever, and impact is being recognized in richer ways. These developments are poised to profoundly affect how universities operate, as we explore next.

V. Implications for Universities

Universities stand at the nexus of this transformation. As the primary producers and consumers of scholarly knowledge, they must adapt to the collapse of traditional structures and the advent of new models. The implications extend to every facet of the university’s mission: research, teaching, funding, and public service. Here, we consider how universities are responding and what challenges and opportunities they face in this evolving landscape.



Redefining Libraries and Budgets: University libraries have been on the front lines of the access wars. In the subscription era, libraries devoted huge portions of their budgets to licensing journals and databases. As those paywalls fall (whether through cancellations or open access deals), libraries are redefining their role. The money once spent on subscriptions is increasingly being redirected towards supporting open access publishing. This can take the form of contributing to consortia that pay publishers lump sums to make all member output open (the “read-and-publish” agreements), or directly funding faculty publication fees in open access journals.
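A toy budget comparison makes the redirection concrete. All figures below are invented for illustration; real big-deal fees and APCs vary widely:

```python
# Toy annual-spend comparison for a single library. Invented figures.
subscription_bundle = 1_000_000   # assumed "big deal" fee, read access only
papers_published    = 400         # assumed faculty output per year
apc                 = 2_000       # assumed per-article processing charge

gold_oa_spend = papers_published * apc
print(f"Subscription-only spend: ${subscription_bundle:,}")
print(f"Pure APC spend:          ${gold_oa_spend:,}")   # $800,000
# A read-and-publish deal folds both into one negotiated sum; whether it
# saves money depends on publication volume and the negotiated rates.
```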

Libraries are also investing in institutional repositories and digital infrastructure to host research outputs locally, ensuring that even if a researcher publishes in a traditional journal, a version of the paper can be made available for free via the university’s repository (so-called “green open access”).

The power dynamic is shifting: instead of being beholden to commercial publishers, libraries are asserting their leverage as custodians of university intellectual output.

The University of California’s bold stance against Elsevier in 2019, for example, was a wake-up call that libraries can negotiate from a position of principle and strength, not just succumb to price hikes. In the long run, universities could save money if the open access model removes duplicative subscription costs—though in the short term, they often find themselves paying for new open access fees while still maintaining some subscriptions, a tricky period of transition.

Faculty Rewards and Tenure Policies: Perhaps the most sensitive implication is how universities evaluate their researchers. The collapse of metrics like the h-index and the movement away from journal prestige requires universities to update promotion and tenure criteria. This is no small task: academic careers have for generations been built around accumulating high-impact publications and citations.

Universities now face the challenge of crafting better evaluation frameworks that might include qualitative assessments, peer testimonials, and diverse outputs. Some forward-thinking institutions have begun this process. As noted, Utrecht University now looks at a scholar’s commitment to open science, collaboration, and societal impact as part of their hiring and promotion considerations.

Others are experimenting with what’s called a “holistic academic portfolio”, where teaching, mentoring, software development, community engagement, and other contributions sit alongside publications as valued components. This broadening of criteria can significantly change researcher behavior: if depositing data sets or writing public-facing articles is valued, researchers are more likely to do these things (whereas previously those might be seen as distractions from paper-counting). The elimination of rigid metrics can also reduce the pressure to churn out papers at the cost of quality of life. Younger academics, in particular, have reported burnout from trying to meet escalating quantitative targets; a more balanced approach could improve mental health and creativity in the research community.

Innovation in Publishing and Peer Review: Some universities are taking a direct role in shaping new forms of publishing. University presses and library-led publishing initiatives are launching open access journals and platforms for open peer review. For example, there is a trend of universities hosting “megajournals” or repositories where faculty can post working papers, which are then peer-reviewed in the open or evaluated post-publication. The idea of overlay journals has emerged: these are curated selections of articles that are already available as preprints, essentially layering the editorial function on top of open repositories.
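One way to see how thin the overlay layer is: the journal need store nothing but pointers into open repositories plus editorial metadata. A minimal sketch of such a data model follows; the field names and identifiers are hypothetical:

```python
# Minimal sketch of an overlay journal's data model: curation without
# hosting. Field names and identifiers are hypothetical.
from dataclasses import dataclass, field

@dataclass
class OverlayArticle:
    preprint_id: str    # e.g. an arXiv-style ID such as "2301.01234"
    repository: str     # where the full text actually lives
    reviews: list[str] = field(default_factory=list)  # open review reports
    accepted: bool = False  # the editorial judgment layered on top

@dataclass
class OverlayJournal:
    title: str
    articles: list[OverlayArticle] = field(default_factory=list)

    def accept(self, article: OverlayArticle) -> None:
        # "Publishing" is just marking a preprint as curated; the text
        # never leaves the open repository.
        article.accepted = True
        self.articles.append(article)
```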

Such models diminish the monopolistic control of traditional journals. Universities, through their presses or consortia, could become important publishers in their own right, with an emphasis on academic community governance rather than profit. This re-invests control in the scholarly community and aligns with the suggestion of creating a “decentralized, resilient, evolvable network” for scholarly communication.

The 2023 paper by Brembs et al., for instance, called for redirecting money from legacy publishers into open infrastructure governed by academia itself. If universities heed this call, we could see a renaissance of non-profit publishing, where the reputational capital of universities (and their faculty expertise) is leveraged to run journals and platforms that compete with commercial outlets, but without the same paywalls and costs.

Curriculum and Student Access: The implications aren’t limited to research. University teaching and student learning benefit greatly from the open knowledge movement. When more research articles and data are freely available, students gain access to cutting-edge findings without encountering paywall barriers. This can enrich the curriculum, allowing professors to assign the latest literature regardless of whether the library has a subscription. It also empowers students to conduct research and literature reviews more effectively.

Universities are increasingly adopting open educational resources (OER) – freely accessible textbooks and course materials – in parallel with open research. The philosophical alignment is clear: if we want an equitable knowledge ecosystem, it must extend to how we educate the next generation. Students at well-resourced universities and those at smaller colleges or in developing countries should all be able to read the same articles.

By embracing open access, universities amplify their global impact and uphold the ideal of knowledge as a public good. We are already seeing, for example, major universities partnering with initiatives like the Open Textbook Library or UNESCO’s OER network, signaling that openness is permeating all levels of academia.

Global and Equity Considerations: As traditional gatekeepers fade, universities must also navigate their role in a global context. The old system was often criticized for reinforcing inequalities: wealthy institutions could pay for journals and APCs, and researchers from English-speaking or high-income countries dominated authorship in top journals (often excluding voices from the Global South).

A more open system has the potential to democratize this landscape. Universities in lower-income regions stand to gain from open access (as readers) and also from new publishing models that don’t require hefty fees (as authors). We might see more South–North collaborations and a diversification of recognized scholarship as access barriers fall.

However, there’s a caution: if open access is primarily achieved through APCs paid by authors, it could create a new inequity where those with grants can publish more easily than those without. Universities and funders will need to devise mechanisms (waivers, subsidies, publishing funds) to ensure researchers from less-funded institutions can also publish in open venues. Many universities are indeed establishing open access funds for their faculty, and international agencies are looking at ways to support journals that don’t charge fees to authors or readers. The principle of equity must be actively safeguarded even as we celebrate the collapse of the old barriers.


In essence, the implications for universities are transformative. They have the opportunity to reclaim agency over scholarly communication: to align it with academic values rather than the profit motives of publishers, and to align evaluation with the true mission of academia (advancing and disseminating knowledge) rather than narrow prestige metrics.

Universities that adapt nimbly will likely enhance their reputation and fulfill their public mission more effectively. Those that cling to the old ways may find themselves on the wrong side of history, much as any institution that resisted the printing press or the internet eventually had to catch up. Yet, transitioning to new models is complex, and as we discuss next, there are challenges to be navigated.

VI. Challenges and Considerations

The collapse of traditional academic structures and the rise of new models bring not only excitement but also a host of challenges. Change at this scale is inherently disruptive, and even positive transformations can have unintended side effects. In moving toward a more open and equitable knowledge ecosystem, universities and the scholarly community must contend with practical and ethical questions. We highlight some key considerations and challenges:

Sustainable Funding Models: If paywalled subscriptions decline, how will scholarly publishing be sustainably funded? Open access is not free to produce; there are costs for managing peer review, editing, and maintaining platforms. The prevailing model of Article Processing Charges has drawn criticism for simply shifting the burden – from readers to authors (or their institutions). This creates a risk that underfunded researchers or universities can’t afford to publish, a “pay to publish” barrier that could replicate inequality in a new form.

Indeed, a study of global trends found that as open access publishing grows, there’s a concern it could lead to “closed research” for those who can’t pay APCs. To address this, various approaches are being tried: consortium-based funding (like SCOAP3 in high-energy physics, where libraries pooled funds to make journals free for authors), institutional subsidies for non-profit journals, or new models like diamond open access (journals that charge neither readers nor authors, often funded by universities or governments).

There is also discussion of shifting big subscription budgets into centralized platforms that serve all (for example, a global arXiv-like repository for all fields). While no one model has universal consensus, it’s clear that careful thought is needed to avoid replacing one form of exclusivity with another. Universities and consortia might negotiate not just open access, but also cost controls and transparency from publishers to ensure reasonable pricing.

The community could also push for more volunteer-driven or cost-efficient publishing frameworks; the success of platforms like arXiv, which operates at a tiny fraction of the per-article cost of commercial publishers, shows that radical cost reduction is possible when profit is removed from the equation.

Maintaining Quality and Peer Review: Traditional journals, for all their flaws, have provided a form of quality control via peer review and editorial curation. If “the whole concept of a journal is kind of dead” as one expert quipped, how do we ensure the reliability of the scholarly record? Open dissemination means anyone can put out a paper or data instantly, but not all outputs are equal in quality.

The challenge is to develop new mechanisms of trust and quality assurance in an open system. This might involve open peer review (where reviews are published alongside articles), post-publication review (continuous evaluation by the community after an article is public), and improved metadata and indexing to help researchers filter the signal from the noise.

The replication crisis has already forced introspection on improving peer review rigor; open science practices like sharing data and code help here by making it easier for others to check work. Universities can contribute by training researchers in reproducibility and supporting initiatives that raise standards (such as data repositories and methodology training). Another concern is the rise of predatory journals – fly-by-night outlets with deceptive practices that sprang up to capitalize on the author-pays gold rush.

While open access itself is not the culprit (predatory publishers existed because some authors will pay for easy publication), their proliferation in the 2010s tarnished the image of open access. The community has responded with better due diligence (e.g. whitelists/indexes of reputable journals, and policies that only count publications in credible venues). Ongoing vigilance is required so that the collapse of gatekeeping does not result in a flood of unvetted science overwhelming the literature. In a sense, the function of gatekeeping must remain, but ideally distributed across the community rather than concentrated in a few journal brands.

Cultural Change and Buy-In: Changing entrenched academic culture is perhaps the hardest challenge. Senior faculty who built their careers on the old system may be resistant to new metrics or publishing practices. Junior researchers often face a double bind: they are told to embrace open science and not chase impact factors, yet when they apply for jobs or grants, they worry that reviewers will still look for those traditional signals.

This transitional phase requires strong leadership from universities and funding agencies to signal that they genuinely value the new modes. The more major institutions and consortia proclaim (and demonstrate) that they will not penalize researchers for publishing in non-traditional venues or for having a modest h-index, the faster the culture will change. The agreement by hundreds of organizations on reforming research assessment in 2022 is a promising sign.

But implementation on the ground—updated promotion guidelines, evaluator training, and so forth—is the critical next step. It’s also important to acknowledge the human element: many researchers feel anxiety amid these changes. The rules of the game are shifting, and with them, the sense of identity and success in academia. Universities should foster open dialogues and provide mentoring on how to navigate the new landscape (for instance, how to document and communicate one’s impact in narrative terms). If done inclusively, this cultural shift can be empowering: scholars can focus more on meaningful, creative work and less on playing the numbers game.

Technological Infrastructure and Accessibility: The vision of an open, interconnected knowledge network relies heavily on technology. Universities and the scholarly community must invest in robust infrastructure: interoperable repositories, persistent identifiers (like DOIs for articles, ORCID for authors), text and data mining tools, and perhaps even AI-assisted discovery systems to help manage the deluge of open information.
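As a small illustration of what persistent identifiers buy, a DOI can be resolved to machine-readable metadata through the public Crossref REST API. A minimal sketch (the example DOI is constructed from the Brembs et al. entry in the source list, so treat it as an assumption):

```python
# Minimal sketch: resolve a DOI to structured metadata via Crossref's
# public REST API. The example DOI is inferred from the reference list.
import requests

def fetch_metadata(doi: str) -> dict:
    """Return Crossref's metadata record for a given DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    resp.raise_for_status()
    return resp.json()["message"]

record = fetch_metadata("10.1098/rsos.230206")  # Brembs et al. (2023)
print(record["title"])                       # article title(s)
print(record.get("is-referenced-by-count"))  # citations known to Crossref
```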

There is an opportunity to design systems that facilitate collaboration and discovery in ways the old system couldn’t. For example, imagine an AI-driven literature review assistant that can ingest all open papers on a topic and highlight insights, or a platform that automatically links datasets to publications and to subsequent reuse. These are within reach if openness is the default.

However, lack of investment or coordination could impede progress. This is why scholars like Brembs et al. call for a standards body under scholarly governance to ensure development of open infrastructure. The risk if we fail to do so is that new private entities could recentralize control (for instance, large tech companies or academic social networks that are closed) – a new kind of gatekeeper. It’s crucial that the community keeps control of the infrastructure that supports open knowledge, to prevent a drift back into monopolies.

Legal and Ethical Considerations: Finally, there are legal and ethical wrinkles. Intellectual property policies may need revision: faculty often sign away rights to publishers; in an open regime, universities might encourage retaining rights or using licenses like Creative Commons. There’s also the issue of academic piracy – projects like Sci-Hub emerged as illicit alternatives to paywalls. In a future where everything is legally open access, Sci-Hub would be unnecessary, but getting from here to there involves legal battles and interim solutions.

Ethically, the push for openness must also respect privacy and sensitive data concerns (for instance, open data in medical research must be handled carefully to protect patient confidentiality). And as metrics change, ethical behavior needs reinforcement: if altmetrics count media mentions, we must discourage hype and press-release science; if collaboration is valued, we must ensure fair credit distribution. The system should incentivize good science and good behavior.

In confronting these challenges, the academic community is learning and adapting. Importantly, none of these challenges are insurmountable—indeed, they are receiving active attention in ongoing policy discussions and experiments in scholarly communication. The equilibrium is shifting, but it requires deliberate course correction to avoid new problems. It is a time of both uncertainty and optimism, as the next section concludes.


VII. Conclusion: Toward a More Equitable Knowledge Ecosystem

The disassembly of traditional academic journals, subscriptions, and metrics is not an end, but a means to an end: the creation of a more equitable, accessible, and innovative knowledge ecosystem. We are witnessing the early stages of that transformation. In this forward-looking moment, it’s worth reflecting on what we stand to gain if we navigate the transition wisely.

Envision a scholarly world a decade from now: A young researcher publishes her findings immediately on a community-run platform, where peers worldwide can read and comment openly. Her work is evaluated not by the prestige of the outlet or a crude citation tally, but by a careful assessment of its rigor, the openness of her methods, and the significance of the insights for both science and society. She receives credit for sharing her data and software, which others have already reused to advance the field. When her university considers her for tenure, they review a rich portfolio: not only her articles (all freely accessible to anyone), but also her mentorship of students, her outreach activities (including a blog that communicates science to the public, which is read widely), and perhaps the policy changes her research informed.

In this world, knowledge flows as freely as oxygen through the atmosphere, invigorating innovation and cross-pollination across disciplines and borders (to borrow the spirit of my environmental analogies). The carbon-like weight of paywalls and perverse incentives has been lifted, allowing the flame of curiosity to burn brighter.

This vision is admittedly idealistic, but it is increasingly attainable. The systems-level view shows positive feedback loops that can propel the change: open access leads to broader dissemination and faster discovery, which in turn validates the open approach and encourages more of it. Alternative metrics and evaluation, if done right, reward collaboration and transparency, which then foster better science that benefits everyone, reinforcing the value of those new metrics.

Universities, by aligning their incentives with societal needs (knowledge as a public good), can restore public trust in academic research and fulfill their enlightenment mission in the digital age. The ecosystem as a whole becomes more resilient and just: no region or institution is excluded from knowledge, and no researcher is solely defined by a single number or the title of a journal.

Of course, reaching a fully equitable knowledge ecosystem will require continued advocacy, experimentation, and perhaps a few missteps corrected along the way. There will be debates and differences in approach. Some fields might transition faster than others. But the momentum is clearly in the direction of openness and fairness. The COVID-19 pandemic provided a telling example: in the face of a global crisis, scientists and publishers tore down walls (papers about the novel coronavirus were made open immediately, data was shared in real-time) and innovation accelerated, leading to vaccines in record time. This showed the world what was possible when knowledge sharing was prioritized. The post-pandemic academic world is unlikely to simply revert to old norms without question.

In concluding, it’s important to highlight the human dimension once more. Academia is not just an abstract system; it is a community of people devoted to learning and discovery. The collapse of old structures can be disorienting, but it is also liberating. It opens the door for a healthier culture where collaboration trumps competition, where diverse contributions (teaching, research, public engagement) are valued, and where knowledge is not a commodity for the few but a commonwealth for the many. The journey toward a more equitable knowledge ecosystem is a collective one. Universities, publishers, researchers, students, and policymakers each have roles to play in rebuilding the ecosystem’s architecture.

The reward for success is immense: a scholarly enterprise that is more inclusive, innovative, and impactful. In such an ecosystem, the barriers that once separated disciplines, regions, and people from knowledge will be markedly lower. The atmosphere of academia, if you will, will be richer in the oxygen of accessible ideas and less polluted by the excess carbon of outdated metrics and paywalls. It is a future worth striving for. Thoughtfully and deliberately, we are rethinking the balance of our academic atmosphere, moving ever closer to a knowledge climate that benefits all.

Sources:

  • Brembs, B. et al. (2023). Replacing academic journals: A proposal for decentralized scholarly infrastructure. Royal Society Open Science, 10(7): 230206.
  • University of California Office of the President (2019). UC terminates subscriptions with Elsevier in push for open access.
  • Piwowar, H. et al. (2018). The state of OA: a large-scale analysis of the prevalence and impact of Open Access articles. PeerJ, 6: e4375.
  • Hirsch, J. (2020). H-index, past and present: Evaluating scientific impact. Physics and Society, Jan 2020.
  • Woolston, C. (2021). Impact factor abandoned by Dutch university in hiring decisions. Nature 595, 462.
  • Koltun, V. & Hafner, D. (2021). The h-index is no longer an effective correlate of scientific reputation. PLoS ONE, 16(6): e0253397.
  • Curcic, D. (2023). Open Access Publishing Statistics. WordsRated (June 2, 2023).
  • Pourret, O. et al. (2022). Toward More Inclusive Metrics and Open Science to Measure Research Assessment. Front. Res. Metrics & Analytics.
  • University of California Academic Senate (2019). Statement on scholarly communication and open access.
  • Nature Index (2019). What’s wrong with the h-index, according to its inventor.
