Should we slow the development of AI?
In 1942, scientists at the Manhattan Project briefly feared that detonating the atomic bomb might ignite Earth’s atmosphere and destroy the planet.
Robert Oppenheimer brought in top physicists to examine the risk. It wasn’t an ethical debate—it was a physics problem. And fortunately, it had a clear answer: The laws of nature made such a chain reaction virtually impossible.
Physicists could calculate the outcome with confidence.
AI researchers face a different reality. The neural networks driving today’s most advanced systems are “black boxes”—even the experts can’t fully explain how they work or predict how they’ll behave.
A widening gap has opened between the speed of AI development and our ability to understand and govern it.
Unlike an atomic bomb, AI also has the potential to be a tremendous force for good: accelerating cures for disease, helping reverse climate change, improving education, and unlocking solutions we haven’t yet imagined.
So how do we embrace the benefits while guarding against the risks? And would slowing down AI cost us breakthroughs that could save lives?
This is Part II of our exploration of sophisticated AI systems. Last week, we investigated AI agents. This week, we explore the risks and upsides of powerful AI, and what’s possible as we near Artificial General Intelligence (AGI).
Definitions: AGI & ASI
First, some definitions are necessary:
AGI (Artificial General Intelligence): an AI system able to match or exceed human performance across virtually all cognitive tasks, rather than excelling only in narrow domains.
ASI (Artificial Superintelligence): a hypothetical AI system whose capabilities far surpass the best human minds in virtually every field.
ASI might be a ways off, but AGI could be around the corner. Anthropic’s CEO, Dario Amodei, thinks it could arrive as early as 2026. Sam Altman, CEO of OpenAI—a company whose stated mission is to ensure that AGI benefits all of humanity—has written that AGI is “coming into view.”
The risks of AI
Some think that, like the atomic bomb, AI could pose an existential risk to humanity, either by enabling the creation of smart weapons or by outmaneuvering its human handlers.
Keep The Future Human, an initiative of the Future of Life Institute (FLI), a Project Liberty Alliance member, is spearheaded by physicist Anthony Aguirre and explores AI’s risks. The initiative outlines six areas of risk:
“The only way to avoid AGI’s risks is not to build it—at least, not until we are sure it’s safe,” the report said. The public agrees. In a July 2023 survey conducted by the AI Policy Institute, 82% of Americans said we should “go slowly and deliberately” in AI development.
The optimists' case for AGI
However, not building AGI means forgoing its potential to be a transformational force for good. There are legitimate reasons to be optimistic:
How to keep the future human
There are likely many middle paths that harness the best of AI without losing control of it. In the Keep The Future Human initiative, Aguirre proposes an alternative to AGI that he calls “Tool AI,” which avoids the risks of AGI by limiting an AI system’s autonomy, generality, and intelligence. Tool AI would be:
Aguirre outlines four practical measures to prevent uncontrolled AGI:
These measures would require a coordinated effort by policymakers and industry leaders to redirect AI development away from its breakneck race toward general intelligence and into the safer harbor of responsible innovation.
One thing AI is not good at: summoning the political will to regulate it. This is partly because AI development has become a geopolitical race between nations. A country that self-regulates its use of AI might put itself at a disadvantage if other nation-states don’t follow suit (a dynamic also observed with nuclear weapons). After the AI Action Summit in Paris last month, the Hard Fork podcast noted how AI safety had taken a back seat to AI opportunism.
With its 2024 AI Act, the EU passed the world’s first comprehensive legal framework for AI. The Act regulates AI along a spectrum of risk while attempting to cultivate innovation (a tricky balance to strike).
In the U.S., the Biden Administration produced a symbolic (not enforceable) Blueprint for an AI Bill of Rights. Under the new Trump Administration, it remains unclear how AI policy will unfold.
The People's AI
Are autonomous, general AI systems yet another technology we will lose control of?
Is AGI the next step in a continuous progression where everyday people have less control over the technologies that rule their lives?
It’s possible to see something so intelligent and autonomous as a threat to our human voices and choices—principles at the heart of Project Liberty’s mission. Keeping such fast-moving technologies within our control is crucial to preserving our human rights, but it is a false dichotomy to conclude that we must choose between progress and safety. We need the full spectrum of our human imagination to see what’s possible.