Playing with Nukes
Mustafa Suleyman's book warns that we must bring AI under the same controls as nuclear weapons
This is the fourth post on The Coming Wave: Technology, Power and the 21st Century's Greatest Dilemma, by Mustafa Suleyman. It was released Sept. 5, 2023. The first post, explaining why I’m reading it, is here. The second post, on why the coming wave is so, so big, is here. The third post, on the threat that the coming wave poses to the nation-state, our basic building block of civilization for the last several hundred years, is here. This fourth post, my final one, covers Suleyman's proposed solutions.
In April, I will be going through John Inazu's forthcoming book Learning to Disagree: The Surprising Path to Navigating Differences with Empathy and Respect.
In February, I went through Nick Troiano’s The Primary Solution.
In January, I went through Michael Wear’s The Spirit of Our Politics.
The book of the month schedule is here.
The day I published this post, Politico released an in-depth reported article about the efforts to regulate AI, which you can read here.
There are many signs that, in the public consciousness, Big Tech has moved firmly from the "good" column to the "bad" one over the last decade.
"Not long ago we looked to Silicon Valley as the place where dreams came from, but now it feels more like ground zero for the next dystopian nightmare," Ted Gioia wrote recently.
Gioia made that observation in a post about how an audience at South by Southwest booed a video promoting AI. The loudest boos came in response to the claim in the video that "AI fundamentally makes us more human."
Similarly, author Jonathan Haidt remarked with some apparent surprise, in a recent New York Times profile, on how receptive people have been to his latest book, a jeremiad against phone addiction, social media, and the Big Tech companies creating these things.
"It’s a regime that we all hate," Haidt said.
But hate is not enough to avoid the dystopian future that AI could bring about. And neither is regulation, according to Mustafa Suleyman.
The EU AI Act
Suleyman spends about 60 pages at the end of The Coming Wave walking through his approach to how we contain and constrain the coming changes from AI, advanced biotech, quantum computing, and robotics.
Regulation is a start, he argues. But it's not enough on its own.
"At least it's a start," Suleyman writes. "Regulations like the EU AI Act do at least hint at a world where containment is on the map, one where leading governments take the risks of proliferation seriously, demonstrating new levels of commitment and willingness to make serious sacrifices" (232).
Suleyman's book came out last fall, a few months before the EU passed the final version of the AI Act this year, but he wrote that the world's first major regulatory framework for this technology "is bedeviled with problems, sure, but there is much to be praised in its provisions, and it represents the right focus and ambition" (260).
But again, Suleyman makes an extended argument for something more ambitious and expansive than regulation alone. One of the big reasons for this is the speed with which AI is evolving, as governments and tech companies race one another for power and wealth.
Regulation alone will fail to keep up with this breakneck pace. There must be something more robust.
Ten Steps Toward Containment
Suleyman proposes a multi-layered approach covering all the possible angles. He calls it "layers of the onion" (239) that build on and reinforce one another.
Here are the 10 steps:
Technical safety
Audits
Choke points
Makers
Businesses
Government
Alliances
Culture
Movements
Coherence
I won't go through all of these. This section alone is worth the price of the book. But I'll go through a few things that stood out.
Open-Source Has Been Good, but for AI, Not So Much
First, Suleyman takes a hard stance against making AI technology open-source, something that Meta did last year with its LLaMA AI model, which powers chatbots.
"Open-source has been a boon to technological development and a major spur to progress more widely," Suleyman writes. "But it's not an appropriate philosophy for powerful AI models or synthetic organisms; here it should be banned" (277).
"If everyone in the world can play with nuclear bombs, at some stage you have a nuclear war."
In fact, Suleyman argues that with highly advanced AI, extreme precautions should be taken to keep it from escaping the lab in which it's being worked on. He compares it to a biolab.
"'Boxing' an AI is the original and basic form of technological containment. This would involve no internet connections, limited human contact, a small, constricted external surface. It would, literally, contain it in physical boxes with a definite location," Suleyman writes. "A system like this — called an air gap — could, in theory, stop an AI from engaging with the wider world or somehow 'escaping'" (241).
Similarly, Suleyman notes that while he was a "privacy maximalist" in his twenties, he now believes that we will need to "accept greater levels of oversight and regulation" on the internet, while also resisting moves to "complete surveillance" by government (277).
It's a "narrow path" approach, with danger on both sides.
We Need Way More Focus on AI Safety, and May Need to Mandate It
There were only about 300 to 400 AI safety researchers in 2022, he writes, compared to 30,000 to 40,000 AI researchers.
Suleyman calls for "an Apollo program on AI safety and biosafety" (242) similar to how the U.S. launched a national program to get mankind on the moon.
He suggests that one idea for legislation would be to "require that a fixed portion — say, a minimum of 20 percent — of frontier corporate research and development budgets should be directed toward safety efforts."
Government controls
Suleyman goes over the different choke points in the supply chain, and suggests governments use those pressure points to slow down the pace of progress. The next five years are critical, he says, to slowing things down enough to give us a chance to catch up if we want to contain the wave.
He says that government should also employ top talent in the industry who "are compensated competitively with the private sector" (259). Good luck with that. There is too much anti-government sentiment in this country to make that realistic, I think.
The government should have a "secretary or minister for emerging technology" (260).
We need a much more "licensed environment" (261), he says.
"We don't let any business build or operate nuclear reactors any way they see fit ... [But] today anyone can build AI. Anyone can set up a lab."
We have to create a system where "only responsible certified developers" are given access to this technology, where a license requires them to submit to "clear, binding" standards, rules, record-keeping, reporting, and inspection.
Taxation will have to change
This was honestly one of the wildest ideas I read.
"Taxation also needs to be completely overhauled to fund security and welfare as we undergo the largest transition of value creation — from labor to capital — in history" (261), Suleyman writes. "If technology creates losers, they need material compensation. Today, U.S. labor is taxed at an average rate of 25 percent, equipment and software at just 5 percent."
These ratios will need to reverse themselves, he believes.
Tech Needs to Develop a "Self-Critical Culture"
Finally, Suleyman compares the tech culture to aviation culture, and concludes that the latter has a healthy and "vigorous approach to learning from mistakes at every level" (268).
But in tech, especially when it comes to privacy or safety breaches, a "culture of secrecy takes over" when something fails.
And more broadly, Suleyman calls for more modesty in tech. It's essentially a plea that Big Tech develop a "more wary, more curious" attitude about its work, stepping back from its "just-go-for-it 'engineering mindset'" (270).
Tech must adopt the same Hippocratic oath that has guided medicine: First, do no harm.
"Pause before building, pause before publishing, review everything, sit down and hammer out the second-, third-, nth order impacts ... Be willing to stop," he advises.
This might be the hardest challenge of all. As Gioia notes, tech leaders seem to still be "caught in some time warp. They think they are like Steve Jobs launching a new Apple product in front of an adoring crowd."
"Those days are gone," Gioia wrote.
From reading Suleyman's book, it sounds like much of our future depends on how much of the tech community wakes up to this fact, even as incredible wealth — often at the expense of the working class — stares them in the face, luring them to ignore it.
Great reading of what looks like a great book.
I currently work in the AI sector re: U.S. national defense, and I had no idea this book existed. I’m pretty sure I speak for my community.
I fully agree with your thought here, Jon:
He says that government should also employ top talent in the industry who "are compensated competitively with the private sector" (259). Good luck with that. There is too much anti-government sentiment in this country to make that realistic, I think.
I’d add that there is a big three-legged stool of broad trends supporting your argument: Silicon Valley sets the pace in AI development. China has no scruples about the kind of regulatory regime in view here. And DoD is an oral culture whose top AI buyers will buy the best of what’s around.
Dr. Craig Martell, DoD’s Chief Data and AI Officer (CDAO), has led DoD down a strong middle-ground approach that requires human-centered AI use cases. The black boxes of AI’s LLMs and other generative features are not winning out, and credit to him for that.
That means China, not regulation in the U.S., will likely set the pace for how nations are forced to use AI for self-defense — or risk letting Xi use it, along with China’s powerful Military-Civil Fusion strategy, to reshape the global order.
I wrote a 200+ word comment then backed off. Happy to post here or on my Substack to drive some dialog. Because we need it.