Cold War Analogies Are Warping Tech Policy
By Justin Sherman | September 5, 2019
Opinion: Politicians’ and pundits’ fixation on flawed Cold War metaphors has produced overly combative policies on emerging tech.
Stock your bunkers, America, we’re back in the Cold War. Or many Cold Wars, it seems. Pundits and politicians alike declaim that we’re locked in a “new Cold War” with China, that we’re in the throes of a “cyber arms race” with the rest of the world, and that Russia’s election interference is, of course, today’s version of the 1960s contest over political ideology.
Justin Sherman (@jshermcyber) is a Cybersecurity Policy Fellow at New America.
These tempting, easy-to-understand Cold War metaphors pervade policy discourse around emerging technologies like artificial intelligence and quantum computing. Peter Thiel notably deployed such metaphors in his recent (quite flawed) New York Times op-ed about AI and national security. Despite asserting that a Cold War mentality “stopped making sense” years ago, Thiel goes on to describe US–China AI development as if it were a zero-sum military arms race much like the one between 20th-century America and the Soviet Union.
While seemingly innocuous, these kinds of faulty Cold War analogies have led to some plainly wrong thinking about tech policy. To be clear, there’s obvious instructive value in recognizing similarities between past and present. But to be instructive, the similarities need to be real, and with Cold War analogies and emerging technologies, more often than not they aren’t. It’s time for policy wonks and technologists alike to ditch these wrongheaded fixations.
It’s understandable why policymakers, or anyone, would turn to the old and familiar to understand the new and scary. As technologies disrupt our everyday lives and contemporary geopolitics, it’s important to avoid unnecessary fear and confusion by looking to lessons from the past. For those who grew up during the Cold War, or who are looking to glean lessons from it, these analogies may be comforting. We’ve been here before. But we haven’t. The Cold War simmered when the groundwork for the internet was barely being laid and televisions had only a few channels. Much of the world has been remade since, in large part by the very technologies we now compare to the Cold War.
Analogies have documented value in problem-solving and policymaking, but they can also be dangerous. A Stanford study found, for instance, that framing crime as a virus leads people toward policy solutions like treating symptoms, while framing crime as a beast leads policymakers to approach it as a threat to be forcefully put down and eliminated. The framing of problems, their causes, and potential solutions is of vital importance in policy decision-making. Oversimplification and mischaracterization can therefore lead to bad policy.
That’s exactly what we’re seeing policymakers do with cyberspace technologies, artificial intelligence, and quantum computing—which is why we must apply far more scrutiny to comfortable historical analogies that mischaracterize reality.
Cyberspace has been compared to the Cold War for well over a decade, with comparisons spanning everything from weapons stockpiling to information conflict. While she was Secretary of State, for instance, Hillary Clinton criticized Chinese internet censorship with strong references to an “information Iron Curtain.” Noah Shachtman and Peter W. Singer thoroughly dismantled this misapplication of analogies back in 2011, writing for the Brookings Institution that with cyberspace, “the song is not the same and the historic fit to the Cold War is actually not so neat.” As they explained, from the nature of global cyber competition, which centers on companies and individuals as well as governments, to the barrier to entry into that competition (much lower online than with building nuclear missiles), the analogy doesn’t work. Nonetheless, Cold War comparisons to cyberspace persist, from CNN headlines to the mouth of chess champion Garry Kasparov. The allure of such analogies is apparently strong.
Artificial intelligence also regularly falls victim to Cold War analogies. Discussions of AI development, especially between the US and China, as an “arms race” or a new Cold War proliferate in op-eds, think tank reports, and the mouths of Trump administration officials. Yet AI tools (at least presently) can’t kill like a nuclear weapon, and the development of AI tools isn’t nationally isolated. Given the deep interconnection between the US and Chinese technology sectors, science and technology research is anything but zero-sum. Moreover, AI capabilities are widespread in the commercial market and easily shared online, which is not exactly the case with ICBMs.
More alarming is that the arms race analogy has led some federal policymakers to over-focus on AI’s military applications, despite the dual-use nature of many AI technologies (i.e., simultaneous military and civilian utility). It has also led to other bad policy thinking, such as sweeping export control proposals from the Senate aiming to limit the spread of American AI tools, on the false premise that military applications are easily distinguishable from civilian ones. Likewise, talk of US–China AI development as a “new Cold War” has led to bad strategic thinking around “decoupling” the US and Chinese AI sectors. If there are analogies to be applied from the Cold War to AI, these aren’t the ones.
I’ve also been at several workshops where policymakers compare quantum computing, which promises enormous leaps in computing power for certain classes of problems, to nuclear weapons technology. As their logic goes, nuclear nonproliferation and counterproliferation efforts, which aimed to prevent the acquisition, spread, and retention of nuclear arms capabilities, could also be applied to quantum computing. This is presumably based on the fact that powerful quantum computers, alongside driving breakthroughs in areas like chemical modeling, could potentially break much of the encryption that secures the internet. Security risks exist alongside potential economic gains.
Yet the analogy to nuclear weapons is again a mischaracterization. Quantum computers don’t kill hundreds of thousands or millions of people when used. They’re developed in corporate research labs and universities, not just secret government facilities. When a quantum computer is tested, it isn’t evident on the world stage in the way a nuclear explosion is. Perhaps there is something to be learned from the Cold War here, like the value of preserving the US’s scientific and economic openness, but the comparison again falls short.
Many policymakers take these Cold War analogies as hard truth, consequently misunderstanding everything from the global threat landscape to the ways in which particular technologies should be regulated. (These analogies cause problems well beyond tech, too, such as the mistaken belief that Beijing’s vision for global power is predicated on the US’s downfall.) Before any of us (journalists, policy analysts, technologists) start throwing around historical analogies to describe the latest tech, we ought to recognize what is new and different alongside lessons from the past, and to remember the policy impact these analogies carry.
WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Submit an op-ed at opinion@wired.com.