The Pentagon’s AI Chief Prepares for Battle

By Elias Groll | December 18, 2019

Lt. Gen. Jack Shanahan doesn't want killer robots—but he does want artificial intelligence to occupy a central role in warfighting.

Nearly every day, in war zones around the world, American military forces request fire support. By radioing coordinates to a howitzer miles away, infantrymen can deliver the awful ruin of a 155-mm artillery shell on opposing forces. If defense officials in Washington have their way, artificial intelligence is about to make that process a whole lot faster.

The effort to speed up fire support is one of a handful of initiatives that Lt. Gen. Jack Shanahan describes as the “lower consequence missions” that the Pentagon is using to demonstrate how it can integrate artificial intelligence into its weapons systems. As the head of the Joint Artificial Intelligence Center, a 140-person clearinghouse within the Department of Defense focused on speeding up AI adoption, Shanahan and his team are building applications in well-established AI domains—tools for predictive maintenance and health record analysis—but also venturing into the more exotic, pursuing AI capabilities that would make the technology a centerpiece of American warfighting.

Shanahan envisions an American military that uses AI to move much faster. Where once human intelligence analysts might have stared at a screen to identify and track a target, a computer would do that task. Today, a human officer might present options for what weapons to employ against an enemy; within 20 years or so, a computer could present “recommendations as fast as possible to a human to make decisions about employing weapons,” Shanahan told WIRED in an interview this month. Multiple command and control systems that track battlefield conditions are to be unified into one.

It’s not a vision for killer robots deciding who lives and dies. It’s more like Waze, but for war. Or as Shanahan put it: “As much machine-to-machine interaction as is possible to allow humans to be presented with various courses of actions for decision.”

The hurdles for implementing that plan are legion. The massive data sets needed to build those computer vision and decisionmaking algorithms are rarely of the necessary quality. And algorithms are only as good as the data sets upon which they are built.

Perhaps more profoundly, the military integration of intelligent computer systems raises questions about whether some realms of human life, such as the violent taking of it, should be computer-enabled. “That loss of human control moves us into questions of authorization and accountability we haven't worked out yet,” says Peter Singer, a defense analyst and coauthor of the forthcoming techno-thriller Burn-In.

"Twenty years from now we'll be looking at algorithms versus algorithms."

Jack Shanahan, JAIC

These ethical questions have exposed a divide within Silicon Valley about working with the Pentagon on artificial intelligence initiatives. Before he headed up the JAIC, Shanahan ran Project Maven, the computer vision project that aimed to take reams of aerial surveillance footage and automate the detection of enemy forces. Facing an employee uproar, Google pulled out of that project in 2018, but that hasn’t stopped the initiative from moving forward. Just last week, Business Insider reported that Palantir, Peter Thiel’s data analytics company, has taken over the contract.

The sheer size of Pentagon spending on AI—difficult to determine exactly but estimated at $4 billion for fiscal year 2020—makes it unlikely any of the tech giants will stay away for long. Despite having pulled out of Maven, Google executives maintain that their company would very much like to work with the Pentagon. “We are eager to do more,” Google senior vice president Kent Walker told a National Security Commission on Artificial Intelligence conference last month. Meanwhile, Amazon CEO Jeff Bezos is using the issue to distinguish his company as one that won’t shy from the controversy of taking on military work. “If Big Tech is going to turn their backs on the Department of Defense, this country is in trouble,” he said during remarks at the Reagan National Defense Forum earlier this month.

Bezos’s public embrace of the Pentagon comes as Amazon is challenging the award of a $10 billion cloud computing contract called JEDI, or the Joint Enterprise Defense Infrastructure, to Microsoft. That system will be key to Shanahan’s AI ambitions, giving him the computing power and the shared infrastructure to crunch massive data sets and unify disparate systems.

It was the lack of such a cloud system that convinced Shanahan of its importance. When he ran Maven, he couldn’t digitally access the surveillance footage he needed, instead having to dispatch his subordinates to fetch it. “We had cases where we had trucks going around and picking up tapes of full-motion video,” Shanahan says. “That would have been a hell of a lot easier had there been an enterprise cloud solution.”

To push updates to the system, Shanahan’s team similarly had to travel to physically install newer versions at military installations. Today, Maven is getting software updates every month or so—fast for government work, but still not fast enough, he adds.

But JEDI isn’t going to solve all of Shanahan’s problems, chief among them the poor quality of data. Take just one JAIC project, a predictive maintenance tool for the military’s ubiquitous UH-60 Black Hawk helicopter that tries to figure out when key components are about to break. When they started collecting data from across the various branches, Shanahan’s team discovered that the Army’s Black Hawk was instrumented slightly differently than a version used by Special Operations Command, generating different data for machines that are essentially identical.

“In every single instance the data is never quite in the quality that you’re looking for,” he says. “If it exists, I have not seen a pristine set of data yet.”
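
To make that concrete, here is a minimal sketch, not anything drawn from the JAIC’s actual systems, of the kind of harmonization step such a mismatch forces: two hypothetical record formats, with invented field names and units, mapped into a single schema before any model is trained.

```python
# A minimal sketch, not JAIC code: normalizing telemetry from two hypothetically
# different Black Hawk instrumentation setups into one schema before training a
# predictive-maintenance model. All field names, units, and values are invented.
from dataclasses import dataclass


@dataclass
class EngineReading:
    tail_number: str
    flight_hours: float
    turbine_temp_c: float  # common schema keeps temperature in Celsius


def from_army_record(rec: dict) -> EngineReading:
    # Hypothetical Army format: temperature already recorded in Celsius.
    return EngineReading(rec["tail"], rec["hours"], rec["turbine_temp_c"])


def from_socom_record(rec: dict) -> EngineReading:
    # Hypothetical SOCOM format: different field names, Fahrenheit temperature.
    return EngineReading(
        rec["aircraft_id"],
        rec["engine_hours"],
        (rec["turbine_temp_f"] - 32.0) * 5.0 / 9.0,
    )


if __name__ == "__main__":
    army = [{"tail": "A-101", "hours": 812.5, "turbine_temp_c": 640.0}]
    socom = [{"aircraft_id": "S-7", "engine_hours": 432.0, "turbine_temp_f": 1202.0}]
    unified = [from_army_record(r) for r in army] + [from_socom_record(r) for r in socom]
    for row in unified:
        print(row)
```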

Data quality is one of the chief pitfalls in applying artificial intelligence to military systems; a computer will never know what it doesn’t know. “There are risks that algorithms trained on historical data might face battlefield conditions that are different than the one it trained on,” says Michael Horowitz, a professor at the University of Pennsylvania.

Shanahan argues a rigorous testing and evaluation program will mitigate that risk, and it might very well be manageable when trying to predict the moment an engine blade will crack. But it becomes a different question entirely in a shooting war fought at a scale and speed the AI has never seen.
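
The risk Horowitz describes is what machine-learning researchers call distribution shift. The toy sketch below, built on entirely synthetic data rather than anything military, shows how a simple model tuned on “historical” conditions can collapse when the conditions it encounters no longer resemble the ones it trained on.

```python
# A toy illustration of distribution shift; the data and "model" are synthetic,
# not anything fielded by the DoD.
import numpy as np

rng = np.random.default_rng(1)

# "Historical" training data: positive cases cluster above a feature value of 1.
train_x = rng.normal(loc=2.0, scale=1.0, size=1000)
train_y = (train_x > 1.0).astype(float)

# Fit the simplest possible classifier: pick the threshold that maximizes
# accuracy on the training data.
thresholds = np.linspace(train_x.min(), train_x.max(), 200)
accs = [((train_x > t).astype(float) == train_y).mean() for t in thresholds]
best_t = thresholds[int(np.argmax(accs))]


def evaluate(x, y):
    return float(((x > best_t).astype(float) == y).mean())


print("training accuracy:", evaluate(train_x, train_y))

# "Battlefield" data drawn from a shifted distribution: the boundary the model
# learned no longer separates the classes, so accuracy collapses even though
# nothing about the model changed.
shift_x = rng.normal(loc=-1.0, scale=1.0, size=1000)
shift_y = (shift_x > -2.0).astype(float)
print("shifted-distribution accuracy:", evaluate(shift_x, shift_y))
```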

The sometimes unpredictable nature of computer reasoning presents a thorny problem when paired with the mind of a human being. A computer may reach a baffling conclusion, one that the human who has been teamed with it has to decide whether to trust. When Google’s AlphaGo defeated Lee Sedol, one of the world’s best Go players, in 2016, there was a moment in the match when Lee simply stood up from his chair and left the room. His computer adversary had made such an ingenious and unexpected move (from a human perspective) that Lee was flummoxed. “I’ve never seen a human play this move,” one observer said. “So beautiful.”

Imagine a weapons system giving a human commander a similarly incomprehensible course of action in the heat of a high-stakes conflict. It’s a problem the US military is actively working on, but one for which it doesn’t have a ready solution. The Defense Advanced Research Projects Agency is working on an “explainable AI” program, which aims to turn a black-box machine-learning system into one that can lay out the reasoning behind its decisions.
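
DARPA has not published a single recipe for this, but as a rough illustration of what “laying out the reasoning” can mean in practice, the sketch below applies one common post-hoc technique, permutation importance, to a synthetic model: shuffle each input in turn and see how much the model’s accuracy suffers. It is a toy under stated assumptions, not the DARPA program’s method.

```python
# A toy sketch of one common post-hoc explainability technique, permutation
# importance. The data and the stand-in "model" are synthetic and unrelated to
# any DARPA or DoD system.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "sensor" features; only feature 0 actually drives the label.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(float)


def model(X):
    # Stand-in for a trained classifier: thresholds feature 0.
    return (X[:, 0] > 0).astype(float)


def accuracy(pred, y):
    return float((pred == y).mean())


baseline = accuracy(model(X), y)
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break this feature's link to the label
    drop = baseline - accuracy(model(X_perm), y)
    print(f"feature {j}: accuracy drop when shuffled = {drop:.3f}")
```

Run as written, only shuffling feature 0 hurts accuracy, which is the model’s way of “explaining” that its decision rests on that input alone.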

To build that trust, Shanahan notes commanders need to be educated in the technology early on. Projects using computer vision and satellite imagery to understand flooding and wildfire risks allow his team to learn by doing and build up expertise. “You have to understand the art of the possible or else it's all science fiction,” he says.

But key bureaucratic hurdles also stand in Shanahan’s way. A congressionally mandated report on the Pentagon’s AI initiatives released this week finds that the DoD lacks “baselines and metrics” to assess progress, that the JAIC’s role within the DoD ecosystem remains unclear, and that the JAIC lacks the authority to deliver on its goals. It also offers a dismal assessment of the Pentagon’s testing and verification regime as “nowhere close to ensuring the performance and safety of AI applications, particularly where safety-critical systems are concerned.”

In a statement, the Pentagon welcomed the report, which speaks to the immense challenges facing the US military in embracing a technology that it sees as integral to a possible conflict with Russia or China. “The speed, the op tempo of that conflict will be so fast,” Shanahan says. “Twenty years from now we'll be looking at algorithms versus algorithms.”

The US response to Beijing relies in part on automation. The Army is testing an automated gun turret. The Air Force is developing a drone wingman. The Navy’s “Ghost Fleet” concept is looking into unmanned surface vessels. To get faster, the Pentagon is once again turning to computers.

“The ultimate question we have to ask ourselves is what level of accuracy is acceptable for software,” says Martijn Rasser, a former CIA analyst and a fellow at the Center for a New American Security. “Let’s say a human being is correct 99.99 percent of the time. Is it fine for the software to be the same, or does it need to be an order of magnitude better?”

These are questions the Pentagon is exploring. An October report from the Defense Innovation Board laid out a series of principles for how the military might ethically adopt AI. Shanahan wants to hire an ethicist to join the JAIC, and he is at pains to emphasize that he is tuned into the ethical debates around military AI. He says he remains fundamentally opposed to what would be popularly thought of as “killer robots” and what he calls “an unsupervised independent self-targeting system making life-or-death decisions.”

He remains an optimist. “Humans make mistakes in combat every single day. Bad things happen. It’s chaotic. Emotions run high. Friends are dying. We make mistakes,” Shanahan says. “I am in the camp that says we can do a lot to help reduce the potential for those mistakes with AI-enabled capabilities—never eliminate.”
