Editor’s note: The applications of artificial intelligence, or AI, are growing rapidly and will continue to expand as the technology advances. CNHI has been producing an ongoing series looking at AI and its potential benefits and concerns in various parts of everyday life. This fifth part in the series, which can be seen on The Transylvania Times website, looks at the military and AI.
In July, a test pilot flew out from the Florida Panhandle accompanied by a wingman flying an aircraft capable of covering 3,500 miles and carrying missiles that could strike enemy targets from long range.
But the wingman wasn’t a person. It was an artificial intelligence system trained on millions of hours of military simulations.
The three-hour sortie of the XQ-58A Valkyrie was the first flight of an aircraft driven by AI and machine learning, developed by the U.S. Air Force Research Laboratory, according to the Air Force.
The aircraft doesn’t need a runway. A rocket engine propels it into flight, and its stealthy design makes it hard to detect.
But its true distinction comes from its role as a “loyal wingman,” a recently coined military term for unmanned combat aircraft capable of collaborating with the next generation of manned fighter and bomber planes.
The Valkyrie has yet to see combat, but it marks a major step toward AI-supplemented warfare in which machines could have more autonomy than ever before.
That prospect is something to be embraced, argued Col. Tucker Hamilton, the Air Force’s chief of AI test and operations, in a Valkyrie demonstration video.
“We need to recognize that AI is here,” he said. “It’s here to stay. It’s a powerful tool. The collaborative combat aircraft and that type of autonomy is revolutionary and will be the future battle space.”
The test flight follows other major demonstrations of the military’s adoption of AI.
The Army in February revealed an M1 Abrams battle tank integrated with an AI-enabled target recognition prototype. The Navy in March announced a new AI program called Project OneShip that uses machine learning to manage the large volumes of data gathered daily by ships.
Deputy Secretary of Defense Kathleen Hicks in September painted an even more cutting-edge picture of future combat. She described pods of self-propelled, solar-powered aircraft packed with sensors to provide near real-time information. Similar ground pods could scout ahead to keep human troops safe.
The urgency for developing new AI technology comes from competition with China, Hicks explained. China has spent the last 20 years building a modern military carefully crafted to “blunt the operational advantages we’ve enjoyed for decades,” she said.
Small, smart, cheap and versatile autonomous machines will play a major role in the military’s response to that threat. Aircraft like the Valkyrie cost around $4 million to produce — a fraction of the cost of top-tier bombers like the $737-million B-2 Spirit — making them expendable and easily replaceable.
They also protect human life. The team working on the aircraft’s AI system counted every military pilot killed over the decades by human error, in mishaps such as flying into terrain or colliding with other airplanes.
“Each one of those lives, that was a person that was loved by many people,” said Jessica Peterson, the technical director of the 412th Operations Group and a civilian flight test engineer. “So looking at future capabilities where the human doesn’t have to be at risk, that is a huge benefit for this community.”
KILLER ROBOTS

Many aren’t thrilled with the prospect of unmanned, AI-fueled combat. A growing number of experts warn the technology is riddled with ethical concerns about its development and use.
In May, more than 180 experts and public figures signed on to a now-famous statement from the Center for AI Safety that said “Mitigating the risk of extinction from AI should be a global priority.”
In 2018, United Nations Secretary-General António Guterres called for a ban on “killer robots” at the Paris Peace Forum. The European Parliament repeated its call for a similar ban in 2022.
“Imagine the consequences of an autonomous system that could, by itself, target and attack human beings,” Guterres said. “I call upon States to ban these weapons, which are politically unacceptable and morally repugnant.”
But fears of berserker military robots running amok aren’t a real threat right now in the U.S., argued Noah Greene, a project assistant on AI safety at the Center for a New American Security, an independent nonprofit that develops national security and defense policies.
There are real concerns about AI-powered technology accidentally killing civilians or targeting the wrong enemy on the battlefield, he explained, but those concerns are just as valid for human troops.
“You don’t need AI-enabled systems or autonomous systems for people to make mistakes,” Greene said. “I really think people should fight against the urge to kind of yield to this idea that … the U.S. military’s use of AI is going to be hugely disruptive and hugely devastating.”
The Department of Defense has signaled it’s taking the implications of AI seriously. In January, the department issued the first major update since 2012 to its “Autonomy in Weapon Systems” directive.
The policy provides guidance for Defense officials responsible for overseeing the design, development, acquisition and use of autonomous weapon systems, which must allow commanders and operators to exercise “appropriate levels of human judgment” over the use of force.
The real conundrum now is how to develop and use trustworthy AI systems when rivals like Russia and China may not adhere to the same values employed by the U.S., argued Bill Marcellino, a senior behavioral scientist at the RAND Corporation, a research group.
“Our adversaries are going to use AI without any restraints on ethics,” he said. “How much of a competitive advantage are we willing to sort of swallow in the name of safety and control, because I guarantee you (China) and Russia don’t care about those kinds of things.”
BEHIND THE SCENES

While debate roils around autonomous systems, the Department of Defense is wasting no time spending big money to develop new AI for its more routine tasks.
In fiscal year 2023, the DoD designated the technology a top modernization priority and received $1.1 billion to adopt AI into its workforce development and data management. The department is asking Congress for $1.8 billion for the same purposes in next year’s budget.

Using AI to run day-to-day tasks more efficiently and to ease access to data inside the military’s massive bureaucratic network saves time and money, argued Marcellino, who is developing new AI software for the Army to do just that for military contracts.
“Where AI can be transformative … is really getting a handle on what you’re spending, who’s spending it and where it’s being spent,” he said. “That would be really important to help the Army save a ton of money.”
The Army earlier this year also deployed a large language model, similar to ChatGPT, called Donovan inside its classified network to enable faster and more informed decision-making.
Donovan ingests orders, situation reports and intelligence reports in real time to help military staff with no specialized training understand and organize data. The system, developed by Scale AI, learns from human feedback to improve over time, according to the company.
Marcellino said those kinds of AI systems could complete, in a matter of seconds, routine work and analysis that would take a person hours. Freeing up that manpower allows the military to better tap into the time and talent of its people, he argued.
And that will be crucial as the military moves fast to develop and adopt new technology to compete with China, explained Deputy Secretary of Defense Hicks.
“The one advantage that they can never blunt, steal or copy — because it’s embedded in our people — is American ingenuity: our ability to … imagine, create and master the future character of warfare,” she said.