The U.S. Air Force’s development of a pilotless aircraft run by artificial intelligence (AI) has the potential to give American forces the upper hand in any conflict, but it also raises ethical questions about how such powerful technology should be deployed on the battlefield.
“This technology is something we’ll need for the future of defense,” Phil Siegel, an AI expert and the founder of the Center For Advanced Preparedness and Threat Response Simulation, told Fox News Digital.
Siegel’s comments come as the Air Force continues development of the XQ-58A Valkyrie, an experimental stealth aircraft run by artificial intelligence that the U.S. hopes can serve as a relatively inexpensive weapon to limit losses of manned planes and pilots in a conflict with a near-peer rival such as China.
The unmanned AI-run aircraft is being developed by Kratos Defense & Security Solutions, which was awarded a contract to develop the platform in 2016. The first successful test flight of the aircraft took place in March 2019, with the aircraft behaving “as expected” after completing a 76-minute test run.
Since then, development of the aircraft has continued, including a recent test that saw the plane fly in formation alongside an F-15E Strike Eagle, according to a report from Defence Connect. Proving that capability was an important step for the Air Force, which envisions the AI platform as something of a wingman for manned planes, one that can identify threats, engage targets and even absorb enemy fire for the manned aircraft, if necessary.
One of the aircraft’s advantages is its relatively low cost: Kratos estimates each unit would run roughly $4 million if the company produced 50 planes annually, according to a report from The Drive. That price tag compares favorably to the MQ-9 Reaper drone, which the Pentagon estimates costs $30 million per unit, while an F-35 fighter costs about $80 million.
Steve Fendley, president of Kratos Unmanned, told Fox News Digital the company has developed an aircraft that comes at a price point well below what the U.S. government required while still delivering a platform that has “comprehensive capability.”
“The economy of scale is incredible, both from the cost in the first place of the unmanned system because you don’t have any of the expenses of life-sustaining systems and the reliability level that you need for a manned system versus an unmanned system,” Fendley said. “On top of that, because you have dispersed the capabilities, distributed the capabilities, you spread that cost over a much wider geometric space, which means the risk to the total is very, very low compared to the equivalent risk for [a] single manned system.”
“The ability to use assets more aggressively without the cost or risks of casualties will add tremendous capabilities to military planners,” Christopher Alexander, the chief analytics officer at Pioneer Development Group, told Fox News Digital.
Alexander says the XQ-58A is an “amazing proof of concept for a revolutionary Pentagon strategy that relies on less expensive solutions in conflict,” but he noted that use of the platform “also raises important ethical concerns.”
Military planners have raised concerns about how much autonomy a lethal weapon run by AI should be granted, a concern highlighted by the U.S. drone program, which has at times been criticized for its record of inflicting civilian casualties.
Alexander says those concerns have so far “been well managed by the Department of Defense,” and Siegel noted that further development will be needed if the role of AI-run aircraft expands from defensive to offensive capabilities.
“The key is what objectives do we put into the technology,” Siegel said, noting that the inexpensive platforms have so far concentrated on defending manned flight teams. “More expensive equipment and intelligence might be in other devices that have offensive and defensive capabilities. But the question then is how to weigh these objectives when it has multiple roles. Then it needs instructions on what to do when a battle goes outside its training space, or what happens if its sensors or cameras are disabled and data is not available.”
While Fendley pointed out that Kratos partners with other vendors that develop the AI that will fly its aircraft, he believes sufficient safeguards have been put in place, noting that the systems in development will require human input before they are able to make certain decisions.
“There’s a lot of concern, and rightfully so, because I think in many cases our enemies are going to be much more free with what they allow the artificial intelligence agents to do, and I don’t think our country will do that,” Fendley said. “What’s very important to understand is having the capability doesn’t mean that you do it. It’s very easy to put in a system that could be capable of deploying weapons without asking anybody. It’s also very easy to have a constraint system in there.”
Nevertheless, military planners believe the need for such technology is growing if the U.S. is going to continue being capable of confronting near-peer adversaries such as China, which has invested considerable resources into air defense systems that could make it difficult for American forces to operate in a conflict without taking heavy equipment and human losses.
“We need to move in this direction of adding AI — both within equipment and as a help to our soldiers making decisions,” Siegel said. “Like all AI, we will need to both provide it performance objectives and give it instructions for both high-information and low-information scenarios. That’s the reality of battle.”