An AI-controlled warplane has taken a senior air force leader for a ride in a groundbreaking test flight over California.
Air Force Secretary Frank Kendall sat in the cockpit as the experimental F-16 jet, called Vista, flew lightning-fast manoeuvres at more than 550mph over Edwards Air Force Base.
It went nearly nose to nose with a second human-piloted F-16 as both raced within 1,000 feet of each other, twisting and looping to try to force their opponent into vulnerable positions.
Mr Kendall’s flight was a further statement of confidence in artificial intelligence after the first known combat between a human pilot and a fighter jet controlled by AI last month.
Thursday’s flight lasted an hour, and the US Air Force hopes to field more than 1,000 AI-controlled jets in the coming years.
“It’s a security risk not to have it. At this point, we have to have it,” Mr Kendall said after he climbed grinning from the cockpit.
The pilots working on Vista want the first fleet to be ready by 2028 and say the programmes are learning so quickly that some are already beating human pilots in combat.
The idea is that unmanned aircraft could provide an advance attack on enemy defences and penetrate airspace without high risk to human pilots.
But the shift is also driven by cost, as the AI planes are smaller and cheaper to produce.
The US Air Force is still hampered by delays and cost overruns for the F-35 Joint Strike Fighter programme, which will cost an estimated $1.7 trillion (£1.35 trillion).
Meanwhile, China’s air force is on pace to outnumber the US and is also developing unmanned weapons – though there is no indication yet that it has found a way to run AI tests outside a simulator.
‘Concerns over life-and-death decisions’
Vista’s operators, who have flown it around two dozen times since September, say no other country has a similar AI jet – where the software learns on millions of data points in a simulator then tests its conclusions on real flights.
The real-world performance data is put back into the simulator where the AI processes it to learn more.
Mr Kendall was so impressed that he said he would trust the AI with deciding whether to launch weapons in war.
It’s a controversial take. Arms control experts and humanitarian groups are concerned that AI might one day be able to autonomously drop bombs without further human consultation and are seeking restrictions on its use.
“There are widespread and serious concerns about ceding life-and-death decisions to sensors and software,” the International Committee of the Red Cross has warned.
Mr Kendall said there would always be human oversight when weapons are used.
The pilots programming Vista are aware they are potentially training their own replacements, but say they would dread going up against an enemy’s AI fleet themselves.
“We have to keep running. And we have to run fast,” Mr Kendall said.