
Wednesday, December 16, 2020

AI Just Controlled a Military Plane for the First Time Ever

 


As reported by Popular Mechanics: On December 15, the United States Air Force successfully flew an AI copilot on a U-2 spy plane in California, marking the first time AI has controlled a U.S. military system. In this Popular Mechanics exclusive, Dr. Will Roper, the Assistant Secretary of the Air Force for Acquisition, Technology and Logistics, reveals how he and his team made history.

For Star Wars fans, an X-Wing fighter isn’t complete without R2-D2. Whether you need to fire up converters, increase power, or fix a broken stabilizer, that trusty droid, full of lively beeps and squeaks, is the ultimate copilot.

Teaming artificial intelligence (AI) with pilots is no longer just a matter for science fiction or blockbuster movies. On Tuesday, December 15, the Air Force successfully flew an AI copilot on a U-2 spy plane in California: the first time AI has controlled a U.S. military system.

The flight, which followed over a million training runs, was a small step for the computerized copilot, but a giant leap for “computerkind” in future military operations.

The U.S. military has historically struggled to develop digital capabilities. It’s hard to believe difficult-to-code computers and hard-to-access data—much less AI—held back the world’s most lethal hardware not so long ago in an Air Force not far, far away.

But starting three years ago, the Air Force took its own giant leap toward the digital age. Finally cracking the code on military software, we built the Pentagon’s first commercially-inspired development teams, coding clouds, and even a combat internet that downed a cruise missile at blistering machine speeds. But our recent AI demo is one for military record books and science fiction fans alike.

With call sign ARTUµ, we trained µZero—a world-leading computer program that dominates chess, Go, and even video games without prior knowledge of their rules—to operate a U-2 spy plane. Though lacking those lively beeps and squeaks, ARTUµ surpassed its motion picture namesake in one distinctive feature: it was the mission commander, the final decision authority on the human-machine team. And given the high stakes of global AI, surpassing science fiction must become our military norm.

Our demo flew a reconnaissance mission during a simulated missile strike at Beale Air Force Base on Tuesday. ARTUµ searched for enemy launchers while our pilot searched for threatening aircraft, both sharing the U-2’s radar. With no pilot override, ARTUµ made final calls on devoting the radar to missile hunting versus self-protection. Luke Skywalker certainly never took such orders from his X-Wing sidekick!

The fact ARTUµ was in command was less about any particular mission than how completely our military must embrace AI to maintain the battlefield decision advantage. Unlike Han Solo’s “never-tell-me-the-odds” snub of C-3PO’s asteroid field survival rate (approximately 3,720 to 1), our warfighters need to know the odds in dizzyingly-complex combat scenarios. Teaming with trusted AI across all facets of conflict—even occasionally putting it in charge—could tip those odds in our favor.

But to trust AI, software design is key. Like a breaker box for code, the U-2 gave ARTUµ complete radar control while “switching off” access to other subsystems. Had the scenario been navigating an asteroid field—or more likely field of enemy radars—those “on-off” switches could adjust. The design allows operators to choose what AI won’t do to accept the operational risk of what it will. Creating this software breaker box—instead of Pandora’s—has been an Air Force journey of more than a few parsecs.
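To make the breaker-box idea concrete, here is a minimal sketch in Python. Every name and interface below is invented for illustration, not the actual U-2 flight software; the point is simply that the AI can only command subsystems whose breakers an operator has switched on.

# Illustrative "software breaker box": an AI agent can only command
# subsystems whose breakers an operator has explicitly switched on.
# All names here are hypothetical, not actual U-2 flight software.

class SubsystemBreakerBox:
    def __init__(self):
        self._subsystems = {}   # name -> object exposing commands
        self._enabled = {}      # name -> breaker on/off

    def register(self, name, subsystem, enabled=False):
        self._subsystems[name] = subsystem
        self._enabled[name] = enabled

    def set_breaker(self, name, enabled):
        # Operators choose what the AI will NOT do before accepting the risk of what it will.
        self._enabled[name] = enabled

    def command(self, name, action, *args, **kwargs):
        # Refuse any call to a subsystem whose breaker is off.
        if not self._enabled.get(name, False):
            raise PermissionError(f"breaker for '{name}' is off; command refused")
        return getattr(self._subsystems[name], action)(*args, **kwargs)

class Radar:
    def allocate(self, task):
        return f"radar allocated to {task}"

box = SubsystemBreakerBox()
box.register("radar", Radar(), enabled=True)   # the AI gets full radar control...
box.register("flight_controls", object())      # ...while everything else stays switched off

print(box.command("radar", "allocate", "missile hunting"))
# box.command("flight_controls", "bank", 30) would raise PermissionError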
Dr. Jeannine Abira, U-2 Federal Laboratory Director of Advanced Mathematics and Algorithm Development (left), and Dr. Jesse Angle, U-2 Federal Laboratory Technical Director (right), work on a computer Sep. 21, 2020, at Beale Air Force Base, California. The U-2 Federal Laboratory is a 15 U.S.C. compliant organization that promotes “edge development,” a concept for developing and integrating new software on operational systems.
A1C LUIS A. RUIZ-VAZQUEZ
U.S. Air Force Gen. Mark Kelly, right, commander of Air Combat Command, and U.S. Air Force Command Chief Master Sgt. David Wade, Air Combat Command, receive a brief from U-2 Federal Laboratory staff about the organization’s stand-up and recent projects, Dec. 4, 2020, at Beale Air Force Base, California.
U.S. AIR FORCE PHOTO BY STAFF SGT. COLVILLE MCFEE

The journey began early in 2018, when I approved a hoodie-wearing Air Force team (fittingly named Kessel Run for a Star Wars smuggling route) to “smuggle” commercial DevSecOps software practices into our Air Operations Center. By merging development, security, and operations using modern information technology, DevSecOps produced higher-quality code faster and more continuously. Sounds perfect for a digitally-challenged Pentagon, right?

You’d think. Kessel Run bent all the rules and definitely “shot first” at the Pentagon’s fixation on five-year development plans with crippling baselines. As Han Solo advocated, keeping momentum sometimes required a good blaster at our side. Thankfully, Kessel Run’s results were game-changing, outpacing previous programs and inspiring a generation of Air Force and Space Force DevSecOps teams, including our U-2 FedLab.

"GIVEN THE HIGH STAKES OF GLOBAL AI, SURPASSING SCIENCE FICTION MUST BECOME OUR MILITARY NORM."

But coding effectively is only one element of trusted AI design. A year later, I directed a Service-wide adoption of coding clouds built on two landmark technologies: containerization and Kubernetes. Containers virtualize and isolate everything code needs to run, and Kubernetes then orchestrates them, selectively powering disparate software like a dynamic-but-secure breaker box.
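The Air Force has not published its actual manifests, but a toy example using the official Kubernetes Python client shows the flavor: each workload is declared as an isolated container with explicit resource limits, and the cluster handles the orchestration. The image name and limits below are invented for illustration.

# Toy example: declaring a containerized workload for Kubernetes to orchestrate,
# using the official Python client. The image name and limits are hypothetical,
# not an actual Air Force deployment.
from kubernetes import client, config

config.load_kube_config()  # assumes credentials for a cluster (a "coding cloud")

container = client.V1Container(
    name="radar-agent",
    image="example.registry/radar-agent:1.0",     # hypothetical image
    resources=client.V1ResourceRequirements(      # isolate what this code may consume
        limits={"cpu": "2", "memory": "4Gi"},
    ),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="radar-agent", labels={"app": "demo"}),
    spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)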

U.S. Air Force Maj. “Vudu”, U-2 Dragon Lady pilot for the 9th Reconnaissance Wing, enters the cockpit while a 9th Physiological Support Airman assists him at Beale Air Force Base, California, Dec. 15, 2020.

Running ARTUµ containers in our FedLab cloud also proved they would run identically on the U-2—no lengthy safety or interference checks required! This is how we get evolving software—especially AI—out of our clouds and safely onto planes flying through them.

Yet this trusted design didn’t create ARTUµ’s copilot abilities. You have to train for that. Like a digital Yoda, our small-but-mighty U-2 FedLab trained µZero’s gaming algorithms to operate a radar—reconstructing them to learn the good side of reconnaissance (enemies found) from the dark side (U-2s lost)—all while interacting with a pilot. Running over a million training simulations at their “digital Dagobah,” they had ARTUµ mission-ready in just over a month.
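FedLab has not released ARTUµ’s training code, and µZero-class algorithms involve learned models and tree search well beyond a blog post. Still, the reward structure described above, credit for enemies found and a steep penalty for losing the U-2, can be sketched with a toy learning loop in Python; the simulated environment and every number below are invented.

import random

# Toy learning loop illustrating the reward structure described above:
# positive reward for enemy launchers found, a large penalty if the U-2 is lost.
# The environment and numbers are invented; ARTUµ was built on µZero-style
# algorithms with learned models and search, not this simple update rule.

ACTIONS = ["hunt_missiles", "self_protect"]
REWARD_ENEMY_FOUND = 1.0
REWARD_U2_LOST = -10.0

def simulate_decision(action):
    # Stand-in for one radar-allocation decision in the simulator.
    threat_present = random.random() < 0.3
    if action == "hunt_missiles":
        return REWARD_U2_LOST if threat_present else REWARD_ENEMY_FOUND * random.random()
    return 0.0 if threat_present else -0.1   # self-protecting wastes radar time when safe

values = {a: 0.0 for a in ACTIONS}           # estimated value of each choice
alpha, epsilon = 0.1, 0.1                    # learning rate, exploration rate

for run in range(1_000_000):                 # "over a million training runs"
    action = (random.choice(ACTIONS) if random.random() < epsilon
              else max(values, key=values.get))
    reward = simulate_decision(action)
    values[action] += alpha * (reward - values[action])

print(values)   # the learned preference between missile hunting and self-protection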

So my recent U-2 AI pathfinder—and military AI more generally—was really a three-year journey to becoming a software-savvy Air Force. But why not skip computerized copilots and wingmen and create a purely autonomous Force? After all, a computer won DARPA’s recent dogfight, and we’re already developing autonomous mini-fighters in our Skyborg program.

That autonomous future will happen eventually. But today’s AI can be easily fooled by adversary tactics, precisely what future warfare will throw at it.



U.S. Air Force Maj. “Vudu”, U-2 Dragon Lady pilot for the 9th Reconnaissance Wing, prepares to taxi after returning from a training sortie at Beale Air Force Base, California, Dec. 15, 2020.
A1C LUIS A. RUIZ-VAZQUEZ
As in board or video games, human pilots could only try to outperform DARPA’s AI while obeying the rules of the dogfighting simulation, rules the AI had algorithmically learned and mastered. The loss is a wake-up call for new digital trickery that outfoxes machine learning principles themselves. Even R2-D2 confused computer terminals with harmful power sockets!
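One well-studied form of that trickery is the adversarial example: a small, deliberate nudge to a model’s input that flips its decision. The sketch below shows the idea against a toy linear classifier in Python; the weights, input, and perturbation are all invented and have nothing to do with any fielded system.

import numpy as np

# Toy adversarial perturbation against a linear "threat classifier".
# Everything here (weights, input, perturbation size) is invented; the point
# is only that a small, targeted nudge can flip a learned model's decision.

rng = np.random.default_rng(0)
w = rng.normal(size=16)          # stand-in for learned model weights
x = rng.normal(size=16)          # stand-in for one sensor input

def score(v):
    return float(w @ v)          # > 0 means "threat", <= 0 means "no threat"

# For a linear model the score's gradient with respect to the input is just w,
# so pushing each feature slightly against the current decision flips it.
direction = -np.sign(score(x)) * np.sign(w)
epsilon = 1.1 * abs(score(x)) / np.sum(np.abs(w))   # just enough to cross the boundary
x_adv = x + epsilon * direction

print("clean score:    ", score(x))
print("perturbed score:", score(x_adv))             # opposite sign, small input change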

As we complete our first generation of AI, we must also work on algorithmic stealth and countermeasures to defeat it. Though those countermeasures will likely be as invisible to human pilots as radar beams and jammer strobes, our pilots will need similar instincts for them, and for flying with and against first-generation AI, as we invent the next. Algorithmic warfare has begun.

Now if only we could master those hyperdrives, too.
