Is the U.S. Ready for the Next War?
Late this spring, I was led into a car in Kyiv, blindfolded, and driven to a secret factory in western Ukraine. The facility belongs to TAF Drones, founded three years ago by Oleksandr Yakovenko, a young Ukrainian businessman who wanted to help fend off the Russian invasion. When the war started, Yakovenko was busy running a logistics company in Odesa, but his country needed all the help it could get. Ukraine was overmatched—fighting a larger, wealthier adversary with a bigger army and more sophisticated weapons. “The government said to me, ‘We need you to make drones,’ ” Yakovenko told me. “So I said to my guys, ‘You have four hours to make up your minds. Leave or stay—and, if you stay, promise me that you’ll do your best to help our military.’ ”
Yakovenko’s task was to set up factories to mass-produce unmanned vehicles, designed to overwhelm whatever Russia sent across the border. When I visited his fab, as the plants are called, more than a hundred employees, many of them women, were working intently in a setting that seemed more college campus than munitions factory. With techno music humming in the background, they tended to 3-D printers, assembled carbon-fibre components, carried out flight simulations, adjusted video cameras and radio transmitters. “It’s quite meditative,” one of the women told me.
The TAF fabs are part of a constellation of similar facilities, hidden in basements, warehouses, and old factories, which have helped the Ukrainians battle the Russian Army to a stalemate. The one that I visited makes about a thousand drones a day. They are sophisticated and lethal and, above all, cheap, produced for about five hundred dollars apiece. Some are used for surveillance and some to ferry supplies, but most of them, laden with explosives and directed by an operator through a video screen, are crashed directly into their targets. One of Yakovenko’s managers showed me a fuzzy black-and-white video, taken in April, of a night operation behind enemy lines. Onscreen, a drone equipped with a thermal camera dived toward a TOS-1 rocket launcher, and then the screen exploded in a white flash. Russia builds TOS-1 units for about five million dollars apiece. “One of our drones costs a tiny fraction of what it destroys,” the manager told me. “That’s our advantage.”
When the Russian Army rolled into Ukraine, it was equipped for a conflict from an earlier era: an old-fashioned land war prosecuted by tanks and heavy artillery. In response, Ukraine devised a futuristic take on hit-and-run guerrilla operations. Now, when a Russian column tries to advance, it is met by a swarm of buzzing bombs. Russia has suffered about a million casualties in its attempt to invade. Since early 2024, according to an estimate by Mykhailo Samus, a researcher in Kyiv, about eighty per cent of its losses in men and matériel have been inflicted by drones.
The most dramatic application of this asymmetric approach came in June, when a fleet of more than a hundred Ukrainian drones struck targets as far away as Siberia, destroying or damaging some twenty Russian warplanes. It was the most militarily significant attack on Russia since the Second World War. The Ukrainians released a taunting video, in which first-person views of the drones careering into the planes were set to a pulsing techno soundtrack. The videos were stamped “Failsafe,” a military term that suggests immunity to harm.
While the future of warfare is being invented in places like Ukraine, U.S. officials are looking on with a growing sense of urgency. For decades, the American armed forces have relied on highly sophisticated, super-expensive weapons, like nuclear-powered aircraft carriers and stealth fighters, which take years to design and cost billions of dollars to produce. (The country’s failures in Iraq and Afghanistan were not for a lack of technical prowess.) Since the end of the Cold War, these munitions have given the U.S. near-total dominance on land, sea, and air. But now the technological shifts that have stymied the Russian invasion of Ukraine are threatening to undermine America’s global military preëminence. David Ochmanek, a former Pentagon official and a defense analyst at the Rand Corporation, told me that the American way of war is no longer viable. “We are not moving fast enough,” he said.
Throughout history, technological advantages have altered the course of wars, sometimes suddenly. In the late nineteenth century, railways displaced horses as a way of moving and supplying armies, and the Prussians exploited them to overwhelm their French opponents. In the first Gulf War, the U.S. used precision-guided cruise missiles that could be steered into an office window from a thousand miles away. The Ukrainians argue that they represent a similar technological vanguard. “We are inventing a new way of war,” Valeriy Borovyk, the founder of First Contact, whose drones carried out the strike on the Russian warplanes, told me. “Any country can do what we are doing to a bigger country. Any country!”
America’s best approximation of Oleksandr Yakovenko is Palmer Luckey, who helped found the defense startup Anduril in 2017. Not long ago, he met me at the company’s headquarters, in Costa Mesa, California, amid an array of high-tech weapons: drones, missiles, pilotless planes. Anduril is housed in a cavernous building that once contained the Orange County offices of the Los Angeles Times, whose faded logo is still visible on the exterior walls. At thirty-two, Luckey embodies the stereotype of a cocky, gnomic tech mogul: shorts and a Hawaiian shirt, flip-flops, a mullet and a soul patch. As we talked, he snacked from a bag of chocolate-chip cookies.
He wanted to show off his creations, autonomous weapons that he believes will upend many of the American military’s most cherished notions of strategy and defense. He walked over to a model of the Dive-XL, an unmanned submarine that can go a thousand miles without surfacing and is designed to be produced as quickly as an IKEA couch. “I can make one of these in a matter of days,” he said.
The U.S. military is accustomed to doing business with huge, entrenched players: companies like Lockheed Martin and Northrop Grumman that employ tens of thousands of engineers and military veterans in a culture not unlike the one inside the Pentagon. Luckey, by contrast, built an early career in video games and virtual reality. At nineteen, working from his parents’ home, in Long Beach, he created a V.R. headset, the Oculus Rift, a technology that he promised would “transport us into worlds we cannot hope to experience in real life.” He sold the company, Oculus, for two billion dollars to Facebook, whose founder, Mark Zuckerberg, brought him on to oversee the Oculus team. Their collaboration was brief. In 2017, following a controversy over a contribution that Luckey had made to a pro-Trump group, Zuckerberg fired him. “I had a real chip on my shoulder,” Luckey said. “I wanted to prove that Oculus wasn’t a fluke.”
A few months later, Luckey met with Trae Stephens, a principal at Founders Fund, a venture-capital firm led by Peter Thiel, the billionaire investor and libertarian political activist. Thiel had helped found Palantir, which was transforming the American defense establishment by integrating computer operations and simplifying tasks like tracking and destroying enemy targets. At Founders Fund, he and Stephens were searching for fledgling companies that could bring the breakthroughs of the tech world to the military.
Luckey told me that his central insight with Oculus was to distinguish himself from competitors by focussing less on the headset’s mechanism and more on its software. Unlike hardware, software could be easily replicated and regularly updated, improving it quickly and at little extra cost. For generations, the U.S. military had fielded fantastically complex systems that ran on software Silicon Valley regarded as substandard and overpriced. Luckey envisioned cheap, mass-produced weapons whose main value lay in their operating system—in their brains, not their brawn. He began working at the juncture of weaponry and artificial intelligence, to devise systems that could accumulate data and then act on it. With machines to do the fighting, humans could be kept far away from the battlefield. The goal, as he has said, was to “turn warfighters into technomancers.”
Trae Stephens joined Luckey and two additional partners to form Anduril, with seed money from Founders Fund and other investors, including one of J. D. Vance’s financial ventures. The company’s name was taken from “The Lord of the Rings,” in which Andúril, a reforged sword, stands for the renewal of the civilized world in the fight against darkness. Luckey saw his work as part of a civilizational conflict. “I wanted to take people out of the tech industry and put them to work in national security, which actually matters,” he said.
In the showroom, Luckey stopped before the Fury, a pilotless jet designed to operate at g-forces that could flatten a human pilot against her seat. To prepare the Fury for dogfights against piloted planes, Anduril’s engineers were feeding it maneuvers from the Air Force’s Top Gun school. “We’re teaching this plane all the ways to get in a position to kill the other guy and come home alive,” Luckey said. “But the cool thing is, it’s not human—right?”
Anduril has secured billions of dollars in defense contracts, as the Pentagon has been swept up in a wave of enthusiasm for unmanned systems. But many questions remain, including the fundamental one of whether such weapons work as well as Luckey says they do. Even with the Pentagon pouring cash into experiments, the vast majority of the budget still goes to the same kinds of programs that it has been pursuing for decades. A growing consensus of defense experts holds that the United States is dangerously unprepared for the conflicts it might face. In the past, the country’s opponents were likely to be terrorist groups or states with armies far smaller than ours. Now planners must contend with considerably different threats. On the one hand, there is the prospect of insurgents who can field swarms of armed drones. On the other, there is the rise of China—a “peer competitor,” which by some measures has surpassed the U.S. as a military force. There is no guarantee that we have the right matériel to prevail against either. “Shit,” Luckey said. “We’re like a gun store with no stock.”
During the Second World War and the decades after, the American armed forces devised technologies far more advanced than anything made in the private sector. “The military produced an astonishing amount of innovation,” Bill Greenwalt, a fellow at the American Enterprise Institute and a former staffer on the Senate Armed Services Committee, told me.
Facing an existential threat, the Pentagon adopted a free-form procurement process, with senior leaders often assigning several contractors to make prototypes for a single weapon and then giving a contract to the most successful contestant. “The generals threw money at good people, broke furniture, and picked winners,” Greenwalt said. This unconstrained methodology helped lead to the first reconnaissance satellites, the first integrated circuits, the first atomic weapon. “The important thing to remember about the Manhattan Project is that there were multiple pathways to success,” Greenwalt pointed out. “It was incredibly competitive.” In 1949, Admiral Hyman Rickover was assigned to oversee an effort to use the newly harnessed atomic energy to power a submarine—an idea that many observers considered fanciful. Five years later, the first nuclear submarine entered service.
Over time, though, the process became more regular and rules-bound. In 1961, President John F. Kennedy appointed a new Secretary of Defense, Robert McNamara, who had built his reputation by bringing organizational discipline to the Ford Motor Company. Under the system he helped implement, weapons were conceived not by industry but by the Pentagon, where planners were typically following five-year prospectuses drawn up by other Pentagon planners. It usually took years to design a new weapon—and only once the specifications were agreed upon did the Pentagon solicit input from defense companies and finally select a contractor to produce it. The new system was more orderly, but it was also less competitive and far less dynamic. “We stopped innovating,” Greenwalt said.
We also, to a significant extent, stopped producing. During the Second World War, the U.S. armed not only its own military but its allies’, too. American shipyards built some six thousand ships, including more than a hundred and fifty aircraft carriers. Automobile factories were converted to war production; General Motors built Sherman tanks, and Ford built B-24 Liberator bombers. In the closing stages of the war, Ford plants could turn out a B-24 every hour.
When the Cold War ended, America’s defense-industrial base shrivelled. Without persistent demand from the Pentagon, some factories closed, and others produced barely enough weapons to stay open. Skilled workers migrated to other jobs; those defense industries which still existed, like shipbuilding, were short tens of thousands of employees. As a result, American shipyards are now capable of completing only one new submarine per year.
In July, 1993, Secretary of Defense Les Aspin and his deputy, William Perry, gathered the C.E.O.s of major defense contractors for dinner at the Pentagon. Aspin and Perry told the executives that in a few years most of their companies would not exist. The end of the conflict with the Soviet Union meant that the defense budget would be cut, and only a handful of contractors would survive; the rest would either be forced to merge or be driven into bankruptcy. The meeting has become known in the defense business as “the last supper,” and its essential message proved prophetic: within a few years, the number of “prime” defense contractors shrank from more than fifty to five. The industry wasn’t happy, but the Pentagon was the only buyer, so there wasn’t much that anyone could do about it. “The last socialist systems in the world are in Cuba and the Pentagon,” a former Senate staff member who dealt with the armed forces told me.
The current procurement system favors highly sophisticated weapons, usually made in small numbers. The F-22, widely considered the world’s best stealth fighter, costs three hundred and fifty million dollars per plane. The U.S.S. Gerald R. Ford aircraft carrier costs thirteen billion and takes as long as a decade to build. A single Tomahawk cruise missile, used to attack ships or radar installations, costs about two million. (Last month, a U.S. submarine launched two dozen Tomahawks at Iran in a single night.) Earlier this year, when two F-18 fighter jets slid off the deck of the U.S.S. Harry S. Truman in the Red Sea, some hundred and twenty million dollars’ worth of machinery sank to the ocean floor.
On top of that, many components of American weapons are outsourced to adversaries. In 2024, Govini, a software company hired by the Pentagon, traced supply chains for U.S. weapons and found that nearly forty-five thousand suppliers were based in China. Many produced essential parts, including semiconductors for the B-2 bomber, the Patriot air-defense missile, and the Ohio-class submarine, which carries nuclear missiles. “Of course, in the event of a conflict, the Chinese could cut us off,” Jeb Nadaner, a senior vice-president at Govini, told me.
The United States has even found it difficult to supply allies that are at war. When Russia launched its invasion, the Ukrainian military began requesting about five hundred Javelin antitank missiles a day. In the course of three months, the U.S. shipped some seven thousand Javelins, about a third of its stockpile; at current production, it will take more than three years to replenish them. Likewise, the U.S. sent Ukraine a quarter of its Stinger anti-aircraft missiles. The missiles had been out of production, and to build new ones the manufacturer had to call back retired engineers, some of them in their seventies. Jake Sullivan, President Biden’s national-security adviser, told me that the attenuated state of America’s defense-industrial base was one of the most intractable problems he faced. Fixing it would take years, he said: “It’s a generational project.”
The combination of limited production capacity and expensive weapons sometimes constrained the government’s options. In March, President Trump vowed that the Houthis, an Iran-backed militia that was menacing global shipping in the Red Sea, would be “completely annihilated.” The Navy and the Air Force launched more than eleven hundred strikes, at a cost of at least a billion dollars in the first month. The Houthis, who sometimes operated out of speedboats and skiffs, kept on harassing ships. They shot down several American MQ-9 Reaper drones—which cost thirty million dollars apiece—and fired on two U.S. carriers. After seven weeks of fighting, they agreed to stop attacking American vessels, and Trump called off the campaign. But the Houthi force remains largely intact, and has attacked ships from other countries. Even this brief engagement left senior Pentagon officers worried that they had dangerously depleted the country’s stores of weapons.
Earlier this year, a group of Ukrainian officers stood in the lobby of a civilian building in Kyiv. Among them was Kyrylo Budanov, the country’s head of military intelligence—a hulking, baby-faced figure, instantly recognizable even though he was partly masked. He and his colleagues had gathered to boast. About a week before, a pair of Magura V7 pilotless attack boats had ventured into the Black Sea and shot down two Russian Su-30 fighter jets. It was the first time in history that combat aircraft had been shot down by maritime drones, the Ukrainians said. One of Budanov’s officers, a masked man who went by Thirteen, stepped forward and spoke through an electronic device that scrambled his voice. He pointed to a Magura V7 that had been wheeled in for the occasion: a sleek, low-slung craft made of fibreglass and polyethylene. It looked like a miniature speedboat with missiles attached. “The Ukrainian intelligence service has made a revolution in war in the sea,” he said.
As the conflict began, Russian warships roamed the Black Sea from their base in Sevastopol, a Ukrainian port captured in 2014. Ukraine hardly had a navy. When Russia blockaded the port of Odesa, a crucial outlet for grain and other agricultural commodities, it threatened to devastate an already battered economy. “We were desperate,” Thirteen told me.
Ukraine began attacking Russian naval vessels with missiles and aerial drones, and struck the Sevastopol base. Around the same time, it implemented two parallel programs to launch a fleet of naval drones. Group Thirteen, a newly created military-intelligence unit, oversaw the making of the Magura, a fast, maneuverable craft that would go after ships at sea. The country’s counter-intelligence agency developed the Sea Baby, designed to carry heavier payloads and strike such targets as bridges and ships in harbor. With ranges of more than five hundred miles, the two could threaten adversaries almost anywhere in the Black Sea.
Ukraine put them into service, and, in the course of a few weeks in early 2024, swarms of Magura drones sank three Russian warships—the Ivanovets, the Tsezar Kunikov, and the Sergey Kotov. The rest of Russia’s Black Sea fleet soon retreated from Sevastopol to Novorossiysk, on the eastern shore, and eventually began dispersing from there. This March, the Russians agreed to a ceasefire in the Black Sea. “They didn’t have a choice,” Thirteen said.
At the beginning of the war, Ukraine used drones mostly for reconnaissance. But, as they showed their worth as weapons, their use expanded. Last year, by some estimates, Ukraine’s factories turned out more than three million drones. The key to successful operations, TAF workers told me, was that the manufacturers of the drones and the soldiers using them were in the same place, allowing the software and components to be continually tweaked. The drones that I examined were remarkably simple: a lightweight square frame, four propellers, a video camera, a battery-powered motor, and room for a bomb. The attack drones, known as F.P.V.s, for “first-person view,” are guided by an operator watching a video screen that shows what the drone is seeing; other members of the unit monitor feeds from reconnaissance drones. Yakovenko described a recent attack in which a Ukrainian pilot crashed his drone into a Russian tank, forcing the crew inside to flee. Other F.P.V. drones chased down the Russian soldiers. “We killed all of them,” he said.
The Russians are terrorizing the Ukrainians with drone attacks of their own. Towns and hamlets have been largely pulverized along the front lines and for miles beyond; even American air defenses are mostly useless, because setting them up invites an immediate Russian attack. Iranian-made Shahed drones, capable of carrying large warheads long distances, have pummelled Kyiv and other cities with hundreds of strikes. Under the constant threat of attack, the Ukrainians have found it difficult to supply their front lines, and evacuation is sometimes impossible.
The prevalence of drones appears to have given the advantage to the defense. Along the seven-hundred-mile front, soldiers on both sides are huddled in fortified trenches, separated by a no man’s land known as the “gray zone.” With drones circling day and night, surprise attack is impossible, movement suicidal. If soldiers venture out, they are attacked immediately by drones or artillery.
Prevented from breaking through the lines, the Russians have lately employed a desperate maneuver. Soldiers race motorcycles across the no man’s land and then jump off and try to hold whatever territory they can—a tactic that summons Putin’s dictum “Wherever a Russian soldier steps foot, that’s ours.” Casualty rates are extremely high. Samus, the researcher, marvelled at the disregard Russian commanders had for their men’s lives. “The Russian mentality—there is nothing like it in the West,” he said. “It is something different.”
The challenge Yakovenko faces is evading Russia’s efforts to jam radio frequencies used to control his drones. “The Russians are the champions of jamming,” he said. The latest TAF drones are able to hop to a new frequency if the one in use is jammed. But the Russians can change channels, too. “It’s a game of frequencies,” Yakovenko said. He told me that only about thirty per cent of his drones make it through Russia’s defenses, but the low hit rate doesn’t bother him: he figures that they have destroyed thousands of targets. “Many kills,” he said.
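To make the “game of frequencies” concrete, here is a minimal sketch of one standard anti-jamming idea: pseudo-random frequency hopping derived from a pre-shared seed, so that drone and operator move through the same channel schedule without announcing it over the air. This is not TAF’s actual scheme; the channel list, seed, and jam detection are all invented for illustration.

```python
# A toy illustration of pseudo-random frequency hopping, one common way a drone
# and its operator can keep a link alive when individual channels are jammed.
# Not TAF Drones' actual scheme; channel values and the seed are made up.
import hashlib

CHANNELS_MHZ = [433, 868, 915, 1280, 2400, 5800]  # hypothetical control-link bands

def hop_sequence(shared_seed: str, length: int) -> list[int]:
    """Derive the same channel schedule on both the drone and the controller."""
    seq = []
    for i in range(length):
        digest = hashlib.sha256(f"{shared_seed}:{i}".encode()).digest()
        seq.append(CHANNELS_MHZ[digest[0] % len(CHANNELS_MHZ)])
    return seq

def next_clear_channel(seq: list[int], step: int, jammed: set[int]) -> tuple[int, int]:
    """Skip ahead in the agreed schedule until a channel is not known to be jammed."""
    while seq[step % len(seq)] in jammed:
        step += 1
    return seq[step % len(seq)], step

# Both ends compute the same schedule from the pre-shared seed; when 915 MHz is
# detected as jammed, both skip ahead to the next usable channel in the sequence.
schedule = hop_sequence("mission-7421", 32)
channel, step = next_clear_channel(schedule, 0, jammed={915})
print(channel)
```

In practice, of course, the jammer does not announce which bands it is blanketing; each side infers it from lost packets and hops again, which is why Yakovenko calls it a game.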
Russian electronic warfare tends to be most effective against Ukrainian drones that are within a mile of their targets. So Yakovenko has lately begun to equip drones with thermal sensors, which take command of the weapon as it nears its mark. His sensors aren’t yet accurate enough—“You have to hit the tank in a special place, like the gas tank,” he said—but they are improving.
Last fall, Russia launched its own anti-jamming technique, deploying drones controlled by long fibre-optic cables that ran all the way back to their bases—essentially, deadly kites. The cables are cumbersome. They get tangled up in trees and power lines; one image from the front shows a street crisscrossed by what looks like a giant spiderweb. But they’ve been effective, and Ukraine has struggled to defend against them.
The Ukrainian government has created a contest in which fighters upload videos of their drone strikes in exchange for points: six for a Russian soldier, forty for a tank, up to fifty for a battery of rockets. Successful units can use points to buy more drones in an online market; the government gets a vast library of videos with which it can refine its strategy.
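The bounty system is, in effect, a simple ledger. Here is a minimal sketch using only the point values mentioned above; the target categories, the verification step, and the drone price in the online market are simplified or invented.

```python
# A toy sketch of the points-for-kills ledger described above: units upload
# confirmed strike videos, earn points, and spend them on more drones.
# Point values come from the article; everything else is illustrative.
POINTS = {"soldier": 6, "tank": 40, "rocket_battery": 50}

class UnitAccount:
    def __init__(self, name: str):
        self.name = name
        self.points = 0

    def log_confirmed_strike(self, target_type: str) -> int:
        """Credit points once a strike video has been reviewed and verified."""
        earned = POINTS[target_type]
        self.points += earned
        return earned

    def buy_drones(self, price_per_drone: int = 20) -> int:
        """Spend accumulated points in the online market; the price is hypothetical."""
        bought = self.points // price_per_drone
        self.points -= bought * price_per_drone
        return bought

unit = UnitAccount("Unit A")
unit.log_confirmed_strike("tank")
unit.log_confirmed_strike("soldier")
print(unit.buy_drones())  # 46 points buys 2 drones at the assumed 20-point price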
As each side races to out-innovate the other, Yakovenko says he is not especially concerned that the Russians will come up with some transformative advance overnight. “It is a war of small steps,” he told me. “We find some solution, our enemy finds some solution.” His greater worry is that he won’t be able to procure necessary parts, like thermal cameras. Most of them are made in China, which is helping to arm Russia against Ukraine, and has begun clamping down on such exports. Corruption and betrayal abound along the way. “A lot of people try to cheat you, because you’re under pressure,” he said.
The recent Ukrainian drone attacks on the Russian warplanes marked a striking advance in the arms race: a combination of human subterfuge and precise tech work. More than a hundred drones were smuggled into Russia in pieces and assembled there. A phony businessman arranged for them to be loaded onto cargo trucks, without the drivers’ knowledge. Deep inside Russian territory—as far as twenty-five hundred miles from the border—the drones flew out and struck.
The effects were devastating, crippling about a dozen long-range bombers that were equipped to carry nuclear weapons. Borovyk, whose company made the drones, told me that the key was the element of surprise. Russia hadn’t anticipated drone strikes so far from the border, and had no time to put jamming systems into place. “They were not prepared for that type of attack,” Borovyk said.
Ukraine’s fighters have not yet been able to regularly deploy autonomous drones—the kind that can find targets without human help—but they are getting closer. Some of Borovyk’s drones were steered manually, but others were equipped with A.I. technology that could help them find their marks. According to reports in the Ukrainian press, the A.I. had been trained to recognize targets using images of old Soviet warplanes on display in an aviation museum east of Kyiv.
When Palmer Luckey began tinkering in a camper in his parents’ driveway, the kind of rapid innovation that is flourishing in Ukraine was almost unthinkable in the American defense establishment. Silicon Valley was producing a string of technological breakthroughs, but its leaders shied away from working on defense projects. The reasons were partly ideological—the tech business retained some of its roots in the seventies counterculture, which was revolted by the Vietnam War. But mostly the hesitation was pragmatic: the Pentagon’s development process was so slow that it typically took contractors years to receive any money. Many big Silicon Valley companies weren’t willing to wait, and smaller ones couldn’t afford to. Meanwhile, the technology that the Pentagon developed on its own often became obsolete before a weapon was even deployed. “By the time the F-35 came out, some of the microprocessors it used were slower than an iPhone,” a former Pentagon official who worked on tech issues told me.
In 2015, Ash Carter, President Obama’s Secretary of Defense, set out to bring the two communities together. Carter, who had a doctorate in theoretical physics, dispatched a team of officers to the Bay Area to set up an outpost—officially called the Defense Innovation Unit, but known at the Pentagon as Unit X. Its job was to find fledgling technology companies with interesting ideas and give them contracts. One of Unit X’s first initiatives was to do an end run around the Pentagon’s procurement process. By invoking an obscure paragraph buried in a budget-authorization bill, it was able to award contracts to companies as soon as they completed a successful pilot program. “Our goal was to shrink the Pentagon’s contracting process from ten years to six weeks,” Chris Kirchhoff, a founder of the unit, said. “We were able to do that.”
The Pentagon was also under pressure from Silicon Valley, which increasingly regarded itself as a rival power center to the government. In 2014 and 2016, the tech companies SpaceX and Palantir sued the government, claiming that it prevented private firms from competing for contracts; the companies argued that they could offer products at much lower costs. Both prevailed, and went on to receive billions of dollars’ worth of federal contracts, clearing the way for others.
As the Pentagon was opening up, Palmer Luckey got fired from Facebook and started Anduril. Among the first ideas that he brainstormed with his co-founders was an A.I. system that, by synthesizing enormous amounts of data, could learn to identify objects and track them in real time. Once it locked on, it could guide a mass-produced, disposable weapon to strike the target nearly anywhere on earth. They named the system Lattice, and a few months later they won their first government work: a contract, for U.S. Customs and Border Protection, to use Lattice in towers that tracked people moving across the U.S.-Mexico border.
The system worked, and Border Protection soon bought more. But Luckey believed that the ideal client for Lattice was the Pentagon. He explained to me that if the military needed to track a Chinese destroyer across the Pacific, Lattice could provide a real-time picture of the ship, using data from more than a hundred sources—a mix of classified and public channels that included geospatial satellites, ship beacons, radar, signal intercepts, and thermal sensors. With precise targeting, the military could sink the destroyer with a much smaller, cheaper missile than the ones it was using. “I can tell you, not only is that a Chinese destroyer, I can tell you which one it is—it’s a Luyang destroyer!” Luckey said. “I can tell you that because of the particular equipment it is configured with. And I know that, to achieve my objective—mission kill—I need to target either the bridge or its radar. I can put a missile right there.”
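What Luckey is describing is multi-source fusion and classification: combine position reports weighted by how much you trust each sensor, then match observed equipment against a library of known ship classes. The sketch below is not Anduril’s Lattice; the sensor names, the signature library, and the weighting rule are assumptions made for illustration.

```python
# A toy illustration of the fusion-and-classification idea Luckey describes:
# merge position reports from many sensors, then match observed equipment to a
# known ship class. Not Anduril's Lattice; all values here are invented.
from dataclasses import dataclass

@dataclass
class Report:
    source: str            # e.g. "satellite", "radar", "signal_intercept"
    lat: float
    lon: float
    confidence: float      # 0..1, how much this sensor is trusted right now
    equipment_seen: set[str]

SHIP_CLASSES = {
    # Hypothetical signature library keyed by distinctive equipment.
    "Luyang III destroyer": {"Type 346 radar", "YJ-18 launcher"},
    "Type 054A frigate":    {"Type 382 radar", "HQ-16 launcher"},
}

def fuse_position(reports: list[Report]) -> tuple[float, float]:
    """Confidence-weighted average of all position reports."""
    total = sum(r.confidence for r in reports)
    lat = sum(r.lat * r.confidence for r in reports) / total
    lon = sum(r.lon * r.confidence for r in reports) / total
    return lat, lon

def classify(reports: list[Report]) -> str:
    """Pick the ship class whose signature best matches the observed equipment."""
    observed = set().union(*(r.equipment_seen for r in reports))
    return max(SHIP_CLASSES, key=lambda cls: len(SHIP_CLASSES[cls] & observed))

reports = [
    Report("satellite", 21.51, 119.42, 0.6, {"Type 346 radar"}),
    Report("signal_intercept", 21.49, 119.44, 0.9, {"YJ-18 launcher"}),
]
print(fuse_position(reports), classify(reports))
```

The real difficulty, as Luckey’s own example suggests, is less the arithmetic than the plumbing: keeping a hundred-odd feeds flowing into the same picture fast enough to aim a missile at a moving bridge or radar mast.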
Rather than wait for military leaders to announce the kinds of weapons they needed, Anduril’s engineers would build sophisticated devices and offer them to the Pentagon. If the generals wanted something slightly different, Luckey’s team could simply rewrite the code. The weapons themselves would be little more than shells for software, making them much easier to build. “Our cruise missile has fifty per cent fewer parts than what the military uses now, and it can be put together with ten simple hand tools that I can put in a small bag,” Luckey said.
In 2018, with most of their ideas still inchoate, Luckey and Stephens walked into the office of Christian Brose, the defense adviser to Senator John McCain, who was then the chairman of the Senate Armed Services Committee. At the time, Anduril was a startup with twenty-five employees, hoping to break into the defense business. Brose, like his boss, had grown deeply frustrated with the Pentagon. He quickly realized that he and Luckey had aligned objectives. Later that year, after McCain died of cancer, he joined Anduril as the head of defense strategy.
The way Brose saw it, the Pentagon had to be transformed. Not only did it need a new strategy; it also needed to supplant many of its most coveted weapons. “The U.S. used to have a system that worked, but it’s broken,” he told me. “We spend a ton on defense, but if we don’t change we’re going to lose the wars of the future.” The war that worried him and his peers most was with China.
Earlier this year, at the Center for Strategic and International Studies, in Washington, a dozen or so experts gathered to conduct a simulated war between the United States and China over the island of Taiwan. Though most discussion about such a conflict centers on an all-out Chinese invasion, the C.S.I.S. war game was built around what many observers regard as a more likely scenario: a blockade, designed to box out the American Navy and squeeze Taiwan into submission.
The experts split into teams representing the U.S. and China, and each side was armed with the weapons that its country is thought to possess. As the game began, a crisis was already under way; China had encircled the island, and its sailors had sunk ships that attempted to run the blockade. U.S. forces announced that they would protect Taiwanese vessels, and American and Chinese ships began exchanging fire.
The scenario felt alarmingly plausible. In 2021, President Biden broke with decades of “strategic ambiguity” by publicly committing the United States to Taiwan’s defense. Biden called America’s support for Taiwan “sacred”—but the island also produces the world’s most sophisticated microchips, which are considered essential to the global economy. Although President Trump has been less declarative, Defense Secretary Pete Hegseth recently warned China that any attempt to conquer Taiwan would have “devastating consequences.”
From the game’s opening moves, the conflict escalated rapidly. The Taiwanese Air Force began attacking Chinese ships and mining the Taiwan Strait, and Chinese warplanes struck ports on the island and shot down two American planes. The U.S. retaliated by sinking a squadron of Chinese warships in harbor. After China fired ballistic missiles at American bases in Japan, the fighting exploded, with the U.S. launching massive strikes against the mainland and Japanese jets attacking Chinese ships. China’s missiles sank three aircraft carriers—drowning as many as fifteen thousand sailors—and destroyed a quarter of the American Air Force.
By the time the game was stopped, each side had lost tens of thousands of people. Seth Jones, the president of C.S.I.S.’s Defense and Security Department, who took part in the game, seemed taken aback by the ferocity of the fighting. “I’m surprised how rapidly things got out of control,” he said. Still, it could have been worse. The Chinese didn’t strike the U.S. mainland, as they do in other war games. In some simulations, the two countries have traded nuclear assaults, with hundreds of thousands of casualties.
American officials say that they are racing to keep this kind of scenario from becoming real. According to U.S. intelligence, Xi Jinping, the Chinese leader, has told his military to be ready to seize Taiwan by 2027. The People’s Liberation Army has been testing the hardware it would need to undertake an invasion, including landing barges that appear designed for beaches that, like Taiwan’s, have shallow approaches. The Chinese Navy and Air Force have been sending planes and vessels on sorties around the island. These “aggressive maneuvers around Taiwan right now are not exercises, as they call them,” Admiral Samuel Paparo, the head of U.S. forces in the region, said this February. “They are rehearsals.”
Some observers argue that each side is responding to the other’s rhetoric, and the escalation represents a game of mutual deterrence. But credible deterrence requires a force that can hypothetically defeat your opponent’s. In the past decade, the People’s Liberation Army has grown rapidly in capability and sophistication, in a way that seems designed to thwart the United States. As America’s Navy has steadily shrunk, China’s has grown to surpass it in size, though not yet in tonnage. (China’s shipbuilding capacity is more than two hundred times that of the U.S.) Where the U.S. has formed its naval strategy around nuclear-powered carriers, China has built hundreds of anti-ship missiles, known colloquially as “carrier killers.” Some can reach speeds of more than seven thousand miles an hour, and are capable of evasive maneuvers that make them nearly impossible to intercept. What’s more, China is deploying surface-to-surface missiles of sufficient range and number to destroy American bases in Guam, the Philippines, and Japan. “The scale of the Chinese buildup is amazing,” Tom Shugart, a fellow at the Center for a New American Security and a former submarine commander, told me. “There hasn’t been anything like it in peacetime since the Cold War.”
In August, 2023, American defense officials launched the Replicator initiative, a crash project to mass-produce air and sea drones that could deter Chinese military action in Asia. At the same time, commanders across the armed forces are racing to update their arsenals. The Air Force is advancing a program to have as many as five unmanned craft accompany each manned fighter plane; Army leaders have mandated that each division have a thousand drones.
“We are sending thousands of drones to Taiwan and the Pacific,” a former senior defense official told me. As Pentagon leaders see it, the key is to get unmanned systems ready to go immediately. “I want to turn the Taiwan Strait into an unmanned hellscape,” Admiral Paparo said last year, “so that I can make their lives utterly miserable for a month, which buys me the time for the rest of everything.” The Pentagon intends to have a substantial deployment complete by 2027. But Paparo and other commanders worry that the Chinese will seize Taiwan before then, presenting the United States, some seven thousand miles away, with a fait accompli. “If the Chinese decide to invade, things would get ugly quickly for both sides,” David Ochmanek, of the Rand Corporation, told me.
In crucial respects, the vision sketched out by American officials resembles Ukraine’s effort against Russia: flooding the skies and seas with inexpensive drones in order to thwart a larger adversary fighting near its homeland. In other ways, it is vastly different. Ukraine’s front extends for seven hundred miles; the western Pacific is millions of square miles, and any conflict around Taiwan would be fought not just on land and in the air but also on and under the sea.
Christian Brose, of Anduril, told me that the fleet of unmanned systems would be guided by what he calls “the kill chain.” If a Chinese invasion force set out across the Taiwan Strait, U.S. satellites would detect ships and relay their locations to targeting systems, which would guide the drones. “This needs to be happening very fast, again and again and again,” Brose said.
Brian Schimpf, Anduril’s C.E.O., told me that the Chinese would try to sever communications with his missiles, just as the Russians are doing with drones in Ukraine, by jamming the radio signals that guide them. The Americans, of course, would be doing the same to the Chinese. “Everyone is doing it,” he said. “We are. The Chinese are. ‘How do I create confusion for the other side? And how do I mitigate the confusion they are going to create for me?’ ”
Every Anduril weapon is built to be capable of operating independently of humans. “We’re assuming all the weapons are going to be cut off from us,” Luckey said. But how autonomous would the American weapons be? Would they fire on command? Or fire on their own? Michael Horowitz, a former senior official who helped oversee artificial-intelligence policy for the Department of Defense, said, “Ukraine has shown us that in future wars we have to expect that data links with all sorts of platforms and systems will be severed. Autonomy lets you solve that issue.”
Schimpf sketched a scenario: American satellites detect Chinese warships and notify U.S. forces, which fire a volley of, say, forty cruise missiles. Once the missiles were launched, the Chinese would almost certainly jam the radio navigation system. But A.I. would take over: each missile would select one of the Chinese ships to strike and inform the other missiles of its intention. “I can say to the missiles, ‘Go look for ships,’ ” Schimpf said. “And they’ll find the ships.”
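Schimpf’s scenario amounts to a distributed assignment problem: once the link home is cut, each missile must pick a ship and avoid piling onto one that its neighbors have already claimed. Below is a minimal sketch of one greedy way to do that, assuming the missiles can still exchange short messages with one another; it is not Anduril’s guidance logic.

```python
# A toy sketch of the behavior Schimpf describes: after the link home is jammed,
# each missile picks a target on its own and tells its neighbors, so the salvo
# spreads across the detected ships instead of converging on one. Illustrative only.
def assign_targets(missile_ids: list[str], detected_ships: list[str]) -> dict[str, str]:
    """Greedy round-robin assignment negotiated over a missile-to-missile link."""
    claims: dict[str, int] = {ship: 0 for ship in detected_ships}
    assignment: dict[str, str] = {}
    for missile in missile_ids:
        # Each missile claims the ship with the fewest claims so far,
        # then broadcasts its choice so the others update their counts.
        choice = min(detected_ships, key=lambda ship: claims[ship])
        claims[choice] += 1
        assignment[missile] = choice
    return assignment

salvo = [f"missile-{i:02d}" for i in range(40)]
ships = ["destroyer-A", "destroyer-B", "frigate-C"]
print(assign_targets(salvo, ships))
# Forty missiles spread roughly evenly across the three detected ships.
```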
That’s the idea, at least. Schimpf expects the Chinese to use an array of tactics to throw the missiles off course, including deploying thousands of decoy targets in the water and in the air. Even navigating the vast stretches of the Pacific, where the missiles won’t have any landmarks to guide them, presents a challenge. “Over the water, it’s much harder,” he said.
Defense experts I spoke to were enthusiastic about Anduril’s ideas. But several current and former officials said that, even if the U.S. military had the weapons in the Replicator initiative, it is woefully incapable of using them. The kill chain that Anduril imagines requires rapid, intricate orchestration of satellites and sensors for reconnaissance, data collection, and targeting. But those satellites are controlled by myriad federal agencies, which, according to knowledgeable insiders, are too fiercely independent to coöperate smoothly. A former Senate staff member who worked extensively on these issues expressed deep frustration. “The Air Force won’t work with the Navy,” he said. “The Army won’t work with the Air Force. The N.S.A. won’t work with anybody. The National Reconnaissance Office won’t work with anybody. The National Reconnaissance Office and the National Geospatial-Intelligence Agency are both supposed to work with the N.S.A.—and they won’t talk to each other.”
The branches of the military, which maintain their own communication networks, have their own obstacles. Navy ships typically cannot communicate directly with Air Force jets, even when they are operating in the same theatre. Even within the Air Force, many planes cannot talk to one another; the pilot of an F-22 fighter jet can’t communicate directly with the pilot of an F-18. “If you flew the two aircraft next to each other, the only way the pilots could communicate would be to wave to each other,” the retired Air Force general Scott Stapp, who spent several years working on such concerns as a senior Pentagon official, said.
Experts see the issue of “joint command and control” as one of the military’s biggest, most underpublicized problems. The former Senate staffer imagined what might happen during a crisis in the Western Pacific. A satellite could detect a radio signal sent by what the N.S.A. believes is a Chinese warship. To make a precise identification, the N.S.A. would need the National Geospatial-Intelligence Agency, which oversees imaging satellites, to take a photo. “You have to make the request through a tasking mechanism,” the former staffer said. “And then it gets shipped over to the N.G.A., to take a picture of this ship. That can take several minutes. There’s a war going on, and you’re asking yourself, ‘Do I have to shoot this thing?’ But by then the ship has moved.” He continued, “It has to be boom, boom, boom, boom. And people have to be making split-second decisions, and you have to get the latencies down, because it’s not just one fucking ship but hundreds of targets, all at the same time.”
For two decades, senior legislators and military leaders have been working, mostly without success, to overcome these problems. In Pentagon jargon, the goal is known as Joint All-Domain Command and Control, a term that has become so familiar that it has acquired a shorthand—JADC2. “It’s not a technology issue,” Stapp, the former Air Force general, said. “It’s a cultural issue. The commercial world solved these kinds of problems years ago, and we have made the choice to run on separate networks with separate capabilities.”
The prospect of armed drones limited only by the capacities of artificial intelligence raises a disturbing question: Could they escape our control? Ever since humans began to dream of intelligent machines, they have feared that their creations would turn on them. In “R.U.R.,” Karel Čapek’s play from 1920, androids created to do humankind’s drudge work rise up and wipe out their makers. In “The Terminator,” from 1984, an A.I. defense system called Skynet becomes self-aware and triggers a nuclear war. This year’s “Mission: Impossible” sequel has basically the same theme: a rogue A.I. known as the Entity seizes control of nuclear weapons and comes within a tenth of a second of obliterating life on earth.
Similar warnings have come from more sober sources. Demis Hassabis, a prominent A.I. innovator at Google, has warned, “A bad actor could repurpose those same technologies for a harmful end.” Yet the Pentagon seems more concerned with making A.I. systems work effectively. Under a 2012 Defense Department order updated by Biden and left intact by Trump, the military may employ autonomous systems as long as they succeed in tests and their use is consistent with international humanitarian law.
The most prominent real-time laboratory for using A.I. in warfare is in Israel. When Hamas-led fighters crossed the border on October 7th, 2023, and launched a bloody assault, hundreds of thousands of Israelis were called to military duty. Among them was a technology entrepreneur from Tel Aviv, who asked me to refer to him as Michael. For four months, Michael told me recently, he commanded a group of sixteen targeters for the Israel Defense Forces, taking advantage of powerful computer programs that helped select targets. “We called ourselves warriors of the keyboard,” he said.
For years, I.D.F. officers had used periods of relative peace to assemble lists of suspected militants and structures to be targeted if a war broke out. The lists often took months or even years of painstaking work to compile; by late 2023, there were some two thousand Hamas targets and some ten thousand Hezbollah targets. But Israeli military officials told me that, after October 7th, the I.D.F. quickly burned through its lists. The opening phase of the campaign was extraordinarily fierce, with the military hitting several hundred targets a day. Even the American bombardments of Saddam Hussein’s forces in Iraq were never that intense. “The volume and velocity of bombing appear to be unprecedented in modern warfare,” a former senior American defense official told me.
In previous conflicts, Israel might have relented after a week or two. This time, the I.D.F. kept up the campaign, backed by political leaders. The former senior American defense official told me, “Israel set out to kill every fighter in Hamas. They had never done that before.”
To generate new targets, the I.D.F. tried an experiment. Working with the country’s bustling tech industry, intelligence officers had developed programs to help identify suspected Hamas and Hezbollah members and find where they were hiding. They included Lavender, which identified potential militants, and Gospel, which vetted both people and structures.
The I.D.F. employed these programs to sift huge amounts of data compiled on Gaza, everything from social-media posts to government records. They searched text messages and drone surveillance videos for patterns of suspicious activity. They examined people’s friends, relatives, and associates for links to Hamas. A former senior I.D.F. officer told me that Israel’s security agencies could record the millions of telephone conversations happening in Gaza each month, but didn’t have the manpower to listen to them all. So they tasked an A.I. with scanning the conversations and flagging any voice that matched a recording of a suspected militant from Israel’s files. The A.I. searched for keywords and pinpointed the locations of suspects’ phones. “Now, instead of thirty million conversations, we have one million conversations, and we can have our linguistic analysts listen to those,” he said.
A former senior Israeli military official said that the programs were constantly refining their own methods. “It’s not just finding the targets that’s important but how to locate the people more quickly as they move around,” he told me. “We are learning all the time. The A.I. is learning.” At one point, he said, intelligence officers determined that they could find places Hamas had buried rockets by identifying where the soil had shifted after heavy rains. So they used a program to scan hundreds of hours of drone footage and find disturbed soil, “even if it had moved only two centimetres. And then, like that, we created another two hundred targets.”
Much of the targeting work was done by Unit 8200, a wing of the I.D.F. whose function was to gather signals intelligence. For most of the war, it was run by General Yossi Sariel, who oversaw a team of twelve thousand, including targeters and linguists who worked from a desert airbase at Nevatim. The former senior I.D.F. officer told me that the targeters were meant to see artificial intelligence as a tool, not as a moral arbiter. “The purpose of the machine was to support the soldier, not replace him,” he said. “Our A.I. programs never took the decision to attack anyone. Only humans made those decisions.”
Michael, the targeter, described the process: The A.I., sifting the data, would suggest a target and list the factors, such as telephone contacts and video evidence, that supported a link to Hamas. Based on those, it would give an estimated likelihood that the person or building should be struck. “What we have is a priority queue,” Michael said. “The A.I. will say, ‘You should watch this guy.’ ”
Michael told me that his team was required to attempt to verify each target: examining video footage from drones, listening to telephone conversations. “My job in the targeting room was to put together all the indications and decide, What am I looking at?” he said. He added that he was also required to estimate how many civilians would be killed or wounded in an attack. If a suspected militant was in an apartment building, he would examine property records and drone footage to determine how many people lived there. “The A.I. thing says, ‘You should pay attention to this,’ and then I gotta do this whole checklist,” Michael said. “Who else is in the building? When did they leave?” In the course of a typical workday, the programs that Michael used would give his team about a hundred suggestions. He would select about five of them and send the recommendations to superior officers. “Usually two will be accepted,” he said.
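Abstractly, the workflow Michael describes is a machine-ranked queue followed by a human checklist, with only a fraction of suggestions forwarded up the chain. The sketch below is emphatically not the I.D.F.’s software; the fields, scores, and review rule are invented solely to show the shape of the process he recounts.

```python
# An abstract sketch of the workflow described above: software ranks candidate
# targets by estimated likelihood, a human works through a checklist on the top
# few, and only a fraction are forwarded for approval. The fields, thresholds,
# and review rule are invented for illustration.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Suggestion:
    neg_score: float                 # negated so heapq pops the highest score first
    target_id: str = field(compare=False)
    supporting_evidence: list[str] = field(compare=False, default_factory=list)

def top_suggestions(queue: list[Suggestion], n: int) -> list[Suggestion]:
    """Pop the n highest-scoring machine suggestions for human review."""
    heapq.heapify(queue)
    return [heapq.heappop(queue) for _ in range(min(n, len(queue)))]

def human_review(s: Suggestion) -> bool:
    """Placeholder for the analyst's checklist; in the account above, always the deciding step."""
    return len(s.supporting_evidence) >= 2   # invented rule standing in for human judgment

daily_queue = [Suggestion(-0.92, "site-17", ["intercept", "video"]),
               Suggestion(-0.40, "site-03", ["intercept"])]
forwarded = [s for s in top_suggestions(daily_queue, 5) if human_review(s)]
print([s.target_id for s in forwarded])
```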
As the battle raged, though, there were times when he felt pressure to decide on targets too quickly. “Sometimes I couldn’t do all the preparation and all of the checks that I should have,” he said. “Obviously, there were mistakes.” He added that he was comfortable with the final outcome of his work. But Adam Raz, an Israeli writer and activist, said other I.D.F. targeters had told him that in the most intense periods of the war their efforts were merely pro forma. “Most times, it took thirty seconds to a minute to get the target from Lavender or Gospel, verify it, and then give it to the Air Force to strike,” he said.
An estimated sixty thousand Palestinians have died in the conflict, prompting widespread accusations that Israel has committed war crimes. Yet Israeli authorities show little concern about the targeting systems. “We ended up, I believe, with about twenty-five thousand Hamas killed and twenty-five thousand civilians,” a former Israeli political leader told me earlier this year. “This is a better proportion than was ever achieved by a modern military.” When I ran that argument by John Spencer, a professor at West Point, he concurred that similar attempts to expunge enemies from densely populated areas had often resulted in higher proportions of civilian deaths. In 2016, the U.S. military initiated a campaign to root out ISIS from Mosul, Iraq, which killed about five thousand militants and twice as many civilians; the fighting ended up razing a city of two million people. In the Second World War, when the U.S. retook Manila from the Japanese, about seventeen thousand soldiers and a hundred thousand civilians were killed.
In Gaza, though, no one knows precisely how many people have died, or what proportion were innocents; the Gaza Health Ministry, which is run by Hamas, maintains that more than half were women and children. American officials suggested that the essential issue was one of human judgment. “The civilian casualties in Gaza were not an A.I. issue—they appear to be a rules-of-engagement issue,” Michael Horowitz, a Deputy Assistant Secretary of Defense in the Biden Administration, said. Michael, the targeter, told me that in the early stages of the conflict he was permitted to recommend a strike that could result in as many as twenty civilian deaths for one suspected militant. The strictures tightened and loosened over time, but they were typically relaxed for high-ranking figures. “When we killed Nasrallah, there were a lot of people in that building, you know?” he said.
The former senior U.S. defense official told me Israel permitted civilian casualties in numbers that, by American standards, were greatly disproportionate to the value of the militants being attacked. “During the invasion of Iraq, if we were contemplating hitting a target that might result in twenty-five civilians being killed, that’s a decision that would have gone all the way to the President or the Secretary of Defense,” he said. “In Gaza, that was happening every day.”
Sebastian Ben Daniel, a lecturer at Ben-Gurion University and a critic of the I.D.F., told me that the claims of precise targeting can’t be verified, because the way the A.I. systems function is largely a mystery. “How do we know that this person was a legitimate target?” he said. “We don’t know, because nobody can check. The algorithm is a black box. The military says it looks at millions of parameters. But what parameters? We don’t know.” A.I. systems like the ones that the I.D.F. used often fail to understand context; if someone says “watermelon” on the phone, the A.I. can’t tell if he’s making an oblique reference to a bomb or just talking about fruit. “You think this person was Hamas, because he met somebody in Hamas, or he called somebody in Hamas—so you kill him,” he said.
Ultimately, Ben Daniel argued, the purpose of the A.I. systems was to lend a veneer of legitimacy to a preconceived policy. “The goal was not to kill this guy or that guy, for which A.I. was sometimes useful,” he said. “The goal was the destruction of Gaza. A.I. gives you that effect without the public outcry.”
Even within the I.D.F., there is some concern that A.I. will displace human intelligence. The former senior I.D.F. officer told me that Israel had used a combination of technical and human means to track Hezbollah leaders in Lebanon, some of them for years. “We knew where Nasrallah was almost every single day,” he said. “We could have killed him whenever we were asked.” Right until the end, he said, Nasrallah was convinced that Israel would never strike him. He was only forty feet underground when the bomb hit.
The former senior Israeli official spoke proudly of a case in which targeters believed that a Hezbollah leader was hiding in a Beirut apartment, and wanted to gather details of its layout and surroundings. “We can send someone to the street to take photos,” he said. “We have people on the ground—but not inside.” To acquire more precise information, the I.D.F. developed a telephone that appeared to be registered in Lebanon. Then an agent posing as a wealthy expatriate called a real-estate broker in Beirut and said that she was interested in several properties on the same street. The former officer described the scheme: “Some nice woman will start with you on the phone. She’s very rich. Her father was from Lebanon. And she wants to buy the entire block.” The woman asked to hear details about the street, the apartment, the specific room where the target was thought to be; the broker provided all the information the I.D.F. needed. “Do you know how many people work for us without knowing that they work for us?” the former officer said.
Still, for Israeli security officials, small victories do not assuage the sense that they missed intelligence that might have forestalled a war altogether. The failure to prevent the October 7th attacks still weighs heavily. One cause, some officers told me, was an overreliance on intelligence gathered by technological means. Cameras set up along the border were easily disabled, and warnings from intelligence officers were ignored. The former Israeli military official told me that, during the attack, some militants switched off their cellphones to make themselves harder to track. Others simply left their phones home.
Indeed, the former official said, Israel had largely given up trying to cultivate human sources inside Hamas and Hezbollah. He said that the I.D.F., himself included, had fallen in love with technological methods because they seemed so easy to use, compared with the tedious and dangerous process of cultivating spies. “How many human souls did we have to describe the reality for us in Gaza and Lebanon on the night of October 7th?” he said. “Zero.”
The former official continued, “This is the main reason that created this great failure and caused us not to see what Hamas was planning to do. The feeling was ‘I don’t need to know you. I don’t need to know where you are going to pray, or what is your ideological way of thinking—I don’t need them because I have your phone.’ The trouble is, on the night they attacked, their devices were turned off.”
So far, Anduril has secured several billion dollars’ worth of military contracts, including one for sending drones to Taiwan. Early this year, the company announced that it was taking over a twenty-two-billion-dollar project, formerly run by Microsoft, to develop “augmented reality” headsets for the Army to use in combat. To produce its weapons, Anduril is planning to open a sprawling factory near Columbus, Ohio. Luckey told me that, in order to build a secure supply chain, none of the components would come from China.
Financial analysts have been speculating that Anduril will soon open investment to the public. Still, the essential question remains: In the uncertainties of combat, will Luckey’s unmanned systems work? Even admirers of the company evince some skepticism about weapons built around A.I. “I would take any claims of success with a grain of salt,” a former senior Pentagon official told me. “The Pentagon needs to do its own testing.”
On a lonely stretch of chaparral near Fort Stockton, Texas, I watched two Anduril engineers make their last adjustments before test-firing a Roadrunner—a five-foot-tall interceptor similar to the company’s attack drones, except that it is designed to crash into such airborne targets as jets, missiles, and drones. At about a hundred thousand dollars each, the Roadrunner isn’t Ukrainian-style cheap, but in the Pentagon’s arms bazaar it qualifies as a bargain. If it misses its target, it returns to base, to be fired again. “It lands just like a spaceship,” an engineer named Jackson Wiggs told me.
The Roadrunner is built to be launched out of its own packing crate; before the test flight, the engineers placed one of those crates in the scrub, as tumbleweeds skittered by. A low buzz from an intruding drone echoed from the other side of a nearby ridge. As the sound drew closer, Wiggs and his colleague pressed a button on a console. The sides fell from the packing crate, and the Roadrunner, a squat device that looked a little like a penguin, was propelled upward by two turbojets. It climbed to about three hundred feet before it turned and flattened until its fuselage was parallel to the earth. Then, like its namesake, the Roadrunner took off, sailing over the ridgeline. Seconds later, it shot past the intruding drone, missing it by a precisely calibrated distance. It circled back, righted itself, and landed neatly next to its packing crate. “Perfect,” Wiggs said.
Even as the Anduril engineers congratulated themselves on a successful test, people elsewhere were scrambling to create new advantages, under the messy conditions of war. Ukraine launched autonomous craft from catapults and snared Russian drones in fishing nets. Israel, in its recent conflict with Iran, deployed lasers to blast drones from the sky by burning up their guidance systems. An American company called BlueHalo is testing a similar device. It’s carried on a truck, and, after an investment of nearly a hundred million dollars, can fire individual shots for three dollars each. One day, it, too, will be eclipsed. ♦