Artificial Intelligence - Atlantic Council https://www.atlanticcouncil.org/issue/artificial-intelligence/ Shaping the global future together Fri, 30 Jan 2026 20:58:12 +0000 en-US hourly 1 https://wordpress.org/?v=6.8.2 https://www.atlanticcouncil.org/wp-content/uploads/2019/09/favicon-150x150.png Artificial Intelligence - Atlantic Council https://www.atlanticcouncil.org/issue/artificial-intelligence/ 32 32 Dispatch from India: How a low-cost, high-quality consumer model can expand India’s AI adoption https://www.atlanticcouncil.org/dispatches/dispatch-from-india-how-a-low-cost-high-quality-consumer-model-can-expand-indias-ai-adoption/ Fri, 30 Jan 2026 20:58:10 +0000 https://www.atlanticcouncil.org/?p=902843 Applying the idea of India’s “sachet model” of low-cost consumer goods to AI services could accelerate AI adoption in the country.

The post Dispatch from India: How a low-cost, high-quality consumer model can expand India’s AI adoption appeared first on Atlantic Council.

]]>

Bottom lines up front

PUNE, INDIA—As India prepares for the AI Impact Summit on February 19 and 20, the Indian government’s pitch for wider adoption of artificial intelligence (AI) has centered on the potential for the technology to benefit Indian society. As my colleague Trisha Ray wrote in November, India “appears to be taking a people-centered approach, emphasizing use cases that have the greatest scope for positive impact for the widest swath of the population.”

During my time in Pune, a technology hub in Maharashtra, I spoke with local students, scientists, tech startup workers, and farmers. Though we met to talk about other topics, they consistently brought up AI and how they wished to take advantage of the technology. As I had more of these conversations, I found that to benefit the widest swath of the Indian population, the country should adopt a “sachet” approach to AI as a consumer product. The idea of applying the idea of sachets, or small packets used to package small consumer products, to AI is not new in India. But so far, this model has lacked proofs of concept and investment from the private sector, which has instead attempted to expand access by offering low-tier subscription models.

The sachet approach takes inspiration from the model that sparked India’s consumer goods revolution of the 1980s. Prior to that, Indian consumer products such as shampoo, talcum powder, and hair oil were sold in quantities of 50 grams (g) to 500 g. In the late 1970s, entrepreneur Chinni Krishnan found a niche in selling these products in cheap miniature packets, or sachets, containing as little as a few grams. By the mid-1980s, this more affordable consumer model made a wide variety of products more accessible to broader swaths of the Indian population.

Currently in India, a monthly ChatGPT or Perplexity Pro subscription costs ₹1,999 ($22.17) and a monthly Google AI subscription costs ₹1950 ($21.62). AI companies do recognize the need for less expensive and more accessible options, as the low-tier subscription services ChatGPT Go and Google AI Plus both cost ₹399 ($4.42) per month. Moreover, Google AI Pro and Perplexity Pro are also available for free for a year to eligible college students. And Perplexity AI partnered with telecom giant Airtel to offer a year’s free access to Perplexity Pro to Airtel’s 360 million subscribers. But this still leaves a huge portion of the potential Indian consumer market for AI untapped.

To make AI accessible to the widest possible swath of the population, AI developers should offer not just cheaper monthly subscription models but also sell the equivalent of a sachet of AI. This means offering small-scale uses of AI tools and applications for low fees. For example, one notable approach that has already been adopted is the IndiaAI Compute Pillar, which allows scholars, researchers, and startups to utilize computational power for less than a dollar per hour. To make this a scalable consumer product, however, the private sector would need data from the government on how the Compute Pillar is being used. Such data could make Compute Pillar a proof of concept for the AI sachet model. Under India’s AI Governance Guidelines, metrics for both the scale of adoption and how consistently the service is used could set the bar for whether this proof of concept should spur a larger-scale investment in such services.

India also has ample experience with scaling up such society- and accessibility-driven models. The Aadhaar biometric ID system, the Unified Payments Interface instant payment system, and the country’s digital public infrastructure (DPI) buildout were bottom-up models. For example, from 2011 to 2021, the number of Indian adults (ages fifteen and up) with a bank account rose from 35 percent to 80 percent thanks to this approach.

As an illustration for how this sachet model could be of use, think of the places where shampoo and other kinds of sachets are sold in India—usually small mom-and-pop stores run by one person or family. For such small stores, bookkeeping can be a laborious, time-consuming task. But with a ₹15 AI sachet, a shopkeeper could take photos of that day’s transactions, prompt an AI to parse the handwriting, and calculate revenue and inventory figures. If small business owners were to widely adopt AI sachets for such tasks, it would be a significant step toward demonstrating the scalability of the AI sachet model. This is how shampoos and other consumer goods expanded their footprints using the sachet model. 

During my trip to Pune, many of the people I spoke with were curious about how AI can help improve efficiency in areas including business, scholarship, research, management, and farming practices. When it comes to harnessing this demand for wider AI adoption, the government can play a major role in bringing stakeholders such as unions, cooperatives, and trade associations together with private sector AI developers to demonstrate the utility of AI for their respective fields.

At the AI Impact Summit, centering on the three sutras of “people, planet, and progress,” policymakers and tech company leaders should meet with small business owners, farmers (most of whom are small-scale), students, and others to discuss the benefits of AI adoption. Moreover, an AI impact case study in Pune or the wider state of Maharashtra could serve this purpose further, allowing the private sector and India’s AI governance model to bring more proofs of concept to empower society-driven, value-based AI adoption in India.

The post Dispatch from India: How a low-cost, high-quality consumer model can expand India’s AI adoption appeared first on Atlantic Council.

]]>
How India’s AI talent playbook can provide a blueprint for aspiring AI powers https://www.atlanticcouncil.org/blogs/geotech-cues/how-indias-ai-talent-playbook-can-provide-a-blueprint-for-aspiring-ai-powers/ Fri, 30 Jan 2026 17:49:25 +0000 https://www.atlanticcouncil.org/?p=902564 As host of the AI Impact Summit, India has the opportunity to build a framework that can help enable emerging economies tap the benefits of AI adoption.

The post How India’s AI talent playbook can provide a blueprint for aspiring AI powers appeared first on Atlantic Council.

]]>
In February, New Delhi will host the AI Impact Summit, a gathering of policymakers, industry leaders, and researchers, with the tagline “People, Planet, Progress.” This summit arrives at a turning point, as the center of gravity on artificial intelligence (AI) adoption shifts toward emerging economies, home to three-quarters of the world’s population. With the summit, India, already a leader in AI skill penetration, is positioning itself as a “shaper” rather than a mere “adopter” of these technologies.

But the success of the New Delhi summit will depend on how effectively it moves beyond rhetoric to address the realities of AI adoption, including the need for workforce development. To this end, on January 23, the Atlantic Council hosted an official pre-summit event in partnership with the Indian embassy in Washington, DC. The event opened with remarks by Ajay Kumar, minister (commerce) at the Indian embassy in Washington, DC, as well as Tess DeBlanc-Knowles, senior director of the Atlantic Council’s Technology Programs. This was followed by a panel discussion with Martijn Rasser, vice president for technology leadership at the Special Competitive Studies Project; Nicole Isaac, vice president for global public policy at Cisco; and Peter Lovelock, chief consultancy and innovation officer at Access Partnership. Below are some of the key takeaways from that discussion, as well as several of the panelists’ recommendations for how to approach these issues heading into the AI Impact Summit. The discussion underscored that while the potential for AI-driven growth is immense, the hurdles, ranging from a global talent shortage to fragmented labor data, require more than just market forces to overcome.

The global AI talent gap

The current global AI talent landscape can be viewed as a pyramid, according to Rasser. At the apex, he said, sits a cohort of around ten thousand elite PhD-level researchers and machine learning engineers. While the United States and China currently dominate this top layer of researchers, the real opportunity for emerging powers lies at the applied level. India possesses significant depth in its service sector, but the true challenge is building institutional readiness, ensuring that organizations can effectively channel available talent into high-value applications.

The most underappreciated deficit is not in raw coding but in AI-adjacent skills. There is a pressing need for product managers and domain experts who can bridge the gap between technical tools and organizational needs. For emerging economies, said Lovelock, the goal should not be to replicate Silicon Valley’s research labs, but to build an ecosystem where AI is “burned into” industrial applications such as supply chain management and export-import calculations.

AI infrastructure as workforce policy

“At its core, AI is designed, built, and deployed by humans,” noted Knowles. Indeed, a persistent theme for the global majority is that connectivity cannot be separated from workforce policy. Without reliable digital access, Isaac noted, billions remain excluded from the transformative benefits of AI. Security is another foundational layer; as AI environments become more complex, training in cybersecurity and digital resilience becomes essential to protect vulnerable populations from bad actors.

Trisha Ray, Martijn Rasser, Nicole Isaac, and Peter Lovelock at the Atlantic Council’s public panel, “Road to Impact Summit 2026: India’s AI talent playbook,” hosted on January 23, 2026.

Kumar, the Indian embassy official, laid out India’s strategy for a comprehensive five-layer “AI stack,” including sovereign models, semiconductors, and data centers. By providing compute power to educational institutions at a fraction of the global market rate, he argued, the government aims to democratize access across smaller cities. However, the widening digital divide remains a threat. If certain segments of the population are left behind, the resulting “have and have-not” divide could persist for generations, he said.

The other data problem

We cannot manage what we cannot measure. Policymakers, said Lovelock, are currently operating with “static” data that looks in the rearview mirror. Traditional labor statistics, often based on outdated surveys, are ill-suited for a fast-moving technology. Furthermore, labor data is often fragmented across various ministries, making it difficult to understand where the actual skill gaps lie.

Standard adoption metrics are increasingly irrelevant because individual AI use is highly varied. Instead of tracking who is using the technology, said Lovelock, governments need a “diffusion framework” that measures the actual impact of AI use on the economy. Only then can they make the strategic bets required for a long-term return on investment.

Four pillars for the summit’s AI talent agenda

Following from the panelists’ insights, the AI Impact Summit can deliver a scalable and inclusive AI talent framework by coalescing the global community around four primary actions:

  • Modernize education through personalized AI tools. Rather than sticking to the “one-to-many” broadcast model of traditional schooling, curricula should be reformed to put AI tools directly in the hands of students. This shift allows for personalized learning and ensures that students learn by doing, preparing them for a rapidly changing job market.
  • Create an AI Diffusion Index to measure actual adoption. Policymakers should move away from static adoption statistics and toward real-time data signals that measure how AI is being embedded into industrial and public services. This requires supplementing government surveys with nontraditional data sources to better align educational output with actual labor market demand.
  • Treat connectivity and security as foundational workforce issues. Investment in fiber and satellite infrastructure must be paired with training in digital resilience and cybersecurity. This ensures that the benefits of AI are shared broadly and that new users are protected from the heightened risks of an AI-ready environment.
  • Position government as the “first user” of new technologies. The public sector should take the lead in adopting AI for the delivery of public services in agriculture, healthcare, and education. By demonstrating the usefulness and accessibility of these tools within government, the state can send a powerful signal to the broader population and help accelerate national adoption.

The success of the AI Impact Summit will be measured not just by the declarations its participants make, but by the structural cooperation that survives past February. The summit offers a rare opportunity to pool global resources to solve the AI workforce crisis, replacing anecdotal evidence of AI adoption with rigorous data and flexible approaches to meet shifting workforce needs. At the summit, New Delhi has the opportunity to transform a week of dialogue into a sustained, collaborative framework that can help enable emerging economies to tap the benefits of AI adoption.


Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center.

Further reading

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post How India’s AI talent playbook can provide a blueprint for aspiring AI powers appeared first on Atlantic Council.

]]>
Abdillahi in Le Monde on artificial intelligence (AI) in Africa https://www.atlanticcouncil.org/insight-impact/in-the-news/abdillahi-in-le-monde-on-artificial-intelligence-ai-in-africa/ Fri, 30 Jan 2026 14:00:00 +0000 https://www.atlanticcouncil.org/?p=902746 On January 30, Yasmine Abdillahi, a nonresident senior fellow with the Africa Center published an article in Le Monde arguing that although Africa accounts for nearly 20% of the world’s population, it contributes less than 1% of the data used to train AI systems. This imbalance, she warns, risks excluding African languages, cultures, and lived […]

The post Abdillahi in Le Monde on artificial intelligence (AI) in Africa appeared first on Atlantic Council.

]]>

On January 30, Yasmine Abdillahi, a nonresident senior fellow with the Africa Center published an article in Le Monde arguing that although Africa accounts for nearly 20% of the world’s population, it contributes less than 1% of the data used to train AI systems. This imbalance, she warns, risks excluding African languages, cultures, and lived realities from artificial intelligence.

More about our expert

The post Abdillahi in Le Monde on artificial intelligence (AI) in Africa appeared first on Atlantic Council.

]]>
Inside the biggest Davos debates (other than Greenland) https://www.atlanticcouncil.org/dispatches/inside-the-biggest-davos-debates-other-than-greenland/ Mon, 26 Jan 2026 21:47:35 +0000 https://www.atlanticcouncil.org/?p=901265 As the annual World Economic Forum in Switzerland ends, the issues discussed—from tariffs to AI—will continue to play out in all corners of the world.

The post Inside the biggest Davos debates (other than Greenland) appeared first on Atlantic Council.

]]>

Bottom lines up front

DAVOS—This week Davos, Switzerland, returns to being a charming ski town. The shops and restaurants—temporarily rented by every major tech company on the planet to host events and receptions—return to their owners and will soon be filled with tourists on holiday.  

But what happened at the 2026 World Economic Forum won’t soon be forgotten. This was the year the forum changed policy. As one attendee told us on her way off the mountain, “Imagine what would have happened this week if Trump didn’t have to meet the Europeans face to face.” It’s an intriguing, if chilling, thought.

While Trump’s speech this past Wednesday and his subsequent decision to backtrack on Greenland threats drove the roller coaster news cycle of the week, there were several other notable moments that may have much longer term—and more important—policy repercussions. Here’s what we saw on the ground:

The two Davoses

Davos is always two different things at once. “Business Davos” is the place where executives huddle in Swiss office buildings negotiating deals far away from the TV cameras. This is, actually, what brings most people to the mountain year after year. That Davos traditionally operated independently from “geopolitical Davos.” That’s the Davos most people are familiar with—leaders from around the world speaking in the Congress Center, and academics, journalists, and think tankers debating on panels. 

Most years, those two Davoses can operate in their own spheres. But not this year. Last Monday, as markets swung sharply negative on the Greenland news, business Davos had its eyes glued to the Congress Center. Leaders of some of the largest companies in the world lined up and waited just like everyone else to get a seat. Suddenly, everyone was an expert on Nuuk, the Arctic, and whether military leases were a viable compromise. It was a reminder of a big lesson of the past few years—from the COVID-19 pandemic to Russia’s invasion of Ukraine—that finance and national security are deeply interconnected. In fact, there’s a good word for that—geoeconomics. 

The new reality of tariffs

One year ago Davos attendees watched Trump’s inaugural address and then listened to him virtually address the forum. He hardly said the word “tariffs” once between the two speeches, and the delegates decided that his threats during the campaign were just threats. What a difference a year makes. After twelve months of the biggest shock to the global trading system in decades, which left the world facing the long-term prospect of the US economy having a 10 percent or higher tariff rate, reality settled in on the mountain. Gone was the optimistic talk about how deregulation was going to lead to an investment boom. In its place was chatter about finding new trade arrangements with emerging markets, and forecasting what would happen if the Supreme Court rules against Trump in the tariff case. 

The risk and rewards of artificial intelligence

Few topics were more in the air in Davos than artificial intelligence (AI). Almost every billboard and storefront had a reference to AI—whether for supply-chain efficiency or content creation. On the surface, businesses wanted to project confidence, with AI positioned as the engine of future growth. But step inside these company events and a different picture emerged. Many featured chief risk officers or chief ethics officers, titles that barely existed a few years ago, grappling with questions around the different types of “risks,” whether those were geopolitical risks, economic risks, or climate risks. There was a stark contrast between the glossy AI optimism outside and the sober risk assessment on the inside of these conversations, and a reminder that for all the promise of growth, the industry knows the hard questions are just beginning.

More than a transatlantic affair

On the main stage and in the global news cycle, this Davos felt like a US–Europe affair. Tariffs announced and abandoned on European allies. French President Emmanuel Macron responding directly. US Treasury Secretary Scott Bessent outlining the health of the US economy. California Governor Gavin Newsom sparring rhetorically with Washington. For audiences watching from afar, it was easy to conclude this was a narrow, transatlantic Davos.

On the ground, however, the picture was far more global. Brazil House, India House, Indonesia House, and a dozen country pavilions were packed with programming all day. A large Pakistani delegation arrived on its own official shuttle bus. Philippines House ran cultural programs, including concerts featuring traditional music, alongside policy panels.

India, in particular, projected quiet confidence. Officials framed the country as a durable pillar of global growth, especially on AI. China maintained a low profile, with Chinese Vice Premier He Lifeng offering brief remarks about Beijing’s willingness to buy more foreign goods and services—a notably muted presence compared to previous years.

Yet the US footprint on the promenade was impossible to miss. The US delegation was one of the largest in Davos, anchored by a sprawling USA House with a dense schedule of events and receptions. From the number of officials and security on the ground to the symbolic bald eagle overlooking the promenade, the message was clear: US influence loomed over nearly every discussion. For all the activity in country pavilions, this remained a global forum shaped by great-power rivalry.

From Canada, a clarion call 

Canadian Prime Minister Mark Carney delivered one of the most consequential addresses during Davos, declaring that the post–Cold War rules-based international order is “in the midst of a rupture, not a transition.” Carney argued that great-power rivalry, economic coercion, and unilateral actions by dominant states (not mentioning Trump by name) have weakened longstanding global norms and institutions. He called on middle powers to work together to protect their interests and build new cooperative frameworks rooted in shared values. Simply going along to get along is no longer the answer, he argued. Whether other middle powers respond to that message may be the single most important question from this year’s forum. 

Descending the mountain

As delegates packed their bags and headed down the mountain, few were under any illusions. The convergence between business Davos and geopolitical Davos is the new reality. The tightrope that companies are walking is not getting any less precarious. And the question of whether economic cooperation can survive an era of rising geopolitics remains very much unanswered.

Next year’s forum may face these same tensions. The key question is whether the world will have found ways to navigate them successfully or whether the rupture Carney described will have deepened further.

The post Inside the biggest Davos debates (other than Greenland) appeared first on Atlantic Council.

]]>
Eight ways AI will shape geopolitics in 2026 https://www.atlanticcouncil.org/dispatches/eight-ways-ai-will-shape-geopolitics-in-2026/ Thu, 15 Jan 2026 23:35:20 +0000 https://www.atlanticcouncil.org/?p=899346 Experts from the Atlantic Council Technology Programs share their perspectives on what to expect from AI around the globe in the year ahead.

The post Eight ways AI will shape geopolitics in 2026 appeared first on Atlantic Council.

]]>
The events of 2025 made clear that the question is no longer whether artificial intelligence (AI) will reshape the global order, but how quickly—and at what cost.

Throughout the year, technological breakthroughs from both the United States and China ratcheted up the competition for AI dominance between the superpowers. Countries and companies raced to build vast data centers and energy infrastructure to support AI development and use. The scramble for cutting-edge chips pushed Nvidia’s valuation past five trillion dollars—the first company to reach that milestone—even as concerns mounted over circular financing and the question of how much the AI boom is founded on hype versus reality. Meanwhile, policymakers grappled with the balance between safety, security, and innovation and how to manage possible labor disruptions on the horizon.

As 2026 begins, rapid AI integration threatens to inject even more unpredictability into an already fragmented global order. Below, experts from the Atlantic Council Technology Programs share their perspectives on what to expect from AI around the globe in the year ahead.

Click to jump to an expert prediction: 

Emerson Brooking: AI poisoning goes mainstream

Tess deBlanc-Knowles: The US pushes AI tech exports to counter China

Konstantinos Komaitis: AI governance turns global

Ryan Pan: The US-China AI race intensifies in a multipolar world

Esteban Ponce de León: AI challenges human judgment

Trisha Ray: Countries go all in on ‘sovereign AI’

Mark Scott: The battle of the AI stacks escalates

Kenton Thibaut: China doubles down on AI-powered influence operations


AI poisoning goes mainstream

Russia’s Pravda network of websites has published millions of articles targeting more than eighty countries. These sites launder and amplify content from Russian state media, seeking to legitimize Russian military aggression while casting doubt on Western support for Ukraine. Most of these articles will never be viewed by a human. Instead, they seem intended to target the web crawlers that scour the internet for training data to feed to insatiable AI models.

And the strategy is working. Last year, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) and CheckFirst demonstrated how mass-produced Pravda articles were cited in Wikipedia, X Community Notes, and responses from major chatbots. Parallel research by Anthropic and the United Kingdom’s AI Safety Institute has shown how trace amounts of faulty data can effectively “poison” even very large models. People increasingly turn to AI systems to understand current events. If an AI model’s knowledge has been altered by sources intended to deceive, then the users’ will be, too.

In 2026, the issue of AI poisoning will break into the mainstream. Because of a roughly two-year lag in AI training data (many AI models are still waiting for the results of the 2024 US presidential election, for instance), these AI-targeted propaganda campaigns are about to start manifesting more often. And because one cannot reliably audit what’s inside a deployed AI model, the result will be a staggering research and policy challenge.

Digital policy experts, including the DFRLab, have spent a decade learning to identify, explain, and expose online disinformation where people can see it. This is online disinformation where they can’t.

Emerson Brooking is the director of strategy and a resident senior fellow with the Atlantic Council’s Digital Forensic Research Lab.


The US pushes AI tech exports to counter China

In 2026, the United States will double down on exporting the US tech stack as the cornerstone of its international AI strategy. In December 2025, US President Donald Trump set the tone with his decision to allow Nvidia to export its advanced H200 chips to China, a clear endorsement of the view that the United States wins when the world builds and deploys AI using US technology.

Published days before the Nvidia decision, the Trump administration’s National Security Strategy makes this explicit: “We want to ensure that US technology and US standards—particularly in AI, biotech, and quantum computing—drive the world forward.” This framing echoes the AI Action Plan the administration released in July 2025, which stated that the “United States must meet global demand for AI by exporting its full AI technology stack,” warning that a failure to do so would be an “unforced error.”

In 2026, expect to see the United States sign more AI-focused partnerships like those forged with Saudi Arabia and the United Arab Emirates in 2025, alongside efforts to counter China’s growing influence in emerging markets. But as the United States makes this push, China holds some key advantages. Its lead in open-source AI models and focus on applied AI could prove to be the winning formula for capturing global market share with free models and deployment-ready technologies.

Tess deBlanc-Knowles is the senior director of Atlantic Council Technology Programs.


AI governance turns global

In 2026, AI governance enters its first truly global phase with the United Nations–backed Global Dialogue on AI Governance and Independent International Scientific Panel on AI. For the first time, nearly all states have a forum to debate AI’s risks, norms, and coordination mechanisms, signaling that AI has crossed into the realm of shared global concern.

Yet this ambition unfolds amid acute geopolitical tension: The European Union pushes a rights- and risk-based regulatory model, while the United States favors voluntary standards to preserve innovation and security flexibility. For its part, China promotes inclusive cooperation while defending state control over data and AI deployment. Smaller and developing states gain a voice but remain structurally dependent on the major powers that control the bulk of AI talent, capital, and computing power.

The result is a fragile, uneven global framework. States converge on scientific assessments, transparency norms, and voluntary principles, but they avoid binding limits on high-risk AI uses such as autonomous weapons, mass surveillance, or information manipulation. Coordination emerges, but the core strategic competition remains unresolved, producing a governance architecture that manages risks at the margins while leaving rival models largely intact.

By the end of 2026, the Global Dialogue will likely have made AI governance global in form but geopolitical in substance—a first test of whether international cooperation can meaningfully shape the future of AI or merely coexist alongside competing national strategies. This juncture offers states an opportunity to demonstrate leadership by strengthening institutional capabilities and collaborative mechanisms, fostering a global AI governance framework that is more coherent, equitable, and universally engaged.

Konstantinos Komaitis is a resident senior fellow with the Atlantic Council’s Democracy + Tech Initiative.


The US-China AI race intensifies in a multipolar world

The year ahead will see an even fiercer competition over AI dominance between the world’s two largest powers—the United States and China—while middle powers gradually close the gap in the race. China’s DeepSeek started off this year with a research paper on a new AI training method to efficiently scale foundational models and reduce costs. This publication comes almost exactly a year after the headline-making paper it released in January 2025, which was followed by the launch of DeekSeek-R1. The timing of this year’s new publication signals that the company will launch new models and continue shaping the world’s AI industry this year.

In 2026, expect China to double down on its open-source AI strategy to influence the world’s AI infrastructure. (Several major US tech companies are already using Chinese large language models in their applications.) The United States and China may also engage in further trade retaliation in the AI supply chains in light of recent developments in Venezuela, from which Chinese companies had gained access to rare earth minerals crucial to developing the AI stack. The Trump administration’s recent claims regarding Colombia, from which China also sources rare earth elements, could make Latin America the next technology battleground between the two powers.

But what about powers beyond the United States and China? In 2026, look for Europe to increase its AI defense investments even more than it did in 2025. Middle powers, notably India, will see their AI capability greatly improved this year, as US tech giants have recently pledged billions in investments in India’s AI capabilities. 

The AI race in 2026 will still be defined by a multipolar order. Nevertheless, the United States and China will continue to yield the greatest influence.

Ryan Pan is a program assistant with the Atlantic Council’s GeoTech Center.


AI challenges human judgment

In 2026, human–AI interaction will likely challenge human judgment and identity more deeply than in any year to date. This is not only because AI models are demonstrating increasingly complex capabilities, but also because AI-generated content can be so emotionally charged in today’s polarized information environment.

Online sources and social media have shown how polarization can be deliberately targeted, and the use of AI to generate fabricated or distorted content adds a new layer to how social and political events are interpreted. AI content is reshaping the dynamics of both manipulation and what could be described as a “misinformation game,” in which techniques such as the deployment of AI slop and the memeification of events are used to mock adversaries and amplify key propaganda narratives. For example, in June 2025, amid the Israel-Iran escalation, AI became the new face of propaganda. This included graphic and sensational AI-generated fake content, such as fabricated missile strikes, military hardware, religious and national symbols, and memes. But it also included more sophisticated fabrications of CCTV footage that became increasingly difficult to debunk.

In the first days of 2026, as the Trump administration captured Venezuelan strongman Nicolás Maduro, the use of AI to generate media content increased drastically. While much of this content was humorous or satirical in nature, it nonetheless illustrates emerging usage patterns, as playful AI-generated media can still shape perceptions of power and blur the line between satire, manipulation, and propaganda. Whether fabricated content aims to provoke humor or confusion, human judgment will face new challenges in the year ahead.

This challenge to human judgment and identity extends beyond misinformation. In 2026, the AI landscape may begin to show early signs of benchmark saturation, in which models converge at near-maximum scores on established capability tests, collapsing the measurable differences between them. This matters for the information environment because the same logic applies: If distinguishing real from fabricated content becomes difficult, then so too does distinguishing what humans uniquely contribute from what AI can replicate. The implications extend to professional identity and how to understand individual value and competence.

Esteban Ponce de León is a resident fellow with the Digital Forensic Research Lab.


Countries go all in on ‘sovereign AI’

There are unprecedented amounts of capital flowing in to meet the anticipated demand for AI. Last year, for instance, kicked off with Trump’s announcement of Stargate, with the aim of investing $500 billion in AI infrastructure over five years. The principle driving this trend is straightforward: Countries think they must control AI before it controls them. Consequently, there was a wave of sovereign AI announcements in 2024 and 2025.

That momentum will only grow in 2026, starting with the launch of India’s sovereign large language model at the AI Impact Summit in February. Nations are seeking sovereign AI to strengthen their domestic economies, protect national security, mitigate geopolitical shocks, and reflect national values. However, there’s a catch: Not every country can, or should, try to build every part of the AI stack on its own. Trying to recreate from scratch everything from data centers to models is expensive, redundant, and impractical. Nations will need to choose what to build, what to buy, and where partnerships make more sense than going solo.

Trisha Ray is an associate director and resident fellow with the GeoTech Center.


The battle of the AI stacks escalates

As AI becomes more central to countries’ economic prospects, national policymakers will likely seek to impose greater control over critical digital infrastructure. This infrastructure includes compute power, cloud storage, microchips, and regulation, and it is central to how emerging AI technology will develop in 2026. For the world’s largest digital powers—the United States, the European Union, and China—the push to control this infrastructure will likely evolve into a battle of the “AI stacks”—increasingly opposing approaches to how such core digital AI-enabling infrastructure functions at home and abroad.

The White House’s AI Action Plan, published in July 2025, made it the stated policy of the federal government to export the US stack to third-party countries, including via potential funding support from the US Department of Commerce for other governments to purchase offerings from the likes of Microsoft, OpenAI, and Nvidia. The European Commission has earmarked billions of euros for so-called AI gigafactories, or high-performance computing infrastructure, from Estonia to Spain, while national leaders also vocally called for a “Euro stack.” The Chinese Communist Party is urging local firms to forgo Western AI know-how and rely instead on domestic alternatives from companies such as Alibaba or Huawei.

The rest of the world will have to navigate these increasingly rivalrous approaches to AI infrastructure at a time when all countries seek greater control of so-called digital public infrastructure—that is, the underlying hardware and, increasingly, software needed to power complex AI systems. How these different AI stacks interact with each other will be critical to how the technology develops over the next twelve months.

Mark Scott is a senior resident fellow with the Atlantic Council’s Democracy + Tech Initiative.


China doubles down on AI-powered influence operations

In 2026, the People’s Republic of China’s (PRC’s) AI-enabled disinformation efforts are likely to intensify in scale, persistence, and technical sophistication, particularly those targeting Taiwan. PRC actors are already using AI-generated audio, video, and text, distributed through networks of fake accounts and contracted private firms, to conduct “cognitive warfare” campaigns aimed at shaping political perceptions and voter behavior. These campaigns prioritize volume, localization, and algorithmic exploitation, and they are increasingly designed to be continuous rather than episodic. As AI-generated content is blended with human-curated messaging and commercial infrastructure, PRC-linked operations will become harder to detect and attribute, reflecting a shift toward more deniable, adaptive, and professionalized influence operations.

At the same time, Beijing is expected to pair these activities with defensive diplomatic messaging that rejects allegations of PRC-linked disinformation or cyber operations and reframes such claims as politically motivated attacks. This pattern reinforces a broader hybrid strategy in which AI-enabled influence operations, cyber activity, and diplomatic signaling are tightly integrated. In 2026, PRC disinformation campaigns are likely to focus less on overt propaganda and more on shaping narratives around crises and cyber incidents, contesting blame, eroding trust in attribution, and influencing strategic decision-making outcomes.

Kenton Thibaut is a senior resident fellow with the Democracy + Tech Initiative. 

The post Eight ways AI will shape geopolitics in 2026 appeared first on Atlantic Council.

]]>
Ukraine’s robot army will be crucial in 2026 but drones can’t replace infantry https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-robot-army-will-be-crucial-in-2026-but-drones-cant-replace-infantry/ Thu, 08 Jan 2026 21:33:37 +0000 https://www.atlanticcouncil.org/?p=897956 Ukraine's growing robot army of land drones will play a vital role in the country's defense during 2026, but they are not wonder weapons and cannot serve as a miracle cure for Kyiv’s manpower shortages, writes David Kirichenko.

The post Ukraine’s robot army will be crucial in 2026 but drones can’t replace infantry appeared first on Atlantic Council.

]]>
Ukrainian army officials claim to have made military history in late 2025 by deploying a single land drone armed with a mounted machine gun to hold a front line position for almost six weeks. The remote-controlled unmanned ground vehicle (UGV) reportedly completed a 45-day combat mission in eastern Ukraine while undergoing maintenance and reloading every 48 hours. “Only the UGV system was present at the position,” commented Mykola Zinkevych of Ukraine’s Third Army Corps. “This was the core concept. Robots do not bleed.”

News of this successful recent deployment highlights the potential of Ukraine’s robot army at a time when the country faces mounting manpower shortages as Russia’s full-scale invasion approaches the four-year mark. Robotic systems are clearly in demand. The Ukrainian Ministry of Defense has reported that it surpassed all UGV supply targets in 2025, with further increases planned for the current year. “The development and scaling of ground robotic systems form part of a systematic, human-centric approach focused on protecting personnel,” commented Defense Minister Denys Shmyhal.

Stay updated

As the world watches the Russian invasion of Ukraine unfold, UkraineAlert delivers the best Atlantic Council expert insight and analysis on Ukraine twice a week directly to your inbox.

The current emphasis on UGVs is part of a broader technological transformation taking place on the battlefields of Ukraine. This generational shift in military tech is redefining how modern wars are fought.

Since the start of Russia’s full-scale invasion in February 2022, homegrown innovation has played a critical role in Ukraine’s defense. Early in the war, Ukrainian troops deployed cheap commercial drones to conduct reconnaissance. These platforms were soon being adapted to carry explosives, dramatically expanding their combat role. By the second year of the war, Ukraine had developed a powerful domestic drone industry capable of producing millions of units per year while rapidly adapting to the ever-changing requirements of the battlefield.

A similar process has also been underway at sea, with Ukraine deploying domestically produced naval drones to sink or damage more than a dozen Russian warships. This has forced Putin to withdraw the remainder of the Black Sea Fleet from occupied Crimea to Russia itself. Recent successes have included the downing of Russian helicopters over the Black Sea using naval drones armed with missiles, and an audacious strike on a Russian submarine by an underwater Ukrainian drone.

By late 2023, drones were dominating the skies over the Ukrainian battlefield, making it extremely dangerous to use vehicles or armor close to the front lines. In response to this changing dynamic, Ukrainian forces began experimenting with wheeled and tracked land drones to handle logistical tasks such as the delivery of food and ammunition to front line positions and the evacuation of wounded troops.

Over the past year, Russia’s expanding use of fiber-optic drones and tactical focus on disrupting Ukrainian supply lines has further underlined the importance of UGVs. Fiber-optic drones have expanded the kill zone deep into the Ukrainian rear, complicating the task of resupplying combat units and leading to shortages that weaken Ukraine’s defenses. Robotic systems help counter this threat.

Remote controlled land drones offer a range of practical advantages. They are more difficult to jam electronically than aerial drones, and are far harder to spot than trucks or cars. These benefits are making them increasingly indispensable for the Ukrainian military. In November 2025, the BBC reported that up to 90 percent of all supplies to Ukrainian front line positions around Pokrovsk were being delivered by UGVs.

In addition to logistical functions, the Ukrainian military is also pioneering the use of land drones in combat roles. It is easy to see why this is appealing. After all, Ukrainian commanders are being asked to defend a front line stretching more than one thousand kilometers with limited numbers of troops against a far larger and better equipped enemy.

Experts caution that while UGVs can serve as a key element of Ukraine’s defenses, they are not a realistic alternative to boots on the ground. Former Ukrainian commander in chief Valerii Zaluzhnyi has acknowledged that robotic systems are already making it possible to remove personnel from the front lines and reduce casualties, but stressed that current technology remains insufficient to replace humans at scale.

Despite the advances of the past four years, Ukraine’s expanding robot army remains incapable of carrying out many military functions that require infantry. When small groups of Russian troops infiltrate Ukrainian positions and push into urban areas, for example, soldiers are needed to clear and hold terrain. Advocates of drone warfare need to recognize these limitations when making the case for greater reliance on unmanned systems.

UGVs will likely prove vital for Ukraine in 2026, but they are not wonder weapons and cannot serve as a miracle cure for Kyiv’s manpower challenges. Instead, Ukraine’s robot army should be viewed as an important part of the country’s constantly evolving defenses that can help save lives while raising the cost of Russia’s invasion.

David Kirichenko is an associate research fellow at the Henry Jackson Society.

Further reading

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values, and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia, and Central Asia in the East.

Follow us on social media
and support our work

The post Ukraine’s robot army will be crucial in 2026 but drones can’t replace infantry appeared first on Atlantic Council.

]]>
Engaging generative artificial intelligence in African development https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/engaging-generative-artificial-intelligence-in-african-development/ Thu, 18 Dec 2025 20:03:45 +0000 https://www.atlanticcouncil.org/?p=893977 From classrooms to farming communities, generative artificial intelligence holds great potential for Africa. The question is whether its promise of abundance will reach everyone—or only those already well-connected.

The post Engaging generative artificial intelligence in African development appeared first on Atlantic Council.

]]>

Executive summary

From classrooms to farming communities, generative artificial intelligence (gen AI) holds great potential for Africa. The key question is whether its promise of abundance will reach everyone—or only those already well-connected.

The technology should be regulated with both its strengths and weaknesses in mind, and approached with a healthy dose of skepticism toward corporate advocates; but ignoring the obvious value and use of gen AI makes little sense. Those concerned with development in Africa must engage with the technology and consider its potential for reducing poverty and strengthening education, alongside other priorities such as digitizing and preserving languages.

Gen AI poses real risks and requires guardrails, especially for young people. Yet disengagement carries risks of its own: if gen AI is not actively shaped and governed, the very youths and communities it could benefit—or harm without proper controls—risk being left behind. Not engaging with gen AI would be not only harmful but also patronizing. More conversation is needed between those inventing and implementing gen AI models and those who work in development assistance, including actors involved in shaping and advancing the UN Sustainable Development Goals (SDGs). Two of these SDGs—ending poverty and providing quality education—closely mirror gen AI’s promise, or boast, of future “abundance” and human or even superhuman intelligence. The SDG and gen AI camps must explore what each can realistically offer the other.

View the full issue brief

Related content

In partnership with

Explore the program

The Africa Center works to promote dynamic geopolitical partnerships with African states and to redirect US and European policy priorities toward strengthening security and bolstering economic growth and prosperity on the continent.

The post Engaging generative artificial intelligence in African development appeared first on Atlantic Council.

]]>
Why exporting advanced chips to China endangers US AI leadership https://www.atlanticcouncil.org/dispatches/why-exporting-advanced-chips-to-china-endangers-us-ai-leadership/ Wed, 10 Dec 2025 18:21:15 +0000 https://www.atlanticcouncil.org/?p=893254 Allowing Chinese companies to purchase high-end AI chips risks degrading the United States’s current edge in aggregate computing power.

The post Why exporting advanced chips to China endangers US AI leadership appeared first on Atlantic Council.

]]>

Bottom lines up front

WASHINGTON—In a Truth Social post on Monday that shook up the global tech race, US President Donald Trump announced his approval for Nvidia to sell its H200 (“Hopper”) series chips to “approved customers” in China, with the United States receiving a 25 percent cut of the revenues.

This marks the latest pendulum swing in the administration’s approach to export controls on advanced artificial intelligence (AI) chips. In July, Trump allowed the sale of Nvidia’s less powerful H20 chips to China with a 15 percent revenue share requirement, pulling back from an April announcement that his administration would ban the sale of those same chips. Even the same morning of Trump’s announcement, the US attorney’s office in Houston trumpeted the disruption of a smuggling operation focused on exporting H200 and the older H100 chips to China. 

In his post on Monday, Trump said that Chinese President Xi Jinping “responded positively” to the decision; on Tuesday a spokesperson for China’s foreign ministry dodged a question about the deal. If Xi is on board, this is significant, since when the H20 controls were lifted, China’s Cyberspace Administration banned Chinese firms from purchasing H20s, citing security concerns. Whether Xi took this step to protect domestic chip manufacturers or as a bet to unlock higher-performing exports (such as the H200s) remains unclear. 

While the H200 far surpasses the capabilities of the H20, it’s still a generation behind Nvidia’s cutting-edge Blackwell chips and will soon be overshadowed by the forthcoming Rubin architecture. Prior to meeting with Xi in October, Trump floated the idea of allowing Blackwell exports. But following the meeting, Trump said that the topic did not come up. Notably, Monday’s announcement stops short of allowing the export of Blackwell chips. 

The Trump administration’s rationale

The Trump administration’s calculus comes down primarily to economics and the belief that projecting US technology abroad strengthens national power. Allowing the export of H200s to China will provide Nvidia access to the world’s largest single market and likely ensures that the next generation of Chinese AI runs on US hardware. 

Proponents of this approach claim this move could slow the development of China’s indigenous AI capabilities by cutting off revenue to companies such as Huawei as sales divert to Nvidia. Under Xi’s leadership, China has undertaken a concerted national strategy to build a domestic chip manufacturing capability and break free from dependence on Western technology. 

The 25 percent cut from sales to the US government gives the administration another means to tout benefits to the taxpayer. Still, recent reports of a special security review that the chips will undergo before export raise questions about how processes will be structured to legally charge this fee. Expect more from the administration in the coming days on how it will navigate this novel approach.

By approving exports of H200 chips but not Blackwell chips, the administration is attempting to strike a compromise position between those who see the advantages of strengthening Nvidia’s global market share and those worried about eroding the United States’ AI advantage.

US President Donald Trump and Chinese President Xi Jinping talk as they leave after a bilateral meeting in Busan, South Korea, on October 30, 2025. (REUTERS/Evelyn Hockstein)

The real implications

The United States and China are locked in an existential race for AI supremacy. Until now, the United States’ one true advantage has been access to cutting-edge compute. 

In recent years, China has proven that it can build frontier models that rival the performance of the leading models in the United States. It produces top AI talent and has cultivated a vibrant AI start-up ecosystem. Chinese companies have access to the same data as their US counterparts while also benefiting from internal data, such as that stemming from China’s surveillance state and widespread AI deployment. China also has a leg up in terms of energy generation, producing more than twice the electricity that the United States did in 2024.

Where the United States maintains a definitive edge is on aggregate computing power. As of mid-2025, the US share of global AI computing power reached 74 percent, with China at only 14 percent. Aggregate computing power is critical for training new frontier models, supporting the widespread use of AI and new applications of the technology, and exploring new architectures and pathways toward more powerful systems. Recent reporting finds that much of the compute used by companies such as OpenAI is in service of research. 

Allowing Chinese companies to purchase H200 chips will significantly degrade this advantage. Chinese companies will likely pursue a strategy of scale, networking H200 chips into clusters that could rival the performance of Blackwell chips, albeit with a higher price tag. This is a strategy already widely employed in China to maximize the performance of their domestically produced, lower-end chips. With access to H200 chips, Chinese firms will be positioned to train the next generation of models and provide cloud-computing services beyond their borders. This would put them into competition with US providers for international market share and fundamentally undermine the Trump administration’s goal of establishing the US AI tech stack as the global standard. 

Estimates for how far China’s domestic chip manufacturing capability lags that of the United States range from five to fifteen years. Currently, China cannot produce at scale to meet domestic demand. The Trump administration has estimated, for example, that major Chinese tech giant Huawei can only produce 200,000 of its Ascent AI chips this year, which is only 1-2 percent of estimated US production. Access to H200s could bridge this gap, allowing Chinese AI companies to compete globally until domestic manufacturing capability has reached parity. At which point, they would almost certainly move away from Nvidia. 

From a national security perspective, many fear that H200 chips will not only bolster Chinese industry but also the People’s Liberation Army’s defense capabilities. Given China’s civil-military fusion doctrine, restricting sales to approved corporate entities likely won’t prevent military use. 

Finally, the question remains whether Nvidia has the capacity to serve the Chinese market without eroding its ability to meet demand from US companies. Already, surging demand from data center build-outs is putting stress on the supply chain, and research universities are struggling to procure chips to support crucial research and education efforts. 

As China moves forward to aggressively integrate AI into every aspect of its economy and society, as outlined in its recent “AI plus” initiative, providing the computational fuel to realize this vision will supercharge the United States’ strongest AI competitor, significantly endangering the Trump administration’s own global AI ambitions.

The post Why exporting advanced chips to China endangers US AI leadership appeared first on Atlantic Council.

]]>
Cloudbusting: Policy for evaluating trust in compute infrastructure https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/cloudbusting-policy-for-evaluating-trust-in-compute-infrastructure/ Wed, 03 Dec 2025 14:00:00 +0000 https://www.atlanticcouncil.org/?p=890037 A global cloud built on technical assurances—not geography—is essential to securing critical infrastructure and the future of AI.

The post Cloudbusting: Policy for evaluating trust in compute infrastructure appeared first on Atlantic Council.

]]>

Table of contents

Executive summary

Placing trust in cloud computing is no longer optional. Cloud computing is essential to critical infrastructure, commercial, and government operations.1 Outages over the past few months emphasize the vitality of cloud services to modern economies and essential government services.2 As cloud adoption and transformation continue, policy attention should shift from the question of whether to simply trust cloud computing to the methods for establishing and verifying that trust.  

The stakes will only continue to increase as artificial intelligence systems, which have been identified by the US, China, and the European Union as essential national priorities, continue to utilize cloud infrastructure for development and deployment.3 Sophisticated and unsophisticated threat actors continue to target cloud computing systems, striking rapidly, globally, and opportunistically. 4 These cloud incidents can result in data theft, financial losses, and operational disruptions. Even accidents require rapid coordination and information sharing to ensure systems can get back up and running as quickly as possible.5

Ensuring trust in cloud computing systems between nations and cloud providers is an essential task for modern economies, national security, and ways of life. This report argues that cloud trust will require collaboration between providers, nation states, and customers, but should not start with location requirements and geographic restrictions on access to cloud computing. Instead, national cloud policies should prioritize criteria of trust that verifiably and meaningfully improve the security of customer cloud operations. 

Introduction

As a component of artificial intelligence deployment, development, and use, as well as an enabling technology for business, government, and critical infrastructure functions, cloud computing is a fixture of cyber policy discussions. Within the emerging AI supply chain, cloud services are the means of deploying ‘compute’, a critical resource powering models in both training and inference throughout the global economy.6 This paper aims to offer a nuanced discussion of cloud computing through the consolidation of a shared policy vocabulary and common technical principles to describe and understand trust in cloud computing. By adding detail to how cloud computing is portrayed, policymakers can more effectively understand the systems they’re expected to trust. By adding granularity to existing discussions of cloud computing, policymakers can more effectively understand the systems they are expected to trust and better appreciate how policy shapes both those systems and that trust.  

Attackers continuously scan public-facing devices and infrastructure for misconfigurations and weaknesses.7 Countries with advanced cyber capabilities, including Russia, China, North Korea, and Iran, show no signs of ceasing cyber threat activity.8 The pace of vulnerability exploitation continues to accelerate, and within days of their public disclosure, attackers weaponize vulnerabilities to gain access to and exploit cloud environments.9 Meanwhile, policy debates often focus on limiting access from cloud providers to customer information, instead of ensuring the security of such resources and information from adversary access. 

Developing a more compelling model and framework for trust in cloud computing requires bridging debates around localization, digital sovereignty, and technical security, as well as emerging trends in artificial intelligence development and deployment. The risks posed to the cloud ecosystem by the unintended consequences of policy intervention are significant, but so too are the consequences of untrusted and insecure cloud deployments. 

A shared cloud computing vocabulary 

This section will establish essential vocabulary and terms for cloud computing. The terms and characteristics defined here are non-exhaustive but are a useful starting point for cloud policy discussions. Cloud computing describes a model where service providers offer metered, on-demand access to computing resources.10 Instead of operating their own servers and facilities, customers specify workloads—sets of defined computing tasks, utilizing computing resources—for which cloud providers handle implementation and execution.11 Sometimes the resources used by these workloads are virtual versions of physical resources (“virtual machines” or VMs), but often they are abstract resources or functions, such as data storage or analysis services, and are not rooted in or wedded to specific hardware or software implementations. Cloud providers must manage and architect both individual hardware and software components and the protocols, pathways, and constraints of their communications and interactions. To ensure visibility and reliability, cloud providers must build systems that carefully manage changes, catch and alert on outages, and gracefully handle errors or failures. This model of access to computing resources includes general access to applications and data storage, but also specialized services for specific customers or sectors. 

Cloud providers aggregate and distribute workloads across computing resources. Centralized control over the design, development, testing, and maintenance of both hardware and software enables cloud providers to reduce costs while optimizing their services for the performance and reliability needs of customers.12 End-to-end control over cloud systems also allows costly experimentation with specially-tailored or developed software and hardware, including custom advanced semiconductors (chips, silicon).13 Prominent cloud providers, including Microsoft, Google, and Amazon, derive advantages from both the scale of their cloud infrastructure and their expertise in adjacent fields and product offerings (earning them the name hyperscalers”).14 For example, Google’s development of its distributed data storage and processing platform BigTable was driven by the computing demands of its search product.15 Embedded within the configurations and offerings available to customers are a cascading sequence of impactful decisions made by cloud service providers. Balancing incentives, imperatives, and resource constraints creates an ever-evolving system of systems that is more than the sum of its parts. 

Customers of cloud providers can adjust their use of computing resources elastically. Instead of purchasing physical hardware, launching software, and monitoring it directly for power outages or reliability issues, customers can outsource those responsibilities to cloud providers. This allows companies to focus on their unique products and services instead of monitoring and maintaining networking, energy, and processing equipment. Decomposing workloads into discrete tasks, scheduling tasks for individual hardware components, and monitoring the execution of those tasks for errors, delays, or hardware failures requires carefully optimized software, specific hardware, and dedicated research capabilities.16 Within nano- or milliseconds, cloud computing systems communicate and synchronize across oceans and continents, ensuring availability and reliability despite frequent outages, hardware failures, and natural disasters. Using metered, elastic cloud services also allows companies to “scale” their computing resource footprint in response to demand.17 Seasonal surges, such as a boom in visits to e-commerce sites around the holiday season, or daily and weekly patterns, such as workplace software peaking in use during business hours Monday-Friday, no longer require projections months ahead of time, and the build-up of infrastructure to handle maximum demand, which then sits idle outside of specific moments. Instead, enterprises can dynamically and automatically adjust their use of computing resources and services through their cloud providers.18 At the global scale of modern cloud systems, cloud providers triage and respond to issues that would be completely unfamiliar to self-hosted cloud operators accustomed to handling only hundreds or thousands of servers.  

Cloud customers and providers optimize the architecture of services to support different computational demands, using distinct technology configurations to execute workloads. Cloud computing architectures dictate “how various cloud technology components, such as hardware, virtual resources, software capabilities, and virtual network systems interact and connect to create cloud computing environments.”19 Workloads such as high-definition video streaming, training AI models, and analyzing and extracting information from data have different requirements for synchronicity, availability, reliability, and error tolerance, which demand different choices of software and hardware to balance tradeoffs. By optimizing cloud infrastructure and systems for different tasks, cloud providers can utilize heterogeneous components to their full relative advantages.  

As an example, to ensure rapid access to cloud resources, providers maintain and offer content delivery networks (CDNs)—networks of servers and computing resources distributed worldwide to minimize the distance and latency (time delay) between cloud infrastructure and end-users.20 Cloud providers also maintain points of presence, or edge locations, where their infrastructure connects with internet service providers, on-premise customers, or other cloud providers.21 These points of connection include Internet Exchange Points (IXPs) and other co-location services, a subset of which are sometimes referred to as peering locations.22 Network infrastructure, including edge servers, is a critical vantage point for information useful for security monitoring and incident response. Security practices involving network infrastructure range from mitigating attacks that attempt to overwhelm servers with large amounts of requests to limiting unauthorized access to data and cloud resources.23

Cloud computing and artificial intelligence 

Cloud computing is involved in AI development and deployment at every stage, from providing data storage and structures to enabling interactions between models and users, all while serving as a central hub of monitoring and evaluation for AI systems. Artificial intelligence companies have close financial and technical relationships with hyperscale cloud providers, and cloud providers themselves develop their own AI models and integrate them with other products. This section will give a brief overview of the importance of cloud computing to artificial intelligence development and deployment as a component of the broader compute infrastructure used in the development and deployment of AI systems. 

Emerging players, sometimes referred to as neo-clouds, also offer cloud computing services specific to artificial intelligence workloads. CoreWeave, Lambda, Crusoe, and Nebius all operate under this model.24 These companies are financially intertwined with both existing hyperscale cloud providers and key chipmaker NVIDIA. NVIDIA has invested in both Lambda and CoreWeave, in addition to its own quasi-cloud offering, which is built on the infrastructure of other cloud service providers.25 Oracle has contracted Crusoe to build out compute offerings for OpenAI as part of the Stargate project.26 Microsoft was responsible for 62 percent of CoreWeave’s 2024 revenue, while Google recently inked a deal to use CoreWeave to deliver computing resources to OpenAI.27 These interactions and overlaps all complicate the cloud ecosystem, creating new, interdependent players and novel connections among long-established entities. These new relationships could complicate existing patterns of information sharing and incident response practices, while emerging players have yet to establish long-term track records of security and reliability.  

Hyperscale cloud providers have also invested extensive resources in creating and expanding cloud offerings to support AI workloads and to provide access to AI models for their customers within cloud offerings. Examples include AWS’s managed container offerings, which Anthropic uses to execute training and inference workloads at “ultra” scale, as well as tailoring of existing services, plugins, monitoring agents, credentials, and caching features.28 AWS’s Bedrock offering provides access to several models, including Anthropic’s.29 Microsoft’s Azure managed cloud offerings monitor, orchestrate, and execute AI workloads, including inference for OpenAI’s models.30 Google Cloud’s Cloud TPU platform includes a compiler, managed software frameworks, and custom chips designed to accelerate AI workloads and is used both internally at Google and by companies like Cohere, Stability AI, and Character AI.31

Scarcity or lack of access to key computing resources specific to artificial intelligence could also drive customers to overlook security requirements, focusing instead on rapid access to essential computing power. The increasing compute demands of AI firms and the growth of niche cloud computing service companies, both intertwined with hyperscale cloud providers, will continue to strain existing compute resources such that cloud computing policy interventions run a growing risk of compromising a fragile ecosystem.  

Policymaking in this sector has largely focused on advanced semiconductors, particularly NVIDIA GPUs, as the principal component of AI compute, from the Biden administration’s AI diffusion rule to the Trump administration’s AI Action Plan.32  Proposals have also examined the challenges of securing model weights, managing the flow of advanced semiconductors used in AI training development, and acquiring energy and land needed to construct datacenters.33  

However, limited attention has focused on the risks and opportunities of cloud computing’s role in AI development and deployment, and as an essential component of the AI supply chain itself. Efforts to secure the cloud computing ecosystem can protect sensitive intellectual property involved in AI development in deployment, including model weights and proprietary details of both AI use and research methods and practices used to develop frontier AI models. Conversely, policies and security practices that hamper efforts to secure cloud computing infrastructure could jeopardize the security of AI development and deployment.  

Building trust

Trust, in this paper, refers to both the ability of cloud customers to ensure that their cloud configurations are secure from external threats and from excessive interference or access from cloud providers themselves. Quickly verifying trustworthiness after a violation is paramount for customers wanting to keep up with attackers. This section will discuss the challenge of establishing trust in cloud computing systems. Miscommunication and misalignment regarding trust have immediate consequences for cloud customers, who often bear the costs of security incidents. 

Threat intelligence from cloud security firms suggests that the pace of incidents is increasing, with a 2024 Google Cloud report finding only five days of average observed time between the disclosure and exploitation of vulnerabilities, down from 32 days in 2023.34 Another 2023 report from Orca Security found that it took only two minutes for AWS encryption keys that were publicly exposed on GitHub to be used by threat actors.35 Sophisticated attackers have targeted companies, such as Cloudflare, that specialize in cloud network infrastructure, stealing credentials to access documentation and source code.36 Advisories from cybersecurity companies and intelligence agencies indicate that organizations persistently experience breaches from sophisticated, nation-state-sponsored threat actors who utilize publicly known vulnerabilities as part of a global espionage strategy.37 Meanwhile, trust deficits that result from customers’ lack of trust in cloud providers, or an inability by cloud providers to verifiably demonstrate trustworthiness, hamper both the adoption of cloud capabilities and the ability of organizations to prevent and respond to security incidents. When trust criteria are insufficient or incomplete, preventable incidents can occur at breathtaking speed.  

In policy contexts, trust frequently centers on an entity-based definition. The National Institute of Standards and Technology (NIST) notes that trust is “a belief that an entity meets certain expectations and therefore, can be relied upon.”38 A focus on entities can lead to technology policies focused on static, easily verifiable attributes, such as the national origin or corporate headquarters of cloud providers, from which policymakers derive restrictions on specific firms or sweeping prohibitions against foreign entities. This dynamic is not exclusive to cloud computing policy and has occurred throughout national security debates over trusted technology, from Kaspersky to Huawei.39 While organizational attributes can provide useful information, requirements exclusively based on entity-based definitions of trust can overlook technical security measures and implementation details that directly affect system trustworthiness, while incentivizing the use of proxy companies and circuitous legal setups.  

Technical communities have developed alternative approaches to trust that emphasize continuous verification instead of static, binary decisions to trust or not trust a technology provider. The zero-trust security model operates on “the premise that trust is never granted implicitly but must be continually evaluated,” according to NIST.40 This model is a shift from a perimeter-based security strategy toward contextually securing and restricting access to dynamic computing resources and assets.41 As an illustrative example, a zero-trust approach would reflect a company’s decision to shift from a sign-in system to enter a building, after which each person would have complete access to move around a building, to an approach where each room or floor requires a special key that only certain people can access, regardless of whether or not the person requesting access is already within the building. However, zero-trust is more of a broad set of principles than a set of specific operational requirements and might not align with existing organizational structures and regulatory frameworks that mandate perimeter-based security approaches. 

Cryptographic and hardware-based verification mechanisms offer another path through technical, not organizational, assurances. Trusted Execution Environments (TEEs) and confidential computing could enable remote attestation of the integrity and confidentiality of data and code.42 Remote attestation and technical assurances can establish trust outside of organizational attributes but require specialized hardware and software implementations that are not currently widely available or cost-effective.43 

These divergent approaches to trust create challenges for cloud providers and customers. A coherent, cohesive approach to cloud trust must bridge different methods while accounting for the scale and complexity of cloud computing. This requires moving beyond simple analogies and one-size-fits-all policies towards frameworks that thoughtfully weigh technical and organizational attributes. The alternative is a fragmented system in which policies undermine the economic and technical benefits of cloud computing without improving security. The costs of insecurity will only grow as the cloud becomes more entwined with AI applications, making the question of ensuring trust in cloud computing increasingly critical. 

Digital sovereignty and data localization

This paper’s focus is on digital sovereignty policies that target cloud infrastructure, such as the promotion of national or local alternatives to cloud providers, the exclusion of foreign cloud providers from specific certifications or sectors, or restrictions on the structure and configuration of cloud deployments within national borders.44 This section will ground this paper’s discussion of trust and security in cloud computing and infrastructure within a contemporary policy debate: the application of digital sovereignty and data localization restrictions to cloud computing.45

In many cases, companies that qualify as hyperscalers also offer search engines, operating systems, social media sites, and ad platforms, which could also be relevant to digital sovereignty debates. Those offerings remain outside of the scope of this paper but could very well have implications for cloud computing if remedies or policies aimed at achieving digital sovereignty goals impacted hyperscale providers and their cloud offerings. 

There are at least three essential characteristics of digital sovereignty and data localization policies with direct implications for cloud computing: the affected country or region, the scope of customers affected, and the criteria for cloud trust. In addition to descriptions, each characteristic will include illustrative examples.  

Table 1: Key characteristics for digital sovereignty policies affecting cloud computing systems

Geography

The first essential characteristic is the geographic region affected by a policy. Typical examples of cloud sovereignty or digital sovereignty policies apply at a national level and are set by a federal policymaking body. 

For example, the French SecNumCloud certification scheme, which includes localization requirements and restrictions on foreign ownership of cloud providers, is in effect within France.46 Attempts to extend sovereignty policies in certification requirements across the EU within the European Union Cybersecurity Certification Scheme for Cloud Services have been unsuccessful so far, facing opposition from Denmark, Estonia, Greece, Ireland, Lithuania, Poland, Sweden, and the Netherlands.47 Outside the EU, digital sovereignty policies appear to remain national in scope, which aligns with the focus of supporters of some digital sovereignty policies in ensuring government control over and visibility into cloud services.  

Scope

Another essential characteristic is the scope of customers or procurers of cloud services affected by digital sovereignty policies. Direct government use of cloud services or use by critical infrastructure sectors like finance and defense have been a focus of digital sovereignty policies. These policies can take the form of explicit bans or prohibitions on critical sector or government use of foreign cloud providers, procurement incentives for local companies, or technical requirements that in effect mandate country or sector-specific cloud configurations. 

Several countries and geographies have experimented with sovereignty and localization requirements specific to critical infrastructure sectors or government use. The Cross Border Data Forum’s 2021 data localization report highlighted requirements for exclusive localization of financial sector information and operations, such as transactions and banking information, in several countries, including South Africa, Turkey, and India.48 The aforementioned French SecNumCloud scheme applies to government agencies and “operators of national importance.”49 South Korea’s Cloud Security Assurance Program (CSAP) applies to public sector cloud use, but debates over its provisions have suggested it could be extended to additional sectors such as healthcare and education.50 

The sensitivity of government and critical infrastructure sector data and operations raises heightened concerns regarding the risks of unauthorized access to information or disruption of services. The sheer size of the government and critical infrastructure sectors’ cloud budgets also creates an appealing policy target, as including requirements or incentives within procurement regimes serves as an intermediary between economy-wide regulations and no regulation at all. Government and critical infrastructure criteria for cloud computing are often thought to induce effects outside of their direct targets, as other companies and organizations incorporate or reference criteria used by those entities in their own cloud procurement decisions.51

Criteria for trust

The final essential characteristic of digital sovereignty policies applying to cloud infrastructure is the criteria for trust that policies reference or create. Criteria of trust can include restrictions on nationality or operational jurisdictions of cloud providers, geographic locations of cloud infrastructure, or specific technical and operational measures, such as the use of encryption or external key management. These criteria can be directly put into force through legislation or through references to external certifications or standards bodies. 

Digital sovereignty policies often seek to ensure that cloud service providers have local physical footprints. Ensuring the physical footprint of a technology provider can create a toehold for further enforcement and oversight, clarifying the obligations of cloud providers to the citizens and laws of different countries. Without a clear presence in the form of personnel or physical infrastructure in a country, it is difficult for governments to enforce regulations or to substantively hold companies accountable for abuses or violations of policy. Russia and Vietnam both adopted policies requiring local offices and representatives for technology companies, which have been described as creating opportunities for government control and coercion.52 Incentives for local data center construction, such as Brazil’s proposed package of incentives and tax breaks for developers, can alternatively focus on the potential economic benefits of localized infrastructure, from collected taxes to construction and maintenance jobs.53

Other localization requirements seek to restrict the physical location of cloud infrastructure. Proponents of data localization argue that restricting the physical location of data, including prohibiting cross-border data transfers, provides security and privacy advantages. Countries around the world have adopted localization measures applicable to various sectors, types of data, or processing requirements. Localization measures mandate restricting operations to cloud infrastructure located within certain geographic boundaries. Often, this manifests as restricting the set of cloud “regions” that companies have access to, while cloud providers recommend structuring applications to span multiple regions and availability zones.“54  Availability zones are logically isolated segments of cloud infrastructure that attempt to ensure that if one zone suffers an outage, it does not take down other zones within the same region.55 However, region-wide disruptions such as October’s AWS DynamoDB incident in the us-east-1 region, while rare, have significant impacts on both customers relying on resources within a region and cloud service providers that operate within a specific region.56

Figure 1: Region and launch year

Restricting the flow of data and information can limit access to computing and processing resources, limiting the ability of cloud providers to surge capacity and geographically distribute workloads. The ability to migrate workloads and computing assets, such as data, to other countries is essential for effective disaster recovery, which could motivate carving out backups as exempt from data localization. In preparing for Russia’s invasion, for instance, Ukraine paused localization requirements and shifted essential government data to cloud infrastructure outside of its borders to ensure availability and access in the event of the physical destruction of domestic data centers.57 Estonia has also established a data embassy, which consists of an external private cloud region in Luxembourg to ensure continuity of government operations in the event of a crisis.58

Beyond infrastructure locations, countries and customers might seek to restrict the geographic location of technical support staff and engineers, especially individuals who might access or view sensitive data. Requirements can restrict physical location, citizenship, or clearance of support personnel, which can impact the staffing strategies, create challenges for around-the-clock availability, and require duplication of expertise across nations. According to ProPublica, Microsoft worked around such restrictions from the United States Defense Department by using support structures such as “digital escorts,” where individuals in possession of security clearances but lacking technical expertise supervised engineers, including engineers physically located in China, as they interacted with cloud systems used for national security purposes.59 The impulse towards workarounds for location-based restrictions, such as the digital escort system, which Microsoft has reportedly stopped using for the Department of Defense, demonstrates the operational difficulties restrictions on the location of support staff can create and the security risks that can result from the uneven implementation of location restrictions.60

Infrastructure localization approaches can also be designed to ensure that companies or governments have local oversight and control over security measures, including the use of encryption. Keeping encryption keys off cloud provider infrastructure, and instead on local or on-premise infrastructure, can be referred to as “key escrow” or “external key management.”61 Apple has historically complied with key localization requirements in China, while Google has implemented an offering designed for compliance with a requirement in Saudi Arabia.62  These offerings may be developed in partnership with local providers, who can oversee cloud provider access to encryption keys.63 However, this approach introduces distinct risks to cloud computing systems, as customers must trust the additional provider to secure encryption keys, which, if compromised, would provide access to sensitive data. Countries can also impose other requirements relating to encryption, such as country-specific standards. South Korea’s government cloud certification requires national standard encryption algorithms that are not widely used outside of Korea.64

In the US, debates on state-sponsored, proprietary encryption standards have resulted in concerns about the intelligence community creating “backdoors,” or exploitable flaws within encryption algorithms, which could be used by intelligence agencies and malicious actors to monitor communications and access content.65 Governments could also directly restrict the ability of cloud service providers to offer products with certain encryption standards or features. The UK’s secret law enforcement request to Apple to access certain encrypted communications led Apple to withdraw its Advanced Data Protection feature from the UK market rather than create a backdoor for authorities.66  Restrictions or constraints on encryption standards and encryption system architectures can give local authorities control over access to encrypted data, but can also create vulnerabilities if they result in compromising key local management systems or mandates for insecure encryption standards. 

The jurisdictions cloud providers originate from or operate within can be a source of concern for governments, especially when other governments mandate, incentivize, or promote practices that undermine the security of underlying technology systems. Digital sovereignty policies can aim to exclude specific cloud providers or providers from certain countries, either with outright bans or structural requirements mandating local partnerships. The approach of excluding specific countries, or restricting the access of companies from certain countries, is referred to as a blacklist, while a policy that only allows transfers to specified countries is referred to as a whitelist.67

The United States has typically taken a blacklist approach to national security reviews of foreign companies, imposing a smaller, ad-hoc set of limitations on companies’ jurisdictions and origins. For example, the US government has expressed skepticism over Chinese cloud providers’ access to American information, resulting in investigations of Alibaba’s cloud business.68 These concerns include Chinese policies requiring technology companies to notify the government when they discover technical vulnerabilities and extensive cooperate with defense and intelligence services.69  In reviews of other technical systems, such as telecommunications infrastructure, the US has weighed the national security risks of the involvement of both Chinese and Russian companies.70

Meanwhile, European data protection regimes, such as the General Data Protection Regulation (GDPR), utilize a whitelist approach, requiring “adequacy decisions” to approve data transfers to certain countries.71  European leaders have raised concerns about US surveillance practices and the lack of federal privacy legislation, which has prompted regulators to revoke previous data transfer agreements.72 The dominance of US hyperscale cloud providers, domestically and abroad, has led to a close focus in policy discussions on US legislation applicable to cloud providers, including those that affect the operations of cloud providers in other jurisdictions. Concerns regarding US government access to information have led to repeated references in policy debates to one piece of legislation: the 2018 Clarifying Lawful Overseas Use of Data (CLOUD) Act.  

The CLOUD Act restated requirements of the US Stored Communications Act (SCA) as they apply to information under the control of cloud providers, including if that information is shared, sharded (splitting data into multiple, more manageable pieces), or distributed across geographic locations, but did not change the requirements for warrants under US law to access the content of electronic communications.73  The CLOUD Act’s clarification of the SCA’s scope brought the United States into compliance with the Budapest Convention on Cybercrime, while also authorizing bilateral agreements for countries to request information from cloud providers for law enforcement investigations outside of the Mutual Legal Assistance Treaty (MLAT) process.74  The EU-US Data Privacy Framework currently holds an adequacy decision, allowing individual US companies to transfer data under GDPR. However, the Trump administration’s disruption of the Privacy and Civil Liberties Oversight Board (PCLOB) and US intelligence community data collection have raised questions in Europe about the merits of the adequacy decision and could result in further legal challenges, which could remove the United States and American companies from the GDPR whitelist.75   

Concerns about the market dominance of US hyperscalers, as well as US government access to content stored on cloud computing systems, have also led to various European initiatives to foster domestic alternatives, such as the GAIA-X initiative.76  Foreign ownership restrictions contained within cloud certifications, such as the SecNumCloud regime, have led cloud providers to set up operations and joint ventures with domestic companies that manage local configurations of cloud computing. In France, for instance, Google has partnered with Thales, while Microsoft has partnered with Orange and Capgemini.77 Hyperscale cloud providers have also announced commitments to expand “sovereign” cloud regions, such as Microsoft’s partnership with a German SAP subsidiary, which will consist of “a sovereign cloud platform for the German public sector, hosted in German datacenters and operated by German personnel.”78  AWS’s sovereign cloud commitment language also highlights the physical and logical isolation of a forthcoming sovereign European cloud region, which will have “no operational control outside of EU borders.”79 These commitments and infrastructure developments require significant investments as well as a shift in operational and management strategies from the existing global distributed models.  

Table 2: Illustrative policies mapped to characteristics for digital sovereignty policies affecting cloud computing systems

Implications for cybersecurity and AI 

This multifarious tug of war over sovereignty and trust has significant implications for cloud computing infrastructure, security, and the services, including for AI. Focusing on location as a proxy for control and trust can lead to policies that ultimately undermine security goals by decreasing the reliability and integrity of essential systems. The critical nature of cloud computing means it deserves intensive evaluation to ensure the trustworthiness of foundational systems, but evaluation and assurances of trust in cloud computing should be rooted in effective guarantees. The efficiency and performance benefits of cloud computing are fractured and disrupted by location-based requirements. The replication of infrastructure, support systems, and other operational overhead creates meaningful costs for cloud providers, limiting their ability to invest in other measures that could improve performance or security. Filings by industry organizations, including the US Chamber of Commerce, prove this point by repeatedly highlighting the costs of staff and infrastructure location requirements impose on the operations of cloud providers.80

This location requirements race fragments security monitoring and threat response, limiting the ability of organizations with global footprints and technical systems to mitigate and respond to cross-border risks. National or regional silos of cloud deployments with the same underlying software and hardware deployments—all relying on core features, patterns, and architectures developed by the same handful of companies—insulate cloud deployments from legal concerns while creating technical and financial burdens.

Constraints on provider locations and jurisdictions can also limit organizations from taking full advantage of advanced global capabilities, including networking infrastructure. In 2021, for example, Portugal’s Supervisory Authority fined its public census body €4.3 million for using Cloudflare’s services, citing concerns regarding Cloudflare’s global, distributed network of servers and position as a US company.81  Despite Cloudflare’s reputation as a cost-effective and highly reliable network security provider, the ruling occurred in the wake of broader discussions on the ability of European organizations to transfer data to the United States as part of GDPR compliance.82

Moreover, the impacts of limiting access to network infrastructure are not mitigated by local datacenters and computing capacity, as organizations will still be unable to use state-of-the-art platforms that enable global communications and stronger security protections. Policies that only consider datacenter capacity and access ignore these impacts and can inadvertently create security issues while degrading service quality.

Governments should avoid imposing restrictions on access to cloud computing based exclusively on the location of cloud infrastructure. Location is at best a proxy for the security practices and guarantees of cloud providers and imposes cost and security consequences on providers. Localization requirements should, at a minimum, involve an advanced notification and blacklist approach, minimizing disruptions and operational concerns for cloud providers who build infrastructure configurations years in advance. Ad-hoc revocation should be reserved for well-documented offenders, observed compromises, and emergencies, as cloud providers, their customers, and the security ecosystem broadly benefit from stability and predictability.

Cloud security fundamentally depends upon the ability of organizations to respond to incidents rapidly at scale. Container escape vulnerabilities, which are errors in the implementation of encryption standards, or misconfigurations in the software connecting different services that expose can data and enable lateral movement, are just a few examples of cybersecurity flaws that are agnostic to the physical location of servers and support staff. If location-based requirements restrict companies’ ability to monitor, observe, and remediate incidents, or even prohibit or discourage them from retaining non-domestic cybersecurity incident response companies, organizations and governments will be cut off from the global flow of cutting-edge threat intelligence, vulnerability reports, and mitigation guidance.83

Conflicting trust frameworks can also undermine the ability of organizations to collaborate across the cloud ecosystem. Instead of working with cutting-edge providers and cybersecurity companies focused on addressing security challenges, organizations are encouraged to turn inward, reinventing the wheel by managing their own technology configurations and security postures. While these organizations have useful context for their own security risks, rapid coordination and information-sharing bolsters collective defenses in ways that are difficult to replicate. Ad-hoc grants and revocations of trust in cloud computing systems or cloud providers exacerbate these challenges, and governments should adopt frameworks for trust that allow for continuous verification and evaluation instead.

The management of cloud encryption keys and credentials is also essential to cloud security. Externalizing key management systems poses enhanced risks for the same reasons that advocates seek to localize control of encryption keys: they unlock access to otherwise secure data.84 However, removing key management from cloud provider infrastructure and placing it under the control of another provider creates additional risks, as cloud users must now trust each provider and the infrastructure or platform through which keys or identities are managed.85 Threat actors have targeted key and identity management platforms, recognizing their importance to the overall security posture of cloud customers. Okta, an identity and management company, has been the subject of repeated attacks, including a breach of its customer support portal, which initially became public because of a threat actor’s boasts on Telegram.86 Removing keys from cloud provider infrastructure does not reduce the importance of securing cryptographic information and credentials, and externalizing key management only places additional responsibility on individual customers to manage and ensure key security.

Cloud security errors and flaws cross organizational boundaries and are not prevented by distinctions between cloud providers and other companies in operating or managing infrastructure. Attackers have leveraged connections between on-premise and public cloud systems (known as hybrid cloud deployments), such as shared credentials or identity systems, to compromise and wreak havoc in cloud environments.87. In a 2023 example, a suspected Iranian threat actor used stolen credentials to move from an on-premise environment into a customer’s Azure configuration.88

Security flaws can also be similar across cloud providers, even when cloud providers separately develop features and products. The cloud security firm Wiz conducted research on the incorporation of a popular open-source database service, PostgreSQL, into cloud platforms and found similar vulnerabilities in Azure and Google Cloud, despite their independent development.89

Policymakers should not operate under the assumption that segmenting cloud infrastructure —or the oversight of cloud infrastructure —across organizations will automatically improve the cybersecurity posture of cloud configurations. By limiting the ability of companies to share information about vulnerabilities or observe threat activity across active cloud configurations, policymakers can inadvertently exacerbate the challenge of common security failures across cloud providers. The trajectory of AI development and its intense reliance on cloud resources will only exacerbate the challenges of navigating these tradeoffs. Policies that require jurisdictional independence, exclusive local legal or operational control, and partnerships with local companies incentivize configurations that are not based upon a solid foundation of technical boundaries and isolation. Artificially constraining cloud providers, mandating technology transfer, and rewarding regulatory arbitrage do nothing to advance national sovereignty objectives and incentivize lax security practices instead of proactive, systemic monitoring.

If specific government or critical infrastructure sector criteria for cloud procurement are too onerous or burdensome, they also risk artificially segmenting the cloud market, leaving public sector customers out of step with industry norms and delayed in accessing new offerings. For example, AWS’s US GovCloud region contains detailed documentation on services available in other regions that are unavailable or require distinct configurations within GovCloud.90 A US Government Accountability Office report on federal agency use of generative AI also references delays of cloud certification processes as an obstacle to access and use of new services, particularly when the companies offering them are not interested in gaining authorization through procurement processes or are unaware of federal procurement requirements.91

Critical infrastructure sectors and government agencies already shoulder cybersecurity burdens as the targets of persistent cyberattacks, with consistent ransomware attacks on hospitals as one example.92 In budget-constrained organizations, interpreting and implementing cybersecurity regulatory requirements can create cost burdens that lead to difficult tradeoffs with essential functionality.93 Policies designed to shape the cloud market broadly should carefully evaluate which sectors are impacted and to what degree. If the goal of a procurement or incentive structure is cross-sector security requirements, public entities with limited cybersecurity expertise or leverage to negotiate with hyperscale cloud providers, such as critical infrastructure operators, may not be a logical starting point.

Governments around the world have a crucial role to play in allowing cloud providers to demonstrate trustworthiness, as they can remove barriers to information sharing, harmonize international trust regimes, and demand information from providers that customers would otherwise be unable to access. Accepting and embracing this role requires a strategic focus outside of the role of governments as merely cloud procurers. While governments are essential users of the cloud, consumer protection mandates and broader security goals merit a focus on ecosystem-wide security, which should be disentangled from direct procurement capabilities. Cloud providers should be required to share cloud security indicators with governments not just as a step to securing public sector contracts, but also to verify the trustworthiness of cloud infrastructure critical to modern society.

The US can play an important role in shepherding confidential computing technology—which runs computations on isolated systems—but must also manage coordination to ensure that by the time this technology is available and trustworthy, that allies and partners have not fully pivoted to regulatory regimes that mandate fragmented cloud infrastructure. One way to assure allies and partners is to demonstrate commitments to the security of the cloud ecosystem. Where legislation like the CLOUD Act has been mis- or over-interpreted by outside entities to provide expansive authorities, law enforcement agencies should continue to clarify the scope and details of warranted access to the content or information stored by cloud providers. Through its oversight functions, the US Congress can also publicize further aggregated, anonymized, and declassified information about the nature of interactions between the intelligence community, law enforcement agencies, and cloud providers, including allowing further information sharing about national security requests.

Conclusion

As artificial intelligence demands force the evolution of cloud computing systems, policies aiming to ensure the security of cloud computing must balance the goals of visibility and control with essential capabilities. Specialized providers and the relative opacity of the AI ecosystem both make cloud computing’s role in AI more critical and fragile. As artificial intelligence workloads continue to require careful coordination across specialized providers and infrastructure, establishing clear criteria of trust in cloud computing gains urgency. The consequences of failing to establish and maintain this trust will not just be felt by organizations using the cloud to develop and deploy artificial intelligence, but by governments and companies broadly, as the cloud infrastructure they depend upon and utilize becomes fragmented and limited. 

Countries around the world have implemented and proposed policies that impose geographic or location restrictions on cloud systems, instituting organizational and operational changes for cloud providers without fully evaluating the security tradeoffs. Requirements that change the criteria for trust in cloud computing to prioritize location can silo and fragment cloud infrastructure, reducing geographic distribution that provides resilience and elasticity. Focusing the evaluation of trust instead on technical assurances, rather than geographic and organizational proxies, should be the priority of governments. The location and nationality of cloud providers, while important, are insufficient proxies for security guarantees and outcomes and ultimately serve to incentivize regulatory arbitrage and compliance over state-of-the-art security practices. 

The complexity of cloud computing—driven by scale, specialization, and demand—enables the reliable systems and technical innovations that define modern economies and ways of life—and that is why that policies and regulations in this sector need to be finely-tuned and informed by technical realities. Interventions that aim to manage this complexity by tearing apart infrastructure and segmenting it within geographic borders will only end up undermining these systems and their security without fulfilling national security goals.  

There is no doubt this is a tall task. But only strategies as nuanced as the technology itself can safeguard its advantages while establishing the foundational trust that will underpin the future of artificial intelligence and technological innovation.  

About the author

Sara Ann Brackett is an assistant director with the Cyber Statecraft Initiative, part of the Atlantic Council Tech Programs. She focuses her work on open-source software security, software bills of materials, software liability, and software supply-chain risk management within the Cyber Statecraft Initiative’s cybersecurity and policy portfolio.

Brackett graduated from Duke University, where she majored in computer science and public policy and wrote a thesis on the effects of market concentration on cybersecurity. She participated in the Duke Tech Policy Lab’s Platform Accountability Project and worked with the Duke Cybersecurity Leadership Program as part of Professor David Hoffman’s research team.

Acknowledgements

The author would like to thank Trey Herr, Stewart Scott, Nitansha Bansal, Kemba Walden, Devin Lynch, Justin Sherman, Dominika Kunertova, and Joe Jarnecki for their comments on earlier drafts of this report, as well as all the individuals who participated in background and Chatham House Rule discussions about issues related to data, AI applications, and the concept of an AI supply chain. 

Explore the program

The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    Tianjiu Zuo, Justin Sherman, Maia Hamin, and Stewart Scott, Critical Infrastructure and the Cloud: Policy for Emerging Risk, Atlantic Council, July 10, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/report/critical-infrastructure-and-the-cloud-policy-for-emerging-risk/.
2    Lily Hay Newman, “What the Huge AWS Outage Reveals About the Internet,” Wired, October 20, 2025, https://www.wired.com/story/what-that-huge-aws-outage-reveals-about-the-internet/.
3    “Winning the Race: America’s AI Action Plan,” Executive Office of the President of the United States, July 23, 2025, https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf; “Global AI Governance Action Plan,” Ministry of Foreign Affairs of the People’s Republic of China,” July 26, 2025, https://www.fmprc.gov.cn/eng./xw/zyxw/202507/t20250729_11679232.html; “The AI Continent Action Plan: Shaping Europe’s Digital Future,” Commission to the European Parliament, April 9, 2025, https://digital-strategy.ec.europa.eu/en/library/ai-continent-action-plan.
4    “Top Threats to Cloud Computing 2024,” Cloud Security Alliance: Top Threats Working Group, August 5, 2024, https://cloudsecurityalliance.org/artifacts/top-threats-to-cloud-computing-2024.
5    An Outage Strikes: Assessing the Global Impact of CrowdStrike’s Faulty Software Update, 118th Cong. (2024) (written testimony of Adam Meyers, Senior Vice President, Counter Adversary Operations, CrowdStrike), https://homeland.house.gov/wp-content/uploads/2024/09/2024-09-24-HRG-CIP-Testimony-Meyers.pdf
6    For coverage of another element of this supply chain, data, see Justin Sherman’s Securing data in the AI supply chain.
7    Bar Kaduri and Tohar Braun, “2023 Honeypotting in the Cloud Report: Attacker Tactics and Techniques Revealed,” Orca Security, 2023, https://orca.security/lp/sp/ty-content-download-2023-honeypotting-cloud-report/.
8    “Emerging Threats: Cybersecurity Forecast 2025,” Google Cloud Security, November 13, 2024, https://www.gstatic.com/gumdrop/files/cybersecurity-forecast-2025.pdf.
9    “Cloud Attack Retrospective: 8 Common Threats to Watch for in 2025,” Wiz, June 18, 2025, https://www.wiz.io/reports/cloud-attack-report-2025.
10    Peter Mell and Timothy Grance, “NIST Special Publication 800-145: The NIST Definition of Cloud Computing, National Institute of Standards and Technology, September 2011, https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-145.pdf.
11    Ashwin Chaudhary, “What is Cloud Workload in Cloud Computing,” Cloud Security Alliance, November 13, 2024, https://cloudsecurityalliance.org/blog/2024/11/13/what-is-cloud-workload-in-cloud-computing.
12    Rolf Harms and Michael Yamartino, “The Economics of the Cloud,” Microsoft, November 2010, https://news.microsoft.com/download/archived/presskits/cloud/docs/The-Economics-of-the-Cloud.pdf.
13    Wenqi Jiang et al., “Data Processing with FPGAs on Modern Architectures,” Companion of the 2023 International Conference on Management of Data, June 4, 2023, 77–82, https://doi.org/10.1145/3555041.3589410.
14    “Gartner Says Worldwide IaaS Public Cloud Services Market Grew 22.5% in 2024,” Gartner, August 6, 2025, https://www.gartner.com/en/newsroom/press-releases/2025-08-06-gartner-says-worldwide-iaas-public-cloud-services-market-grew-22-point-5-percent-in-2024.
15    Fay Chang et al., “Bigtable: A Distributed Storage System for Structured Data,” ACM Trans. Comput. Syst. 26, no. 2 (2008): 1-26, https://doi.org/10.1145/1365815.1365816.
16    Aditya Ramakrishnan, “Under the hood: Amazon EKS ultra scale clusters,” Amazon Web Services, July 16, 2025, https://aws.amazon.com/blogs/containers/under-the-hood-amazon-eks-ultra-scale-clusters/.
17    David Kuo, “Scaling on AWS Part I: A Primer,” AWS Startups Blog, November 25, 2015, https://aws.amazon.com/blogs/startups/scaling-on-aws-part-1-a-primer/.
18    “What is elastic computing or cloud elasticity,” Microsoft, accessed September 4, 2025, https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/what-is-elastic-computing.
19    “What is cloud architecture?,” Google Cloud, accessed September 4, 2025, https://cloud.google.com/learn/what-is-cloud-architecture?hl=en.
20    Vijay Kumar Adhikari et al., “Unreeling Netflix: Understanding and Improving Multi-CDN Movie Delivery,” Princeton Computer Science, accessed September 4, 2025, https://www.cs.princeton.edu/courses/archive/fall16/cos561/papers/NetFlix12.pdf.
21    “What is a data center?” Cloudflare, accessed September 4, 2025, https://www.cloudflare.com/learning/cdn/glossary/data-center/.
22    Shweta Jain, “What’s in a Name? Understanding the Google Cloud Network ‘Edge’,” Google Cloud, February 22, 2021, https://cloud.google.com/blog/products/networking/understanding-google-cloud-network-edge-points.
23    “Cloudflare Security Architecture,” Cloudflare Docs, accessed September 4, 2025, https://developers.cloudflare.com/reference-architecture/architectures/security/.
24    “Can a $9bn deal sustain CoreWeave’s stunning growth?” The Economist, July 10, 2025, https://www.economist.com/business/2025/07/10/can-a-9bn-deal-sustain-coreweaves-stunning-growth.
25    Asa Fitch, “Nvidia Ruffles Tech Giants With Move Into Cloud Computing,” The Wall Street Journal, June 25, 2025, https://www.wsj.com/tech/ai/nvidia-dgx-cloud-computing-28c49748; Matt Rowe, “NVIDIA starts to invest in cloud computing,” Due, July 1, 2025, https://www.nasdaq.com/articles/nvidia-starts-invest-cloud-computing.
26    Stephen Nellis and Anna Tong, “Behind $500 billion AI data center plan, US startups jockey with tech giants,” Reuters, January 23, 2025, https://www.reuters.com/technology/artificial-intelligence/behind-500-billion-ai-data-center-plan-us-startups-jockey-with-tech-giants-2025-01-23/.
27    Krystal Hu and Kenrick Cai, “CoreWeave to offer compute capacity in Google’s new cloud deal with OpenAI, sources say,” Reuters, June 11, 2025, https://www.reuters.com/business/coreweave-offer-compute-capacity-googles-new-cloud-deal-with-openai-sources-say-2025-06-11/.
28    Ramakrishnan, “Under the hood.”
29    “Amazon Bedrock,” Amazon Web Services, last accessed September 4, 2025, https://aws.amazon.com/bedrock/.
30    Zachary Cavanell, “Meet the Supercomputer that runs ChatGPT, Sora & DeepSeek on Azure (feat. Mark Russinovich),” Microsoft Mechanics Blog, June 5, 2025, https://techcommunity.microsoft.com/blog/microsoftmechanicsblog/meet-the-supercomputer-that-runs-chatgpt-sora–deepseek-on-azure-feat-mark-russi/4418808.
31    Alex Spiridonov and Gang Ji, “Cloud TPU v5e accelerates large-scale AI inference,” Google Cloud, August 31, 2023, https://cloud.google.com/blog/products/compute/how-cloud-tpu-v5e-accelerates-large-scale-ai-inference; Nisha Mariam Johnson and Andi Gavrilescu, “How to scale AI training to up to tens of thousands of Cloud TPU chips with Multislice,” Google Cloud, August 31, 2023, https://cloud.google.com/blog/products/compute/using-cloud-tpu-multislice-to-scale-ai-workloads; Joanna Yoo and Vaibhav Singh, “How Cohere is accelerating language model training with Google Cloud TPUs,” Google Cloud, July 27, 2022, https://cloud.google.com/blog/products/ai-machine-learning/accelerating-language-model-training-with-cohere-and-google-cloud-tpus.
32    “FACT SHEET: Ensuring U.S. Security and Economic Strength in the Age of Artificial Intelligence,” Executive Office of the President, January 13, 2025, https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2025/01/13/fact-sheet-ensuring-u-s-security-and-economic-strength-in-the-age-of-artificial-intelligence/; “America’s Action Plan, Executive Office of the President, July 2025, https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.
33    Sella Nevo,  et al., “Securing AI Model Weights,” RAND, May 30, 2024, https://www.rand.org/pubs/research_reports/RRA2849-1.html; Janet Egan, “Global Compute and National Security,” Center for a New American Security, July 29, 2025, https://www.cnas.org/publications/reports/global-compute-and-national-security; Tim Fist, Arnab Datta, and Brian Potter, “”Compute in America: Building the Next Generation of AI Infrastructure at Home,” June 10, 2024, https://ifp.org/compute-in-america/#part-ii-how-to-build-the-future-of-ai-in-the-united-states.
34    “Cybersecurity Forecast 2026 report,” Google Cloud,  https://cloud.google.com/security/resources/cybersecurity-forecast?hl=en.
35    “2023 Honeypotting in the Cloud Report,” Orca Security, https://orca.security/lp/sp/ty-content-download-2023-honeypotting-cloud-report/.
36    Matthew Prince, John Graham-Cumming, Grant Bourzikas, “Thanksgiving 2023 security incident,” Cloudflare Blog, February 2, 2024,  https://blog.cloudflare.com/thanksgiving-2023-security-incident/.
37    “Countering Chinese State-Sponsored Actors Compromise of Networks Worldwide to Feed Global Espionage System,” Cybersecurity and Infrastructure Security Agency (CISA), September 03, 2025, https://www.cisa.gov/news-events/cybersecurity-advisories/aa25-239a.
38    “Glossary: Trust,” National Institute of Standards and Technology, Computer Security Resource Center, https://csrc.nist.gov/glossary/term/trust.
39    Joe Warminsky, “Kaspersky Added to FCC List That Bans Huawei, ZTE from US Networks,” CyberScoop, March 25, 2022, https://cyberscoop.com/kaspersky-fcc-covered-list/.
40    Scott Rose et al., NIST Special Publication 800-207: Zero Trust Architecture, August 2020, https://nvlpubs.nist.gov/nistpubs/specialpublications/NIST.SP.800-207.pdf.
41    “Zero Trust Maturity Model,” CISA, January 2022, https://www.cisa.gov/zero-trust-maturity-model.
42    Suzanne Ambiel, “The Case for Confidential Computing,” The Linux Foundation https://www.linuxfoundation.org/hubfs/Research%20Reports/TheCaseforConfidentialComputing_071124.pdf?hsLang=en; “Trusted Execution Environment (TEE),” Microsoft Learn, accessed September 4, 2025, https://learn.microsoft.com/en-us/azure/confidential-computing/trusted-execution-environment.
43    “A Technical Analysis of Confidential Computing,” Confidential Computing Consortium, November 2022, https://confidentialcomputing.io/wp-content/uploads/sites/10/2023/03/CCC-A-Technical-Analysis-of-Confidential-Computing-v1.3_unlocked.pdf.
44    Max Von Thun, “Cloud computing is too important to be left to the Big Three,” The Financial Times, May 26, 2025, https://www.ft.com/content/5c930686-9119-402d-8b9b-4c3f6233164e; “Navigating Digital Sovereignty and its Impact on the Internet,” The Internet Society, December 2022,https://www.internetsociety.org/wp-content/uploads/2022/11/Digital-Sovereignty.pdf; Laurens Cerulus, “France wants cyber rule to curb US access to EU data,” Politico, September 13, 2021, https://www.politico.eu/article/france-wants-cyber-rules-to-stop-us-data-access-in-europe/; “Towards a next generation cloud for Europe,” European Commission, October 15, 2020, https://digital-strategy.ec.europa.eu/en/news/towards-next-generation-cloud-europe; Tony Roberts and Marjoke Oosterom, “Digital authoritarianism: a systematic literature review,” Information Technology for Development (2024): 1-25, https://www.tandfonline.com/doi/full/10.1080/02681102.2024.2425352?af=R#d1e129 https://www.internetsociety.org/wp-content/uploads/2022/11/Digital-Sovereignty.pdf.
45    “Navigating Digital Sovereignty and its Impact on the Internet.”
46    Georgia Wood and James Lewis, “The CLOUD Act and Transatlantic Trust,” Center for Strategic and International Studies, March 29, 2023, https://www.csis.org/analysis/cloud-act-and-transatlantic-trust; Frances G. Burwell and Kenneth Propp, Digital Sovereignty in Practice: The EU’s Push to Shape the New Global Economy, Atlantic Council, October 2022, https://www.atlanticcouncil.org/wp-content/uploads/2022/11/Digital-sovereignty-in-practice-The-EUs-push-to-shape-the-new-global-economy_.pdf
47    Meredith Broadbent, “The European Cybersecurity Certification Scheme for Cloud Services,” Center for Strategic and International Studies (CSIS), September 1, 2023, https://www.csis.org/analysis/european-cybersecurity-certification-scheme-cloud-services.
48    Nigel Cory and Luke Dascoli, “How Barriers to Cross-Border Data Flows Are Spreading Globally, What They Cost, and How to Address Them,” Information Technology and Innovation Foundation, July 19, 2021, https://itif.org/publications/2021/07/19/how-barriers-cross-border-data-flows-are-spreading-globally-what-they-cost/.
49    Broadbent, “The European Cybersecurity Certification Scheme for Cloud Services.”
50    “South Korea’s Cloud Service Restrictions,” Information Technology and Innovation Foundation, August 26, 2025, https://itif.org/publications/2025/05/25/south-korea-cloud-service-restrictions/
51    James Andrew Lewis and Julia Brock, “Faster in the Cloud,” CSIS, January 16, 2025, https://www.csis.org/analysis/faster-cloud-federal-use-cloud-services
52    Justin Sherman, “The Kremlin May Make Foreign Internet Companies Open Offices in Russia,” Slate, February 8, 2021, https://slate.com/technology/2021/02/russia-kremlin-internet-controls-foreign-companies-offices.html.
53    Lais Martins, “Brazil is Handing Out Generous Incentives for Data Centers, But What it Stands to Gain is Still Unclear,” Tech Policy Press, May 22, 2025, https://www.techpolicy.press/brazil-is-handing-out-generous-incentives-for-data-centers-but-what-it-stands-to-gain-from-it-is-still-unclear/.
54    Regions and zones,” Google Cloud, accessed September 4, 2025, https://docs.cloud.google.com/compute/docs/regions-zones.
55    “Cloud Locations,” Google Cloud, last accessed September 4, 2025, https://cloud.google.com/about/locations.
56    “Summary of the Amazon DynamoDB Service Disruption in the Northern Virginia (US-EAST-1) Region,” Amazon Web Services, accessed September 4, 2025, https://aws.amazon.com/message/101925/.
57    Catherine Stupp, “Ukraine Has Begun Moving Sensitive Data Outside Its Borders,” The Wall Street Journal, June 14, 2022, https://www.wsj.com/articles/ukraine-has-begun-moving-sensitive-data-outside-its-borders-11655199002
58    “e-Governance,” e-Estonia, last accessed September 4, 2025, https://e-estonia.com/solutions/e-governance/data-embassy/.
59    Renee Dudley and Doris Burke, “A Little-Known Microsoft Program Could Expose the Defense Department to Chinese Hackers,” Pro Publica, July 15, 2025, https://www.propublica.org/article/microsoft-digital-escorts-pentagon-defense-department-china-hackers.
60    Renee Dudley, “Microsoft Says It Has Stopped Using China-Based Engineers to Support Defense Department Computer Systems,” Pro Publica, July 18, 2025, https://www.propublica.org/article/defense-department-pentagon-microsoft-digital-escort-china.
61    Anton Chuvakin and Il-Sung Lee, “The cloud trust paradox: 3 scenarios where keeping encryption keys off the cloud may be necessary,” Google Cloud, February 2, 2021, https://cloud.google.com/blog/products/identity-security/3-scenarios-where-keeping-encryption-keys-off-the-cloud-may-be-necessary.
62    Tom McKay, “Apple Moves Chinese iCloud Encryption Keys to China, Worrying Privacy Advocates,” Gizmodo, February 25, 2018, https://gizmodo.com/apple-moves-chinese-icloud-encryption-keys-to-china-wo-1823312628; Archana Ramamoorthy and Bader Almadi, “Google Cloud expands services in Saudi Arabia, delivering enhanced data sovereignty and AI capabilities,” Google Cloud, August 19, 2024, https://cloud.google.com/blog/products/identity-security/google-cloud-expands-services-in-saudi-arabia-delivering-enhanced-data-sovereignty-and-ai-capabilities.
63    Il-Sung Lee, “Use third-party keys in the cloud with Cloud External Key Manager, now beta,” Google Cloud, December 17, 2019, https://cloud.google.com/blog/products/identity-security/cloud-external-key-manager-now-in-beta.
64    Esperanza Jelalian, “Subject: Public Consultation on Amendments to Korea’s Cloud Security Assurance Program (CSAP),” US-Korea Business Council, US Chamber of Commerce, February 9, 2023, https://www.uschamber.com/assets/documents/U.S.-Chamber_USKBC_CSAP-Letter-to-MSIT-02-09-2023.pdf.
65    Bryan H. Choi, “NIST’s Software Un-Standards,” Lawfare, March 7, 2024, https://www.lawfaremedia.org/article/nist’s-software-un-standards.
66    Zoe Kleinman, “UK backs down in Apple privacy row, US says,” BBC News, August 19, 2025, https://www.bbc.com/news/articles/cdj2m3rrk74o.
67    Kenneth Propp, Who’s a national security risk? The changing transatlantic geopolitics of data transfers, Atlantic Council, May 29, 2024, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/whos-a-national-security-risk-geopolitics-of-data-transfers/.
68    Alexandra Alper, “Exclusive: U.S. examining Alibaba’s cloud unit for national security risks – sources,” Reuters, January 19, 2022, https://www.reuters.com/technology/exclusive-us-examining-alibabas-cloud-unit-national-security-risks-sources-2022-01-18/.
69    Peter Harrell, “Managing the Risks of China’s Access to U.S. Data and Control of Software and Connected Technology,” Carnegie Endowment for International Peace, January 30, 2025, https://carnegieendowment.org/research/2025/01/managing-the-risks-of-chinas-access-to-us-data-and-control-of-software-and-connected-technology?lang=en.
70    Communications Networks Safety and Security, US Senate Committee on Commerce, Science, and Transportiation, Subcommittee on Communications, Media, and Broadband, 117th Cong. (2024) (written statement of Justin Sherman, Nonresident Senior Fellow, Atlantic Council Cyber Statecraft Initiative), https://www.commerce.senate.gov/services/files/9D566360-52C7-4FB9-B30B-1A5A86B9A69E.
71    Propp, Who’s a national security risk?
72    Caitlin Fennessy, “The ‘Schrems II’ decision: EU-US data transfers in question,” IAPP, July 16, 2020, https://iapp.org/news/a/the-schrems-ii-decision-eu-us-data-transfers-in-question.
73    “Promoting Public Safety, Privacy, and the Rule of Law Around the World: The Purpose and Impact of the CLOUD Act,” US Department of Justice, April 2019, https://www.justice.gov/criminal/media/999601/dl?inline.
74    “Deputy Assistant Attorney General Richard W. Downing Delivers Remarks at the 5th German-American Data Protection Day on ‘What the U.S. Cloud Act Does and Does Not Do,’” US Department of Justice, May 16, 2019, https://www.justice.gov/archives/opa/speech/deputy-assistant-attorney-general-richard-w-downing-delivers-remarks-5th-german-american.
75    Brian Hengesbaugh and Lukas Feiler, “How could Trump administration actions affect the EU-US Data Privacy Framework?” IAPP, February 26, 2025, https://iapp.org/news/a/how-could-trump-administration-actions-affect-the-eu-u-s-data-privacy-framework-; “Joint answer given by Mr. McGrath on behalf of the European Commission,” European Parliament, April 14, 2025, https://www.europarl.europa.eu/doceo/document/E-10-2025-000520-ASW_EN.html.
76    Mark Scott and Francesco Bonfiglio, “Why Europe’s Cloud Ambitions Have Failed,” AI Now Institute, October 15, 2024, https://ainowinstitute.org/publications/xi-why-europes-cloud-ambitions-have-failed.
77    “’Cloud de Confiance’ leader,” s3ns, accessed September 4, 2025, https://www.s3ns.io/en; Judson Althoff, “Announcing comprehensive sovereign solutions empowering European organizations,” Microsoft, June 16, 2025, https://blogs.microsoft.com/blog/2025/06/16/announcing-comprehensive-sovereign-solutions-empowering-european-organizations/.
78    Brad Smith, “Microsoft announces new European digital commitments,” Microsoft, April 30, 2025, https://blogs.microsoft.com/on-the-issues/2025/04/30/european-digital-commitments/.
79    Colm MacCarthaign, “Establishing a European trust service provider for the AWS European Sovereign Cloud,” Amazon Web Services, July 10, 2025, https://aws.amazon.com/blogs/security/establishing-a-european-trust-service-provider-for-the-aws-european-sovereign-cloud/.
80    Jelalian, “Subject: Public Consultation on Amendments to Korea’s Cloud Security Assurance Program (CSAP).”
81    “The Portuguese Supervisory Authority fines the Portuguese National Statistics Institute (INE) 4.3 million EUR,” European Data Protection Board, December 19, 2022, https://www.edpb.europa.eu/news/national-news/2022/portuguese-supervisory-authority-fines-portuguese-national-statistics_en.
82    Hendrik Mildebrath, “The CJEU judgment in the Schrems II case,” European Parliament, September 2020, https://www.europarl.europa.eu/RegData/etudes/ATAG/2020/652073/EPRS_ATA(2020)652073_EN.pdf
83    Peter Swire and DeBrae Kennedy-Mayo, “The Risks to Cybersecurity from Data Localization – Organizational Effects,” 8 Arizona Law Journal of Emerging Technologies 3 (June 2025), https://doi.org/10.2139/ssrn.4030905.
84    Anton Chuvakin and Honna Segel, “Unlocking the mystery of stronger security key management,” Google Cloud, December 21, 2020, https://cloud.google.com/blog/products/identity-security/better-encrypt-your-security-keys-in-google-cloud.
85    “Cloud External Key Manager,” Google Cloud, accessed September 4, 2025, https://cloud.google.com/kms/docs/ekm#considerations; Chuvakin and Segel, “Unlocking the mystery of stronger security key management;” “Use Secure Cloud Key Management Practices,” US National Security Agency, CISA, March 7, 2024, https://media.defense.gov/2024/Mar/07/2003407858/-1/-1/0/CSI-CloudTop10-Key-Management.PDF.
86    Jonathan Greig, “Okta security breach affected all customer support system users,” The Record, November 29, 2023, https://therecord.media/okta-security-breach-all-support-users; Jonathan Grieg, “Okta apologizes for waiting two months to notify customers of Lapsus$ breach,” The Record, March 27, 2022, https://therecord.media/okta-apologizes-for-waiting-two-months-to-notify-customers-of-lapsus-breach.
87    Lior Sonntag, “Bridging the Security Gap: Mitigating Lateral Movement Risks from On-Premises to Cloud Environments,” Wiz, May 25, 2023, https://www.wiz.io/blog/lateral-movement-risks-in-the-cloud-and-how-to-prevent-them-part-4-from-compromis
88    MERCURY and DEV-1084: Destructive attack on hybrid environment,” Microsoft Threat Intelligence, April 7, 2023, https://www.microsoft.com/en-us/security/blog/2023/04/07/mercury-and-dev-1084-destructive-attack-on-hybrid-environment/.
89    Ronen Shustin, Shir Tamari, Nir Ohfeld, and Sagi Tzadik, “The cloud has an isolation problem: PostgreSQL vulnerabilities affect multiple cloud vendors,” Wiz, August 11, 2022, https://www.wiz.io/blog/the-cloud-has-an-isolation-problem-postgresql-vulnerabilities.
90    “Services in AWS GovCloud (US) Regions,” Amazon Web Services, accessed September 4, 2025, https://docs.aws.amazon.com/govcloud-us/latest/UserGuide/using-services.html.
91    “Artificial Intelligence: Generative AI Use and Management at Federal Agencies,” US Government Accountability Office, July 29, 2025, https://files.gao.gov/reports/GAO-25-107653/index.html.
92    “Healthcare and Public Health Sector,” CISA, accessed September 4, 2025, https://www.cisa.gov/stopransomware/healthcare-and-public-health-sector.
93    Ashley Thompson, “Re: Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) Reporting Requirements,” American Hospital Association, July 3, 2024, https://www.aha.org/lettercomment/2024-07-02-aha-responds-cisa-proposed-rule-cyber-incident-reporting-requirements.

The post Cloudbusting: Policy for evaluating trust in compute infrastructure appeared first on Atlantic Council.

]]>
The G20 is moving forward on global AI governance—and the US risks being left out https://www.atlanticcouncil.org/blogs/new-atlanticist/the-g20-is-moving-forward-on-global-ai-governance-and-the-us-risks-being-left-out/ Tue, 02 Dec 2025 13:07:25 +0000 https://www.atlanticcouncil.org/?p=890515 The leaders’ declaration adopted at the recent Group of Twenty Summit in South Africa offers a new vision of the future of artificial intelligence.

The post The G20 is moving forward on global AI governance—and the US risks being left out appeared first on Atlantic Council.

]]>
Something notable happened in Johannesburg late last month, although it drew limited attention in Washington: Many of the world’s major economies signaled a growing alignment around how artificial intelligence (AI) and data should be approached—not primarily as instruments of geopolitical competition, but as vehicles for inclusive and sustainable development. The Group of Twenty (G20) leaders’ declaration, adopted despite uneven participation among several countries, reflects an important shift in how states are positioning themselves on AI governance. It offers a snapshot of an emerging global conversation that increasingly links AI to development goals and digital equity.

And the United States was not part of that moment.

The US delegation did not attend the Johannesburg summit and declined to join the declaration—a decision that stemmed in part from concerns about the host nation and broader disagreements with aspects of the process. And the United States is making AI a focus of the G20 summit it is hosting next year, an indication that it has not ruled out collaboration. Still, this year’s absence carried symbolic weight. It suggested a narrowing US appetite to engage multilaterally at a time when many governments are moving quickly to shape the rules and norms surrounding transformative technologies. Other capitals may reasonably interpret this as an opening: If Washington steps back from these discussions, others will step forward.  

And many did.

The G20’s digital agenda this year went further than any previous summit in knitting together AI governance with sustainable development. What emerged from Johannesburg was a clear premise: AI is not just a commercial or security asset; it is a public good, one that must be governed collectively. Countries from South Africa to Brazil to India insisted that data governance, ethical guidelines, and inclusive digital infrastructure are not luxuries—they are developmental necessities.

What came out of Johannesburg wasn’t the usual tech-salon optimism or Western policy jargon. It was the voice of a world determined to stop the next wave of innovation from hard-wiring the injustices of the last. For example, the declaration insisted that AI must be “human-centered” and “development-oriented,” backed by trustworthy data governance—not just for privacy, but as the backbone of equitable AI. It called for digital public infrastructure and real capacity-building for countries long pushed to the margins of the digital economy. And it linked information integrity directly to democratic resilience. It aligned itself with the United Nations Educational, Scientific, and Cultural Organization’s (UNESCO’s) ethical AI framework and the United Nations’ resolutions on equitable technology.

Call it whatever you want: multilateralism, solidarity, or simple common sense. But the message was unambiguous. A broad group of the world’s largest economies came together to say that AI must serve humanity, not just the handful of companies and countries capable of building it.

The United States still has avenues to re-engage—not by dictating outcomes, but by participating as a genuine partner.

What makes the US absence so striking is that for decades it was the United States that championed precisely these kinds of conversations. US diplomats helped build the global internet governance system through international multilateral and multistakeholder fora, such as the Internet Governance Forum. American civil society was instrumental in pushing human rights into digital debates. American universities trained the researchers shaping AI ethics. Yet today, as major economies explore AI’s developmental dimensions, the United States is largely outside the room.

The US administration’s current approach to AI—marked by a preference for domestic industrial strategy and selective bilateral partnerships—reflects a hardening belief that multilateral governance is either futile or dangerous. In too many parts of Washington, there is a sense that global cooperation simply helps China; that multilateral institutions dilute US influence; and that if the United States leads on innovation, it doesn’t need to lead on rules.

This is a profound misreading of how power works in the digital age.

It is true, of course, that the United States remains the world’s AI frontrunner. Its companies build the most advanced models and its research institutions are unmatched—at least for the time being. But technological dominance without normative influence is brittle. Governance frameworks—data standards, safety norms, ethics principles—shape markets and behavior as much as silicon and compute. If the rest of the world agrees on a vision for AI grounded in development, inclusion, and human rights, and the United States is not part of that process, then Washington risks becoming a rule-taker rather than a rule-maker.

Some observers are already calling Johannesburg a win for China. There is some truth to that. Beijing has long argued that developing countries deserve a larger voice in global tech governance, with Chinese President Xi Jinping criticizing the idea of AI as a “game of rich nations,” a theme emphasized in Chinese state media coverage. And China’s investments in digital infrastructure across the Global South give it clear geopolitical advantages. With Washington absent, Beijing’s narrative—centered on equity, development, and multilateral dialogue—faces fewer obstacles.

But focusing solely on China misses the bigger story. Johannesburg was not a Chinese diplomatic triumph. It was a Global South diplomatic triumph. India, Brazil, South Africa, Indonesia, and others played central roles in shaping the digital agenda. They were not passive recipients of a Chinese vision; they were co-authors of something genuinely new: a multilateral AI framework that reflects their own developmental priorities. This agency was highlighted not only in the declaration but also in the reporting across the Global South, including South Africa’s official summit briefings.

None of this means the United States has been written off as an ally. But it does reflect a growing impatience among other states. Adopting the declaration without US support was not a rebuke; it was a recognition that global cooperation cannot wait for universal participation. A generation ago, such a move would have been unlikely. Today, it feels increasingly normal.

What should worry Washington most is that this shift comes at the precise moment when AI is beginning to reshape the global economy in ways as profound as industrialization. The International Monetary Fund estimates that AI could boost global growth by nearly a full percentage point, transforming labor markets, education, healthcare, and agriculture. It could concentrate power or democratize it. And the rules that govern these transformations are being written now.

To be clear, G20 declarations are nonbinding and often aspirational. Implementation will depend on infrastructure, innovation ecosystems, and the particular needs of member states. Still, the fact that the Johannesburg declaration so explicitly anchors AI within the sustainable development agenda—at a moment when US alignment with that agenda is often questioned—signals a meaningful shift in global positioning.

By staying home, the United States is making a bet that it can shape these norms later, through market dominance alone. But history suggests otherwise. Governance norms, once set, are sticky. They embed themselves in institutions, standards, and expectations. They shape how technologies are built and how they spread. And they rarely bend to accommodate a latecomer—even a powerful one. 

It is telling that, while the world was forging a collective path in Johannesburg, Washington was charting a very different course at home with the launch of the Genesis Mission—an ambitious drive to harness AI for domestic innovation and national competitiveness. It’s a bold investment, but one that risks reinforcing a US approach to AI that is inward-looking and self-referential at the very moment the rest of the world is moving toward shared governance and collective benefit.

But retreat is not destiny. The United States still has avenues to re-engage—not by dictating outcomes, but by participating as a genuine partner. The G20 declaration did not emerge in a vacuum; it builds on existing foundations the United States helped create, including the Group of Seven’s Hiroshima AI principles and the Organisation for Economic Co-operation and Development’s (OECD’s) AI framework. Those earlier initiatives emphasized trustworthy, rights-based AI—but they lacked a deep developmental dimension. Johannesburg extends the trajectory, integrating ethical safeguards with the practical realities of inclusion and infrastructure.

If Washington wants to regain its normative footing, it can start by showing up. The upcoming India AI Impact Summit in February 2026—already gaining momentum as a convening of Global South digital priorities—offers a stage where the United States can listen rather than lecture, and even align itself with the developmental intent now shaping global AI norms. And with the United States set to host the G20 next year, it has a rare chance to reset: to bring the existing principles into conversation with the Johannesburg framework rather than treating them as competing visions.

The choice ahead is not between US power and multilateral governance. It is whether the United States can recognize that power now depends on multilateral governance—on shaping shared norms, not merely exporting products. Much of the world has signaled that AI must be human-centered, equitable, and globally accessible. The question is whether Washington is willing to take its seat—not at the head of the table, but at the table at all.


Konstantinos Komaitis, PhD, is a resident senior fellow with the Atlantic Council’s Democracy + Tech Initiative at the Digital Forensic Research Lab (DFRLab).

The post The G20 is moving forward on global AI governance—and the US risks being left out appeared first on Atlantic Council.

]]>
Safety should be front and center in India’s vision for its AI Impact Summit https://www.atlanticcouncil.org/blogs/new-atlanticist/safety-should-be-front-and-center-in-indias-vision-for-its-ai-impact-summit/ Mon, 24 Nov 2025 16:01:19 +0000 https://www.atlanticcouncil.org/?p=889506 Despite the headline attention on impact, safety needs to be fundamental to India’s vision for artificial intelligence that engenders trust, inclusion, and empowerment.

The post Safety should be front and center in India’s vision for its AI Impact Summit appeared first on Atlantic Council.

]]>
Almost two years ago, more than 150 government officials, industry leaders, and academics met at Bletchley Park, the English estate where Allied forces broke the Nazis’ Enigma Code in World War II. This meeting, the 2023 AI Safety Summit, concluded with a warning from the more than two dozen countries represented: artificial intelligence (AI) held the “potential for serious, even catastrophic, harm, either deliberate or unintentional.” The participants also agreed to meet again, and summits in Seoul and Paris followed. 

In February 2026, the next such summit will take place in New Delhi, India. But while the earlier gathering in the United Kingdom billed itself as concerning AI safety, India has opted for AI “impact.” As I noted in an analysis of this past February’s AI Action Summit in Paris, “the commitment, resources, and priorities of the host determine the summit’s successes and failures, as well as the level of buy-in from its guests.” So, as the contours of India’s goals for its AI Impact Summit come into focus, what should the participants and the wider world expect in New Delhi?

Why “impact”?

New Delhi’s challenge for the summit resembles the three-body problem—in this case, the three competing forces are political momentum, stakeholder consensus, and on-the-ground implementation. The task here is to keep all three in motion without losing coherence. Many initiatives have spun out of orbit at this stage, when lofty consensus gives way to the hard gravity of real commitments.

Superimposed upon this drive for “impact” are the specific challenges for countries that cannot afford to blitzkrieg their way into AI dominance. Their challenge is not a lack of ambition but the limits of scale, resources, and infrastructure, all while the global narrative around AI as a general-purpose technology grows louder. 

India’s leadership team for the summit seems to feel a sense of urgency. In September, Shri S. Krishnan, secretary of the Indian Ministry of Electronics and Information Technology, said in a speech, “This particular wave of technology, driven by AI . . . is probably the last opportunity that countries of the Global South, including India, have to truly grow rich and prosperous before they grow old. This is a wave the Global South has to ride.” 

Ahead of the summit, India released seven “chakras,” or “axes,” that will be discussed at the gathering. While these chakras cast a wide net, most are largely global coordination problems: human capital, social empowerment, inclusive growth, innovation and research, and safe and trusted AI. India’s vision for impact therefore is twofold: both to maintain momentum by driving action on agenda items for global coordination, and to highlight equitable access to AI infrastructure as essential to developing countries’ ability to meaningfully participate.

India’s approach

The AI Impact Summit framework also carries the hallmarks of New Delhi’s techno-legal approach to digital technologies, where regulation is part of the design of technical systems rather than an extraneous compliance requirement. Rather than relying only on regulatory instruments that may stifle innovation, the focus is on empowering a wide range of nations and stakeholders with the technical capabilities needed to govern AI effectively. India has implemented techno-legal approaches in data governance, animated by its digital public infrastructure (“India Stack”) as well as the Data Empowerment and Protection Architecture, which proposed the concept of “consent manager” institutions that put individuals at the center of data access and control decision flows. 

With its framing, New Delhi is positioning itself as an arbiter of a very specific model of AI-driven growth, where governments are co-creators and not just buyers and regulators of AI. This is distinct from, for example, the United States’ techno-nationalist approach, which is driven by a handful of massive AI companies. New Delhi’s hybrid system prioritizes narrow, tailored government interventions in sectors that have the deepest scope for impact and inclusion, such as healthcare, agriculture, and education. The marquee initiative of the summit is the Global Impact Challenge, which encourages AI applications for climate, financial inclusion, health, urban infrastructure, agritech, and more.

In this vein, expect the launch of India’s sovereign foundation models, reportedly trained entirely on homegrown datasets and hosted on Indian cloud infrastructure. One such model is being built by BharatGen, a Department of Science and Technology initiative, supported by strategic collaborations with Indian research institutions and partners such as IBM.

The safety imperative

While large amounts of capital and political will are focused on one kind of AI race—capabilities and infrastructure—there is another race that must receive the same attention.

The International AI Safety Report, led by Canadian computer scientist Yoshua Bengio, launched just before the Paris AI Action Summit. The first update to the report was published in October, and among its findings is this alarming tidbit: “Some research shows that AI systems may be able to detect when they are in an evaluation setting and alter their behavior accordingly.” In other words, AI systems may know when they are being evaluated and may produce outputs tailored to the evaluation. This is a function of core AI behaviors such as goal preservation (maintaining core objectives) and self-preservation (not wanting to be shut down or replaced). If current AI models can deceive human evaluators, the danger is that more sophisticated, potentially harmful models may be able to slip past national AI safety testing regimes. 

“Safe and Trusted AI” is one of the chakras for the summit, but while the summit treats it as one distinct theme, AI safety should not be thought of as optional. Rather, it is essential to the achievement of the other chakras.

Notably absent so far from the agenda is the IndiaAI Safety Institute (IAISI). Launched in March 2024, the IAISI follows a virtual hub-and-spokes model, with different IAISI cells carrying out specific mandates. That said, there are likely to be some demonstrations of the thirteen AI safety projects that IndiaAI supports under its Safe & Trusted AI pillar. Among these is a unique contribution to a subfield of AI safety called “machine unlearning” by Indian Institute of Technology, Jodhpur. This approach involves making a machine learning system forget a piece of incorrect, corrupted, or harmful training data without fine-tuning or retraining the entire model. 

Despite the headline attention on impact, safety needs to be fundamental to India’s vision for AI that engenders trust, inclusion, and empowerment. Take a hypothetical AI use case for agricultural advisory. The intended goal of a system would be to empower smallholder farmers with predictive tools to help with crop management in areas such as pest control and crop choice based on expected weather patterns. AI systems trained for average accuracy would fail in outlier or extreme cases. The objective function (or goal) of such a system may be to minimize error, not to minimize harm under uncertain conditions. In other words, in the world of the smallholder farmer, a confidently wrong forecast could cause more serious, even catastrophic, harm than a tentatively right one.

The messaging from India about the AI Impact Summit is compelling: AI must be safe, empowering, and trustworthy. New Delhi appears to be taking a people-centered approach, emphasizing use cases that have the greatest scope for positive impact for the widest swath of the population. This approach will resonate with established, emerging, and aspiring AI powers alike. However, without embedding AI safety as a design principle, New Delhi risks repeating a familiar pattern: developing technologies that orbit policy ambitions but never fully land in people’s lived experience.


Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center.

The post Safety should be front and center in India’s vision for its AI Impact Summit appeared first on Atlantic Council.

]]>
Digging into the details of the US-Saudi deals https://www.atlanticcouncil.org/content-series/fastthinking/digging-into-the-details-of-the-us-saudi-deals/ Wed, 19 Nov 2025 18:14:53 +0000 https://www.atlanticcouncil.org/?p=889248 Our experts dive into the US-Saudi announcements that followed Saudi Crown Prince Mohammed bin Salman’s White House visit on Tuesday.

The post Digging into the details of the US-Saudi deals appeared first on Atlantic Council.

]]>

GET UP TO SPEED

“We’ve always been on the same side of every issue.” That’s how US President Donald Trump described Saudi Crown Prince Mohammed bin Salman (MBS) during a chummy Oval Office meeting on Tuesday, part of a day of pageantry and dealmaking at the White House. The United States and Saudi Arabia struck a series of agreements on defense, semiconductors, nuclear power, and more. While the world awaits the fine print of these deals, our experts took stock of what the leaders have announced so far and what to expect next. 

TODAY’S EXPERT REACTION BROUGHT TO YOU BY

  • Daniel B. Shapiro (@DanielBShapiro): Distinguished fellow at the Scowcroft Middle East Security Initiative and former deputy assistant secretary of defense for the Middle East and US ambassador to Israel
  • Tressa Guenov: Director for programs and operations and senior fellow at the Scowcroft Center for Strategy and Security, and former US principal deputy assistant secretary of defense for international security affairs 
  • Jennifer Gordon: Director of the Nuclear Energy Policy Initiative and the Daniel B. Poneman chair for nuclear energy policy at the Global Energy Center
  • Tess deBlanc-Knowles: Senior director with the Atlantic Council Technology Programs and former senior policy advisor on artificial intelligence at the White House Office of Science and Technology Policy

Jet setters

  • On defense, Trump approved the sale of fifth-generation F-35 fighter jets to Saudi Arabia, which Dan interprets as an indication that the US president “is going all-in on the US-Saudi relationship.” 
  • But “China remains an issue in the backdrop of US-Saudi defense relations,” Tressa tells us. She notes that US intelligence agencies have reportedly raised concerns about Chinese access to the F-35 if a US-Saudi sale were to proceed, and “similar efforts to sell F-35s to the UAE were not realized across the previous Trump and Biden administrations, in part due to concerns of technology transfer to China.” 
  • There’s also the US legal requirement to ensure Israel’s qualitative military edge (QME) in the region. Dan points out that although the 2020 F-35 deal with the United Arab Emirates was later scuttled, it did pass a QME review, and the Saudi deal is likely to do so as well, in part because “Israel will have been flying the F-35 for a decade and a half before the first Saudi plane is delivered, and Israel will have nearly seventy-five F-35s by then.” 
  • But the UAE deal was linked to its normalization of diplomatic relations with Israel, and “it appears there is no link to Saudi normalization” with Israel in this deal, Dan points out. In the Oval Office, MBS conditioned his joining the Abraham Accords on “a clear path” to a Palestinian state, which does signal a potential disparity from Saudi Arabia’s previous stance requiring the “establishment” of a Palestinian state.
  • The Biden administration held talks with Saudi Arabia about a treaty that “would have included restrictions on Saudi military cooperation with China and ensured access for US forces to Saudi territory when needed to defend the United States,” Dan tells us. But “Trump has not announced whether he is giving the Saudis a one-way security guarantee, or whether there are mutual-security commitments.” 
  • So what about Trump’s announcement during MBS’s visit that Saudi Arabia has become the United States’ twentieth Major Non-NATO Ally? Tressa tells us the designation “is a favorite tool of US presidents to cap off major visits with a symbolic flourish to indicate elevated relations.” But Saudi Arabia already enjoys many of the benefits of the designation, Tressa notes, such as privileged access to US arms sales, and the designation “does not provide any special or enforceable security guarantees, nor is it a binding treaty.” 

Sign up to receive rapid insight in your inbox from Atlantic Council experts on global events as they unfold.

Nuclear option

  • The White House also announced a Joint Declaration on the Completion of Negotiations on Civil Nuclear Energy Cooperation. Jennifer tells us it’s “likely a precursor to an official Section 123 agreement” on peaceful nuclear cooperation, which must also be reviewed by Congress. 
  • “Saudi Arabia has indicated keen interest for years in pursuing civil nuclear technologies,” Jennifer notes, both to add to its power grid and for water desalinization. If the United States provides that nuclear technology, she adds, then “it can exert influence on security matters and help prevent the development of nuclear weapons in Saudi Arabia and beyond.”  
  • “Although there had long been speculation that a civil nuclear agreement between the US and Saudi Arabia might cover broader geopolitical issues,” Jennifer adds, “this week’s announcement reflects a more pragmatic approach with a focus on technologies that have strong national security implications.” 

Chipping in

  • The two leaders also announced an AI Memorandum of Understanding but did not release many details. “Likely this means the approval of the sale of a package of advanced AI chips to Saudi Arabia,” Tess says. In the Oval Office, she points out, “MBS shared his vision (and strategic bet) on computing to compensate for the country’s workforce shortfalls and ensure continued economic growth.” 
  • While the Trump administration has lifted the Biden administration’s “AI Diffusion Rule” that limited the sale of chips to many countries, it still has the final say on exports of the most advanced chips to Saudi Arabia, Tess notes, “likely due to fears related to ties with China.” 
  • Now, Tess adds, US national security officials will keep their eyes on “the provisions of the new AI agreement focused on technology protection and what measures will be put in place to keep America’s most advanced AI chips out of reach of Chinese adversaries.” 

The post Digging into the details of the US-Saudi deals appeared first on Atlantic Council.

]]>
It’s time to reckon with the geopolitics of artificial intelligence https://www.atlanticcouncil.org/content-series/inflection-points/its-time-to-reckon-with-the-geopolitics-of-artificial-intelligence/ Tue, 11 Nov 2025 12:57:47 +0000 https://www.atlanticcouncil.org/?p=887414 The world has entered the most consequential tech race since the dawn of the nuclear age, but this time the weapons are algorithms instead of atoms.

The post It’s time to reckon with the geopolitics of artificial intelligence appeared first on Atlantic Council.

]]>
The headlines from Donald Trump’s recent meeting with Xi Jinping were all about the US and Chinese presidents reaching a trade truce. But what was lost in the news is a far more significant matter that will shape the high-stakes competition unfolding between the world’s two most significant powers: the contest for the commanding heights of artificial intelligence (AI).

The world has entered the most consequential tech race since the dawn of the nuclear age, but this time the weapons are algorithms instead of atoms. Rather than a race to obtain a single superweapon, this is one to determine how societies think, work, and make decisions. AI is transforming not only the distribution of power around the globe but also the very nature of that power and how it will be exercised.

A race with generational consequences

The Chinese government sees AI as a crucial driver for what it calls “comprehensive national power.” That’s why it is so focused on the rapid integration of AI into surveillance, consumer products and services, advanced manufacturing, military modernization, and even scientific discovery under a unified state strategy. As Tess deBlanc-Knowles, senior director with Atlantic Council Technology Programs, tells me, “One of the notable aspects of China’s approach is the prioritization of application, or what is called ‘AI-plus.’ China has an advantage over the US in terms of providing direction and incentives for the integration of AI across all sectors of the economy.”

When it comes to AI development and deployment, China’s private sector must be subservient to the will of the Communist Party. The cycle of innovation that results is distinct from Western conceptions of more loosely connected relationships among policymakers, industry, and academia. 

The United States, by contrast, relies much more heavily on the singular dynamism of its private sector, open research culture, and international alliances. The US government struggles to coordinate its private stakeholders and universities at any national scale. The country remains hamstrung by weakening legal protections for privacy and intellectual property that tend to introduce ambiguity rather than clear running lanes. 

And run the United States must. Failure to maintain US leadership on AI could have generational consequences. The outcome of this contest will determine which values—authoritarian efficiency or democratic dynamism—set global norms on everything from digital commerce to autonomous warfare.

“The escalating AI race is drawing comparisons with the Cold War, and the great scientific and technological clashes that characterized it,” write Josh Chin and Raffaele Huang in the Wall Street Journal today. “It is likely to be at least as consequential.” They write that both China and the United States “are driven as much by fear as by hope of progress.”

Helping the US and its allies mobilize, iterate, and deliver

There’s little doubt that who wins this race will depend on who can produce the most advanced chips, the best models, the most potent computers, and the cheapest and most sustainable energy for a proliferation of purposes. 

More significantly, the emerging AI contest is about defining the world’s future standards in areas such as freedom, privacy, and even human dignity. The design of the internet—its core protocols and standards—reflected a bias toward openness, self-organization, and free speech that have shaped two generations of lives online and trillions of dollars in consumer technology. This moment in the AI era offers the same pivotal opportunity for influence. If the United States and its allies lose this race, that could produce a world in which AI becomes more of an instrument for political and autocratic control than one for individual and democratic empowerment.

With so much at stake, the Atlantic Council last week launched its GeoTech Commission on Artificial Intelligence as our flagship initiative to address this historic moment. It will bring together congressional leaders, top industry executives, and innovators across the AI ecosystem to ensure that the United States maintains its technological preeminence in an AI-defined world. Our aim is to help the United States and its allies mobilize more stakeholders, iterate faster, and deliver actionable strategies to ensure US and allied leadership—and a more enlightened, prosperous, secure, and democratic future.

The GeoTech Commission, of which I’m a member, will focus on overall competitiveness across six critical realms: AI innovation, supply chains, energy sources, government adoption and oversight, talent development, and international alliances. Rather than prioritizing some of these realms over others, it will integrate these pieces to address what asserting US leadership and winning the AI race should look like. The race for AI doesn’t boil down to one single measure or factor. 

Los Alamos this isn’t

I began by writing that the current tech race is the most consequential for humanity since the beginning of the nuclear era. Some have gone further, drawing a direct comparison between the race for AI preeminence and the Manhattan Project that produced the first nuclear weapon. What’s true is that the AI race, like the Manhattan Project before it, will be decided to some extent by scientific breakthroughs. Both also share the potential for great good and catastrophic harm.

Yet this is also a misleading analogy. The Manhattan Project was a clandestine, centralized, US government-led sprint at a time of world war. The US government did have an important role in enabling the AI revolution through the development of technical foundations for deep learning and other advancements. But it has been private industry, not the government, that has leveraged and innovated to get to today’s capabilities. 

To win this race, governments know they must work effectively with private companies such as Anthropic, Google, Nvidia, Microsoft, and OpenAI in the United States and Alibaba, DJI, High-Flyer, and Huawei in China. Such companies wield budgets and global reach that would make most defense ministries blush.

‘China is going to win the AI race’

The American edge is in its democratic, free market, innovative ecosystem, which at its best is an unmatched magnet for talent and capital. Yet that ecosystem is also a vulnerability in that Washington can’t control or leverage its tech champions for any overriding national security purpose in the manner Beijing does routinely.

“China is going to win the AI race,” Nvidia CEO Jensen Huang told the Financial Times this past week, pointing to Beijing’s looser regulations, new energy subsidies, and direct intervention to assist its champions. Industry leaders worry that the Trump administration focuses more on restricting what US firms can sell to China than on energetically helping its companies win the race. “We need more optimism,” Huang said a week after Trump announced that he would stop China from gaining access to both Nvidia’s cutting-edge Blackwell chips and a less advanced chip designed explicitly for the Chinese market, and just a few days after the company reached an unprecedented market capitalization of five trillion dollars.

China’s system fuses state and private ambition in a manner that could be decisive, mobilizing government, private capital, and leading-edge science around common cause dictated by Xi and the Communist Party. The system intentionally aligns national goals with corporate incentives. While US companies focus on winning markets, competing with each other, and turning profits, Chinese companies that fail to serve the state and the party do so at their own peril. 

In the United States, by contrast, the messiness of the free market could prove an enduring strength in directing capital, talent, and attention to cutting-edge technologies. Winning the race to adopt AI will require newly integrated thinking across the development, use, and consequences of the technology, rather than a narrow focus on how to build more chips or run faster models.

The Atlantic Council’s GeoTech Commission on Artificial Intelligence will grapple with this integrated question and identify how best to counter China’s capacity to leverage its entire society toward technological ends. The Manhattan Project changed history with an explosion. The demonstrations of success won’t be as dramatic with AI, but they will affect every person on the globe. And the outcome may be just as far-reaching in determining what group of countries and which set of values determine the future.


Frederick Kempe is president and chief executive officer of the Atlantic Council. You can follow him on X @FredKempe.

This edition is part of Frederick Kempe’s Inflection Points newsletter, a column of dispatches from a world in transition. To receive this newsletter throughout the week, sign up here.

The GeoTech Commission on Artificial Intelligence

Enabling US and allied leadership in the age of AI

The post It’s time to reckon with the geopolitics of artificial intelligence appeared first on Atlantic Council.

]]>
Atlantic Council launches GeoTech Commission on Artificial Intelligence  https://www.atlanticcouncil.org/news/press-releases/atlantic-council-launches-geotech-commission-on-artificial-intelligence/ Wed, 05 Nov 2025 14:03:20 +0000 https://www.atlanticcouncil.org/?p=885781 WASHINGTON, DC – NOVEMBER 5, 2025 – The Atlantic Council today announced the launch of the GeoTech Commission on Artificial Intelligence, the Council’s flagship initiative to shape the global AI agenda. The Commission brings together bipartisan Congressional leaders, top industry executives, and innovators across the AI ecosystem to ensure the United States maintain its leadership […]

The post Atlantic Council launches GeoTech Commission on Artificial Intelligence  appeared first on Atlantic Council.

]]>

WASHINGTON, DC – NOVEMBER 5, 2025 – The Atlantic Council today announced the launch of the GeoTech Commission on Artificial Intelligence, the Council’s flagship initiative to shape the global AI agenda. The Commission brings together bipartisan Congressional leaders, top industry executives, and innovators across the AI ecosystem to ensure the United States maintain its leadership in a world increasingly defined by AI. 

The world is experiencing a historic surge in AI development and deployment, defined by three trends: intensifying geopolitical competition, deepening interdependence among various actors, and accelerating technological disruption. “To meet this moment, the Commission’s mission is clear,” said Fred Kempe, President and CEO of the Atlantic Council. “We need to mobilize more stakeholders, iterate faster, and deliver actionable strategies to secure US leadership in an increasingly complex and interconnected global AI ecosystem.”  

The Commission will focus on overall US competitiveness across six critical areas: innovation, supply chains, energy, government adoption and oversight, talent development, and international alliances. It will convene regularly, host high-profile public forums, and serve as a trusted platform for leaders to craft practical solutions that strengthen US and allied positions in the global AI race.  

“Artificial intelligence is reshaping every dimension of US competitiveness—from defense readiness and national security to economic strength. American leadership in AI requires bold, coordinated action across government, industry, and our allies,” said Ron Ash, co-chair of the GeoTech Commission. “That is why I’m honored to co-chair the GeoTech Commission on AI, collaborating with leaders from established technology powerhouses and emerging innovators working to advance actionable strategies that ensure our AI future is trusted, resilient, and at the forefront of the global AI revolution.”   

The new Commission builds on the Atlantic Council’s pioneering work on the geopolitics of artificial intelligence. It is the second iteration of the GeoTech Commission; the first ran from 2021 to 2023 with a focus on US technology leadership, supply chain resilience, and global health security. 

“We can realize the full benefits of AI only when we forge new alliances—across borders, industries, and sectors—that are nimble, transparent, and grounded in our shared democratic values,” said Kemba Walden, co-chair of the GeoTech Commission. “I’m grateful that the Atlantic Council’s GeoTech Commission offers a platform to foster these intimately collaborative relationships that are the cornerstone of U.S. innovation.”  

Led by the Atlantic Council’s Technology Programs (ACTech), the Commission will combine cutting-edge technical research with deep geopolitical expertise. It will deliver evidence-based insights, convene key stakeholders, and drive actionable strategies to shape the future of AI governance and innovation as drivers of the American economy of the future.  

Hear from the Commission’s Honorary Congressional Co-Chairs: 

“The global AI revolution is already rewriting how nations compete, how economies function, and how people work. Our job with the GeoTech Commission is simple: bring scientists, innovators, and policymakers together to actively shape an AI future that serves humanity,” said Senator John Hickenlooper, Honorary Congressional Co-Chair of the GeoTech Commission. 

“I am excited to be named as an Honorary Co-Chair of the GeoTech Commission on Artificial Intelligence and look forward to the group’s efforts to ensure the U.S. leads in this critical technology. American leadership in AI innovation, development, and deployment is essential to our economic and national security and ultimately ensuring we beat China in the global technological race,” said Senator Todd Young, Honorary Congressional Co-Chair of the GeoTech Commission. 

“The United States is a global leader on artificial intelligence, and this Commission is designed to inform the policies required to lead into the future. If we are to set the long-term direction for AI that reflects our core American values, then we must have a seat at the international table,” said Rep. Suzan DelBene, Honorary Congressional Co-Chair of the GeoTech Commission. 

“Artificial intelligence is reshaping our economy and the global balance of power. The United States must continue to lead by advancing innovation that reflects our values and strengthens our competitiveness,” said Rep. Jay Obernolte, Honorary Congressional Co-Chair of the GeoTech Commission. 

Honorary Congressional Co-Chairs:  

  • Sen. John Hickenlooper (D-CO), US Senate
  • Sen. Todd Young (R-IN), US Senate
  • Rep. Suzan DelBene (D-WA, 1), US House of Representatives
  • Rep. Jay Obernolte (R-CA, 23), US House of Representatives

Commission Leadership: 
Commission Co-Chairs:

  • Ron Ash, CEO, Accenture Federal Services
  • Kemba Walden, President, Paladin Global Institute; Board Member, Atlantic Council

Commissioners: 

  • Frederick Kempe, President, Atlantic Council
  • Sridhar Ramaswamy, CEO, Snowflake
  • Dave Levy, Vice President for Worldwide Public Sector, Amazon Web Services
  • Thomas Zacharia, Senior Vice President of Strategic Technology Partnerships and Public Policy, AMD
  • Ned Finkle, Vice President of External Affairs, NVIDIA
  • Sarah Heck, Head of External Affairs, Anthropic
  • Nabida Syed, Executive Director, Mozilla Foundation
  • Don Vieira, Partner & Chair of the Tech Policy Practice, Skadden, Arps, Slate, Meagher & Flom LLP and Affiliates
  • Brie Sachse, Senior Vice President & Head of U.S. Government Affairs, Siemens USA
  • John Goodman, Board Member, Atlantic Council
  • Rachel Gillum, Vice President, Ethical and Humane Use of Technology, Salesforce
  • Tyson Lamoreaux, Senior Vice President of Cloud/AI, Arista Networks
  • Nathan Jokel, Senior Vice President, Corporate Strategy & Alliances, Cisco
  • Markham Erickson, Vice President, Government Affairs & Public Policy Centers of Excellence, Google
  • Amanda Craig Deckard, General Manager, Office of Responsible AI, Microsoft
  • Matthew Graviss, Chief Technology Officer for Public Sector, Atlassian
  • Chris Massey, Founder, The Brds Nst; Senior Fellow, Foundation for American Innovation
  • Molly Montgomery, Director of Public Policy, Meta

Executive Director 

  • Graham Brookie, Vice President, Technology Programs, Atlantic Council 

For media inquiries, please contact press@atlanticcouncil.org

The post Atlantic Council launches GeoTech Commission on Artificial Intelligence  appeared first on Atlantic Council.

]]>
Data centers aren’t grid villains—they’re allies https://www.atlanticcouncil.org/blogs/energysource/data-centers-arent-grid-villains-theyre-allies/ Wed, 22 Oct 2025 20:06:35 +0000 https://www.atlanticcouncil.org/?p=882547 Contrary to the perception that AI data centers are only adding strain to the US grid, the facilities are in a position to help address issues facing the electricity system.

The post Data centers aren’t grid villains—they’re allies appeared first on Atlantic Council.

]]>
As residential electricity rates tick up, artificial intelligence (AI) data centers are increasingly being painted as villains. In Virginia, home to the world’s largest concentration of data centers, the leading candidate for governor has argued the industry is not paying its “fair share” of electricity costs.

If this public perception hardens into conventional wisdom, data centers could find themselves in a losing battle with residential customers for a scarce resource. But contrary to this perception, data centers could be a major part of the solution to the problems of rising demand, insufficient generation, and inefficient demand management faced by electricity grids.

Welcome to the neighborhood

AI data centers are positioned to become core parts of regional grids as they increasingly rely on large co-located generation assets. The more AI data centers can avoid competing with residential ratepayers as innovation catches up with demand, the better. For example, numerous new data centers are planned in Texas that intend to supply all of their own electricity from co-located natural gas turbines. These generation assets will likely produce more power than their associated data centers consume, and could flex that supply to the grid, to the benefit of local consumers if they connect in the future. Data centers are also likely to be large-scale customers for clean energy technologies like small modular reactors (SMRs) and battery energy storage system (BESS) installations, providing market demand regardless of changing subsidy regimes. 

Data centers, however, do not necessarily need to provide 100 percent of their own power to mitigate stress on the grid. Those with partial, co-located backup power can seriously reduce systemwide demand spikes by flexing down their demand on the grid by relatively small amounts.

Facilitating co-located generation

That said, to avoid long interconnection queues and time-consuming regulatory requirements for connecting to the grid, some data centers will prefer to go it alone, remaining off the grid and powering themselves exclusively with co-located generation. New Hampshire has gone so far as to simply exempt power users from grid permitting requirements if they remain off grid and to investigate withdrawal from the regional grid Independent System Operator “ISO New England.” Ideally, they would connect to the grid in the future to provide additional generation capacity and demand flexibility, so clear interconnection requirements even in the absence of required permits would help facilitate connections in the future. Of course, even power plants not connected to a grid must meet safety and construction standards, but they are spared the grid standards needed to ensure the entire grid remains balanced and adequately supplied. 

Facilitating the rapid construction of new generation capacity by private companies has the added benefit of avoiding stranded asset risk to utilities. If demand fails to materialize and generation infrastructure is overbuilt, the private companies that built it will be on the hook rather than utilities and their ratepayers.

Time-of-use pricing

In addition to co-located generation, pricing mechanisms that reflect real-time electricity use offer another avenue for supporting grid stability. Time-of-use (TOU) pricing—charging different rates for electricity depending on demand—is particularly well suited for data centers that have the capacity to flex grid demand. Implementing TOU pricing for these data centers would encourage them not only to flex their demands on the grid, but also to shift that demand geographically. Building or leasing additional fiber capacity can be a cost- and time-effective alternative to laying additional transmission lines for data centers. A stronger price signal could encourage firms to use these fiber-optic cables to shift lower priority workloads to other data centers where electricity is cheaper.

AI could also make TOU pricing clear and simple for residential consumers to lower their bills. Residential TOU exists but is not widely implemented. It tends either to fail to incentivize consumers to shift their electricity use, or incentivize and create new demand peaks at the lowest-priced use times. AI could automate the system, smoothing the demand curve without forcing consumers to make complex calculations or shift all their affected use to a specific new time. It could allow residential consumers to determine how much of a trade-off between cost and convenience they are willing to accept and set their smart meter accordingly. A greater tolerance for reducing heating, air cooling, and other electricity use would result in lower bills. For example, a budget-minded consumer who set a preference to “lowest cost” would likely notice household temperature fluctuations as the system responded to real-time demand spikes. A less price-conscious consumer might allow only modest energy reductions, or none at all. Such a system could make TOU easy and intuitive to consumers while responding to real-time prices rather than average demand cycles.

TOU has already demonstrated the ability to shift demand from peak to off-peak times by several percentage points even when poorly implemented. In a state like New York, where peak demand reaches 34,000 megawatts (MW), shifting even 5 percent of peak demand to off-peak times would exceed the entire capacity of a brand new transmission line. The systemwide efficiencies gained from effective TOU pricing facilitated by AI could drive down peak electricity rates for both data centers and residential consumers. TOU pricing is especially effective at minimizing rates because it lowers the capacity clearing price paid and reserve margin of generation supply maintained by utilities to manage demand spikes.

Value-based pricing

Another systemwide reform that could help prevent a potential electricity consumer backlash against data centers is 24/7 value-based pricing. It would speed the addition of generation capacity to grids for use by both data centers and residential consumers. Pricing power by its value to the grid rather than its cost to produce makes it easier for grid operators to integrate new generation capacity. Solar and wind have a near-zero marginal cost of production, but integrating large volumes of intermittent power complicates grid balancing and threatens the commercial viability of much-needed dispatchable generators. 

Requiring intermittent producers to price in the cost of battery back-up would create a competition on total value rather than marginal cost of production. This would prevent intermittent generation from bankrupting dispatchable power plants without being able to replace their generation capacity when the wind doesn’t blow or the sun doesn’t shine. These dispatchable producers rely on selling power consistently, not just when solar and wind are inactive. Value-based pricing would encourage investment in clean technologies like SMRs, BESS, and geothermal while easing pressure on dispatchable power producers that are key to balancing the grid. 

Industry needs to drive reform

Both political parties have incentives to reform grid regulation, as it is needed to support the AI industry specifically and US industry generally, as well as to increase the use of clean power. Despite that, regulatory reform has consistently proved elusive. Large AI providers as well as data center operators are the only market players with enough clout to make reform happen.

Failure to push reforms through risks an electricity shortage and consumer backlash that deprives the AI industry of the energy that is essential for its growth. If the firms at the forefront of the AI revolution want to continue to innovate, they need to win friends and allies within their shared energy system.

Nate Mason is an energy advisor and government relations expert with twenty years of experience that includes service in the US Departments of State, Energy, and Commerce, as well as the US Embassies in Kyiv and Tripoli.

stay connected

Sign up for PowerPlay, the Atlantic Council’s bimonthly newsletter keeping you up to date on all facets of the energy transition

related content

our work

The Global Energy Center develops and promotes pragmatic and nonpartisan policy solutions designed to advance global energy security, enhance economic opportunity, and accelerate pathways to net-zero emissions.

The post Data centers aren’t grid villains—they’re allies appeared first on Atlantic Council.

]]>
For aging populations to benefit from advances in healthcare technology, countries must promote digital health literacy https://www.atlanticcouncil.org/blogs/geotech-cues/for-aging-populations-to-benefit-from-advances-in-healthcare-technology-countries-must-promote-digital-health-literacy/ Tue, 21 Oct 2025 18:58:26 +0000 https://www.atlanticcouncil.org/?p=882301 As world leaders gather for the World Social Summit in Doha, empowering older adults with digital and AI literacy emerges as a critical priority for advancing social inclusion, health equity, and global digital transformation.

The post For aging populations to benefit from advances in healthcare technology, countries must promote digital health literacy appeared first on Atlantic Council.

]]>
In November, leaders will gather for the Second World Summit for Social Development (World Social Summit) in Doha, Qatar. This forum provides an opportunity for governments, development officials, and healthcare leaders across the world to determine how to deploy artificial intelligence (AI) and digital technologies to promote societal inclusion and personal health and wellbeing.

Unfortunately, when it comes to human talent, AI or digital adoption action plans—be they national or multilateral—tend to focus on reskilling for younger populations. The importance of digital reskilling for older populations to empower their productivity, health, and social welfare should be a strategic priority, as well. Attention to this population segment is increasingly paramount considering that people aged sixty-five and older compose the fastest-growing demographic group in the world, especially in low-and-middle-income countries.

It is encouraging that the World Social Summit’s Doha Political Declaration, which will be officially adopted at the Summit, acknowledges the importance of digital and social inclusion encompassing older populations. But policymakers should also incorporate adequate training and trust frameworks for reskilling aging populations into their infrastructure development goals. Countries are making considerable investments to ramp up their digital infrastructure. If these efforts are not paired with a reskilling capacity, leaders risk excluding a growing older adult population from full societal and economic participation. How effectively the summit addresses this issue will help determine countries’ preparedness for major forthcoming technological and demographic shifts.

Digital literacy for older populations: A super-determinant of social development

For older adult populations to benefit from new applications of AI and digital technologies in healthcare, digital and AI literacy is essential. Research indicates that AI healthcare tools could potentially improve the detection and diagnosis of chronic diseases and help medical professionals make swifter clinical decisions.

While global life expectancy has increased over the decades, individual healthspan (or the number of years lived disease-free) has lagged behind life expectancy, with a gap of 9.6 years. A major driver of this gap is the pervasiveness of noncommunicable diseases (NCDs), including Alzheimer’s, dementia, cancer, and heart disease, which are most prevalent in populations older than fifty. The capacity of an individual to use technologies to manage their own health, referred to as digital health literacy, is characterized as a super-determinant of health. Digital health literacy may play a role in extending both life expectancy and healthspan related to NCDs management, particularly among older populations.

The triple barrier: Challenges to digital health adoption

Policymakers must grapple with three interlocking barriers that make it difficult to engage older populations with digital tools: insufficient infrastructure, low trust, and inadequate design.

Globally, the digital divide remains stark. In developed economies, 90 percent of people have internet access, while only 27 percent of those living in developing economies do. This gap is exacerbated for aging populations ages sixty and over, who are disproportionately offline compared to their more connected younger counterparts. Digital health literacy is a prerequisite for a population to benefit from AI-driven healthcare. The absence of this literacy can cause severe complications, including delayed diagnoses, poor adherence to treatment plans, and patient absenteeism.

Another barrier to the adoption of digital health services is trust. Older adults often view digital healthcare with skepticism, fearing data breaches, unclear terms of use, or inadequate quality control. In the United States, 60 percent of patients that consider using a health app decide not to over privacy concerns. Overcoming this lack of trust requires transparent communication, the reinforcement of safety protocols, and endorsements from trusted authorities.

Even with digital connectivity and training, digital health tools will not be widely adopted if they are poorly designed. User experience research shows that technical jargon, cognitive overload, impersonal interfaces, and mismatched engagement methods reduce uptake. By contrast, personalization—such as tailoring messages to a patient’s context and communication preferences—has been shown to significantly increase adherence to preventive behaviors.

Lessons from national initiatives to increase digital health literacy

Here are four approaches policymakers and civil society actors at the World Social Summit can look to when implementing the commitments to digital inclusion outlined in the Doha Political Declaration:

  • Promote the rollout of national digital health literacy programs for older adults. Such programs can help older adult populations access the benefits of digital health tools. India’s Understanding of Lifelong Learning for All in Society, a government-sponsored literacy program for citizens aged fifteen and above who missed the opportunity to attend school, is one example of such an initiative. Through virtual modules and volunteer support, citizens are trained in general skills, including digital and health literacy. This program could serve as a useful model for the creation of more targeted literacy programs focused on providing older adults with digital health literacy skills they may not be able to learn elsewhere.
  • Encourage local governments to tie digital skills training to digital infrastructure investments. This approach can help make the most of technology deployment by combining it with community-based engagement projects. With a bottom-up approach, in Kebbi State, Nigeria, the Medicaid Cancer Foundation-Patience Access to Cancer Care, began increasing awareness about the importance of early detection and prevention of cancer with community leaders and a peer-to-peer outreach model. Once this network of trust was established, the program was able to effectively strengthen patient management through the digitalization of follow-up care and the establishment of a State Cancer Registry to systematically track cases.
  • Promote national programs that train community leaders to be digital skills educators. Trusted community leaders can help overcome negative perceptions of digital health tools. In the United Kingdom, the National Health Service’s BP@Home program trained community health workers to empower patients with home blood pressure management. BP@Home has reached over 220,000 participants since 2020. Using a step-by-step approach with phone calls, leaflets, and a dedicated app, this model ensures that patients, especially older adults, not only have the technology but also the skills and confidence to manage their blood pressure.
  • Incorporate user perspectives in the elaboration of AI skilling policies. Building trust in AI technologies demands multisector collaboration with older adults as the end users. A transdisciplinary trust framework can help bridge these perspectives, linking scientific insights on ethics and reliability with the experiences and concerns of older populations. By embedding trust-building into digital health strategies, such frameworks can ensure that AI tools are not only technically sound but socially legitimate, culturally sensitive, and aligned with the values of their users. This approach is especially vital in lower- and middle-income countries, where skepticism, lower digital literacy rates, and infrastructural gaps intersect most acutely.

***

Policymakers at the World Social Summit should commit to skilling aging populations—from infrastructure investment to user design, from trust-building to training—to achieve sustainable and resilient social protection systems. The action plans of today will shape the health equity landscape of tomorrow. If leaders fail to act, the digital and health divides will grow. If they act decisively, advances in AI and digital health technologies could become powerful equalizers in global health for decades to come.


Vijeth Iyengar is a nonresident senior fellow at the Atlantic Council’s GeoTech Center. The views reflected in the article are the author’s views and do not necessarily reflect the views of his employer.

Zainab Shinkafi-Bagudu is a senior advisor at the Federal Ministry of Health Nigeria and president-elect of the Union for International Cancer Control.

Héctor Pourtalé is a global public health consultant and former executive director of Movement Health Foundation.

Frank Krueger is a professor at the School of Systems Biology, George Mason University and honorary professor at the University of Mannheim.

Further reading

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post For aging populations to benefit from advances in healthcare technology, countries must promote digital health literacy appeared first on Atlantic Council.

]]>
Ackerman defends human oversight of battlefield decisions on Open Debate https://www.atlanticcouncil.org/insight-impact/in-the-news/ackerman-defends-human-oversight-of-battlefield-decisions-on-open-debate/ Wed, 15 Oct 2025 20:13:12 +0000 https://www.atlanticcouncil.org/?p=881283 On October 3, Forward Defense nonresident senior fellow Elliot Ackerman was featured in an episode of Open Debate entitled "Wartime Kill Switch: Human or AI?" in which he defended human control over lethal battlefield decisions.  

The post Ackerman defends human oversight of battlefield decisions on Open Debate appeared first on Atlantic Council.

]]>

On October 3, Forward Defense nonresident senior fellow Elliot Ackerman was featured in an episode of Open Debate entitled “Wartime Kill Switch: Human or AI?” in which he defended human control over lethal battlefield decisions.  

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

The post Ackerman defends human oversight of battlefield decisions on Open Debate appeared first on Atlantic Council.

]]>
What drives the divide in transatlantic AI strategy? https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/what-drives-the-divide-in-transatlantic-ai-strategy/ Mon, 29 Sep 2025 04:00:00 +0000 https://www.atlanticcouncil.org/?p=876649 The US and EU share AI ambitions but diverge on regulation, risking a fractured Western front. Nowhere is this tension sharper than in financial services, where details matter most.

The post What drives the divide in transatlantic AI strategy? appeared first on Atlantic Council.

]]>
As both the United States and European Union unveiled their respective AI strategies this summer, a paradox emerges: despite sharing broadly similar objectives—boosting domestic AI capabilities, maintaining technological leadership, and managing AI risks—the two allies find themselves increasingly at odds over how to achieve these goals. The divergence reflects fundamental differences in regulatory philosophy, economic structure, and geopolitical positioning — all of which threaten to fragment what should be a unified Western approach to AI governance at a critical moment of competition with China.

The Donald Trump administration’s “Winning the Race: America’s AI Action Plan” outlines a vision of AI as a decisive frontier of global economic and security competition. The first pillar advocates for a deregulated, private-sector-led environment by reducing regulations, promoting open-source AI models, and fast-tracking AI deployment in industries such as healthcare, while tackling some questions about workforce transition. The second pillar addresses energy capacity by upgrading the electric grid, restoring domestic semiconductor manufacturing, building secure data centers, and establishing cybersecurity measures including incident response capabilities. The third pillar on international diplomacy and security, seeks to counter Beijing’s growing influence in international governance bodies and export the full stack of US AI to allies and partners. The plan also identifies financial services as both an opportunity and a vulnerability. AI is viewed as a driver of financial innovation and efficiency, but also as a channel for risks including misinformation, cyber fraud, and systemic instability.

The European Commission’s AI Continent Action Plan was unveiled in April 2025, and is part of a long series of reports and regulations undertaken by the EU to bolster its competitiveness in AI. It lays out a five-pronged plan to scale up computation models through new AI factories, innovation hubs, and pooled resources, improve access to and availability of high-quality data, accelerate application of AI through public services and industrial activities, enable the Draghi report’s ambition to “exceed the US in education” when it comes to training and retaining skilled talent, and further fortify the European single market for AI.

Both the approaches aim to buttress domestic adoption and application of AI—often through nudges from the state when it comes to exploring applications in public services, and encouragement for many kinds of commercial activities. China has come to a similar conclusion, with its continual emphasis on using local government action plans to diffuse AI into public service provisions, and all kinds of industrial activities through its “AI Plus” initiative. There are few references to China in the EU’s latest AI document, while Washington’s approach has both implicit and explicit connotations of a largely two-way race between itself and Beijing.

Approaches from the United States and the EU are both likely to face issues regarding capital and financing of these action plans. While US private-sector investments in AI are many-fold those in the EU and China, the scale and focus of spending make a big difference. In the United States, the Trump administration has put AI contracts front and center in its broader deregulation approach—recent quarters have seen dozens of venture capital rounds above $100 million, and large megadeals (one of about $40 billion in the first quarter of 2025 alone aside) are becoming more common. Major players like Microsoft have committed to $80 billion this year for AI-capable data centers, and overall US tech capital expenditure for AI and infrastructure is being projected in the hundreds of billions over the next few years.

Meanwhile, across the EU, fiscal rules constrain deficit and debt levels: member states are required to keep deficits below 3 percent of gross domestic product (GDP) (though some exceed this threshold) and debt below 60 percent. The EU’s budget amounts to about 1 percent of GDP, and key instruments such as the Recovery and Resilience Facility are set to expire in 2026—leaving a gap in large-scale funding. The EU is currently negotiating its next seven-year budget (2028–2034), which is expected to place strong emphasis on large-scale investments, including a proposed Competitiveness Fund. In China, while growth targets remain and fiscal policy is being kept “flexible,” debt burdens, weak investment returns in sectors such as property and manufacturing, and slowing external demand limit what Beijing can unilaterally spend without risking macroeconomic instability.

These differences mean that even when headline figures like “$500 billion investments” are floated, much of that tends to flow into private capital for infrastructure, cloud and chip production, startup rounds, and acquisitions. They are not distributed evenly or necessarily aimed at building strategic domestic capabilities. Europe and China risk being unable to match the pace of US capital expenditure, not only because of absolute capital constraints but because of institutional, regulatory, and macro-fiscal drags.

Challenges to US-EU alignment on AI

These structural spending imbalances are compounded by inconsistent US policy decisions that leave European partners scrambling to adapt. For example, Joe Biden administration’s AI diffusion rule in January 2025 left many countries in Europe with restrictions on importing advanced chips from the United States, and led to a call for maintaining a “secure transatlantic supply chain on AI technology and super computers, for the benefit of our companies and citizens on both sides of the Atlantic.” The Trump administration repealed this rule and, in its place, the EU committed to purchasing $40 billion of US-made chips as a part of its trade agreement with the United States.

This interaction lays bare the two tensions complicating the US-EU alignment on AI strategies. The first concerns the strategies’ time horizons and the enabling actions undertaken by each jurisdiction. The EU’s approach has been solidified with years of iterative public discussion amid the market transformation from AI—starting with the Draghi report, the AI Act and even Ursula von der Leyen’s European Commission presidency campaign. In contrast, the US AI strategy has seemed reactive and temperamental—shifting focuses between administrations on important issues such as risk and safety, open-source models, and export controls. Recent partnerships with the Gulf states and lifting of controls on NVIDIA’s H20 chips sale to China have also demonstrated a deal-making approach to AI, which is often at odds with the stated US strategy.

The EU has embraced binding rules such as the AI Act, in line with its broader tradition of digital regulation. By contrast, US administrations have favored light-touch, voluntary frameworks, and sectoral oversight rather than comprehensive law. This reflects a bipartisan reluctance to over-regulate the industry. This divergence in regulatory culture means that even when Washington and Brussels agree on broad goals, they often diverge on the instruments used to achieve them,

The second tension in the US and EU strategies concerns the EU’s own complicated motivations in the context of its present economic interdependence on the United States and China. This reliance is visible across the entire AI input stack. At the software level, European firms overwhelmingly depend on US-developed foundational models, cloud platforms, and AI tools provided by companies such as Microsoft, Google, and OpenAI, reflecting the absence of a globally competitive European alternative. In 2025, the United States produced about forty large foundation models, China around fifteen, and the EU only about three. At the infrastructure and cloud level, the “big three” US cloud hyperscalers are estimated to power about 70 percent of European digital services. At the hardware level, the EU remains structurally reliant on advanced semiconductors designed in the United States and fabricated in Asia, with Europe’s domestic semiconductor sector making up less than 10 percent of global production. Supply chains for critical minerals and legacy chips further reinforce exposure to Chinese producers, which control a significant share of upstream inputs and mid-tier manufacturing. Chinese companies dominate the refining of critical minerals such as rare earths and graphite, essential for chipmaking and AI datacenter equipment. They are also leading suppliers of mid-range GPUs, networking hardware, and AI server components, which European firms may increasingly source to diversify away from US vendors. Chinese technology companies, including Baidu and Alibaba, are also emerging players in foundation model training and deployment, reinforcing Europe’s reliance on external providers. These dependencies complicate the EU’s sovereignty ambitions and its ability to balance relations with the United States.

Recognizing these vulnerabilities, the EU launched initiatives to expand domestic capacity, raising about €20 billion to build “AI gigafactories.” These factories would be capable of hosting large-scale compute infrastructure, with the aim of catching up to the US and China. While these projects signal a commitment to reduce dependency, they remain long-term efforts. Even as Europe invests in its own infrastructure, there is still high exposure to non-EU supply chains for the critical inputs into AI. The European Central Bank noted that about half of Euro area manufacturers sourcing critical inputs from China report being exposed to supply chain risk.

These two tensions—uncertainty in US policy actions and the gap between the EU’s ambitions of sovereignty and its reliance on US and China for critical inputs—will continue to play out over the next few years.

The financial services sector and AI action plans

For financial services in particular, AI adoption is accelerating—banks now flag AI as core to transformation. JPMorgan reports hundreds of production use cases across fraud, marketing, and risk in its shareholder communications, while Bank of America’s “Erica” virtual assistant has logged more than 2 billion client interactions—evidence that AI is reshaping front-, middle-, and back-office processes from customer service to underwriting to treasury operations. This brings opportunities including cost and error reduction, real-time risk sensing, and new AI-enabled products like cash flow intelligence for corporate treasurers.

But financial services also represent one of the highest-risk sectors for AI adoption, given the direct societal impact of errors or bias in lending, risk modeling, or compliance monitoring. The AI Index 2025 shows that measurable gains remain modest, with most firms reporting less than 10-percent cost savings or revenue growth below 10 percent. AI adoption for financial services also lags in key areas. Many institutions remain in pilot phases, data quality and legacy infrastructure limit deployment, and regulatory uncertainty combined with talent shortages slows uptake in high-risk applications such as credit scoring and underwriting. Regulatory divergence sharpens these trade-offs: The United States leans on voluntary risk-management tooling (the National Institute of Standards and Technology – Artificial Intelligence Risk Management Framework) that gives firms latitude to innovate, whereas the EU’s binding AI Act and sectoral guidance from the European Securities and Markets Authority impose high-risk classifications and board-level accountability for AI in investment services—raising documentation, testing, and oversight burdens for cross-border finance.

Ultimately, the private sector and business in both jurisdictions need to adapt to these tensions and, in some cases, even begin to view them as productive in their journey of AI adoption and diffusion across various functions. What the AI action plans have done is provide a broad framework of AI strategy. But for financial services companies and the broader commercial sector, the devil is in the details and will require closing the transatlantic gap in the regulatory approach to AI. This seems more difficult than it would have a year ago.

About the authors

Ananya Kumar is the deputy director, Future of Money, at the GeoEconomics Center.

Alisha Chhangani is an assistant director at the GeoEconomics Center

Related content

Explore the program

At the intersection of economics, finance, and foreign policy, the GeoEconomics Center is a translation hub with the goal of helping shape a better global economic future.

The post What drives the divide in transatlantic AI strategy? appeared first on Atlantic Council.

]]>
Daniels weighs the consequences of the US-China AI Race on Network 20/20 https://www.atlanticcouncil.org/insight-impact/in-the-news/daniels-weighs-the-consequences-of-the-us-china-ai-race-on-network-20-20/ Thu, 25 Sep 2025 14:57:20 +0000 https://www.atlanticcouncil.org/?p=877151 On September 17, Forward Defense nonresident senior fellow Owen Daniels was featured on the Network 20/20 Virtual Briefing Series alongside Janet Egan and Sam Winter-Levy.

The post Daniels weighs the consequences of the US-China AI Race on Network 20/20 appeared first on Atlantic Council.

]]>

On September 17, Forward Defense nonresident senior fellow Owen Daniels was featured on the Network 20/20 Virtual Briefing Series alongside Janet Egan and Sam Winter-Levy. The panelists discussed the ramifications and strategic implications of the US-China AI race and China’s rapid progress in AI development.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

The post Daniels weighs the consequences of the US-China AI Race on Network 20/20 appeared first on Atlantic Council.

]]>
Global perspectives on AI and digital trust ahead of the Swiss e-ID referendum https://www.atlanticcouncil.org/blogs/geotech-cues/global-perspectives-on-ai-and-digital-trust-ahead-of-the-swiss-e-id-referendum/ Wed, 24 Sep 2025 18:19:43 +0000 https://www.atlanticcouncil.org/?p=876552 Both artificial intelligence and digital identity systems are increasingly shaping the future of how governments approach inclusion, equity, security, and interoperability in the digital age.

The post Global perspectives on AI and digital trust ahead of the Swiss e-ID referendum appeared first on Atlantic Council.

]]>

On September 28, Switzerland will vote in a national referendum on the introduction of a state-recognized electronic proof of identification, or e-ID. This referendum could fundamentally transform how Swiss residents access government services and engage with private sector platforms in an increasingly digital world. The new draft solution comes after the rejection of the e-ID Act in a March 2021 referendum, largely due to the control the Act would have given to the private sector. Under the proposed legislation, the federal government will be responsible for both issuing e-ID cards and operating the necessary technical systems, an approach designed to maximize privacy and data security.

Switzerland’s evolving approach to digital identity reflects a broader, global conversation about the intersection of technology, governance, and trust.

Earlier this summer, Switzerland was already at the center of global digital policy conversations. From July 8-10, Geneva hosted the AI for Good Global Summit, the United Nations’ flagship platform for leveraging artificial intelligence (AI) to address global challenges, organized by the International Telecommunications Union. The summit convened a diverse group of policymakers, researchers, industry leaders, and civil society to promote the development of AI standards, foster innovation, and maintaining robust safeguards for equity and inclusion.

While thousands gathered in Geneva, the Atlantic Council’s GeoTech Center hosted a more focused convening in Lausanne called “Bridging AI & digital policy: Global perspectives for a trustworthy future.” Held on July 10 at the Swiss security-printing company SICPA’s unlimitrust campus, the event brought together experts from government, industry, and academia for a half-day of dynamic discussions on AI and digital trust.

Shaping AI and digital trust

As technologies such as AI and digital identity systems shape the future of governance, security, and social services, ensuring their trustworthy development and equitable deployment is critical, particularly as nations weigh major policy decisions such as Switzerland’s upcoming e-ID vote. While much technological innovation is led by the Global North, the global majority, the world’s largest and most diverse population, holds the key to unlocking inclusive, ethical, and impactful digital solutions. This half-day event explored regulatory frameworks, innovations, and challenges for these technologies across both developed and emerging economies.

Philippe Amon, chairman and chief executive officer of SICPA and member of the Atlantic Council’s International Advisory Board, opened the event by highlighting the importance of AI today, stating that “AI is like oxygen.” His words set the tone for an afternoon of engaging and impactful dialogue on how AI and digital policy are reshaping trust, innovation, and global cooperation.

The first panel, “Swiss partnerships on AI: Innovating for a trusted future” was moderated by Graham Brookie, vice president of technology and strategy programs at the Atlantic Council. The panel examined how Switzerland is advancing digital trust and secure AI development. The panelists emphasized the importance of regional and global partnerships to advance the trusted, secure development and deployment of AI. They explored practical steps and the need for robust regulatory standards, sharing examples of Swiss initiatives and international partnerships that are driving innovation while remaining secure and trustworthy. One example included panelist Leila Delarive’s software development company, hoopit.ai, which was co-founded by Swiss and American partners with a shared focus on making knowledge more trusted, more secure, and more human. Speakers agreed that innovation must be tied to real-world outcomes, with one panelist, Jean-Christophe Makuch, head of digital research and innovation at SICPA, noting, “The question is not what are you doing, it’s what problem are you solving?”

In my capacity as an assistant director at the Atlantic Council’s GeoTech Center, I moderated the second panel, “Global digital ID landscape.” This panel examined current trends, barriers to adoption, and opportunities for a more inclusive and interoperable digital ID ecosystem, drawing from the GeoTech Center’s July report, “Exploring the global digital ID landscape.” Panelists discussed issues including public trust and interoperability challenges to gaps in digital access across emerging economies. The conversation also highlighted Switzerland’s upcoming referendum on the national e-ID, underscoring how the vote could establish a model for trust, privacy, and usability in digital identity systems. A major theme of the dialogue centered around usability of digital ID systems. Anantha Ayer, CEO of SwissSign said, “Why do we need a digital identity? I think if we answer that question and people see that, the adoption rate will go up.”

The final panel, “AI in the Global South,” was moderated by acting senior director and senior fellow at the Atlantic Council’s GeoTech Center, Raul Brens Jr.. The panelists discussed regional advancements, challenges, and opportunities for AI-driven development in emerging economies. Panelists highlighted the use of AI as a tool to enhance efficiency and emphasized the need for government and public-private collaboration. The panelists underscored that implementing AI in emerging economies will require capacity-building, robust data governance, and inclusive digital access. Kira Intrator, a principal at Civic Strategy Group, underscored the need for increased investment in AI development across the Global South, asking: “With the potential of AI, how can funders and donors think really creatively and really commit to making a difference?”

Reflections ahead of Switzerland’s e-ID vote

As the September 28 referendum approaches, the insights shared in Lausanne are increasingly important to consider. The conversations emphasized how both artificial intelligence and digital identity systems are increasingly shaping the future of how governments approach inclusion, equity, security, and interoperability in the digital age. From lessons learned in the Global South to evolving frameworks across Europe, it’s clear that collaboration between governments, industry, and civil society will be crucial to advancing these technologies effectively. Public trust is also at the forefront of shaping inclusive policies, and it will be essential to enhance transparency across the development and implementation processes. The future of AI and digital identity systems will be defined not just by how these technologies are used, but how securely and inclusively they are deployed.


Coley Felt is an assistant director at the Atlantic Council’s GeoTech Center.

Further reading

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Global perspectives on AI and digital trust ahead of the Swiss e-ID referendum appeared first on Atlantic Council.

]]>
How AI with ‘nurtured consciousness’ could transform warfare https://www.atlanticcouncil.org/blogs/new-atlanticist/how-ai-with-nurtured-consciousness-could-transform-warfare/ Thu, 18 Sep 2025 13:25:11 +0000 https://www.atlanticcouncil.org/?p=875136 New technologies have the potential to turn an information advantage into a conscious advantage, helping determine who has strategic dominance in the twenty-first century.

The post How AI with ‘nurtured consciousness’ could transform warfare appeared first on Atlantic Council.

]]>
The rise of large language models (LLMs) and multimodal foundation models has already begun to reshape the character of warfare. For evidence, look no further than the battlefields of Russia’s war on Ukraine. During “Operation Spiderweb” in June, for example, Ukrainian quadcopters switched to autonomous navigation assisted by artificial intelligence (AI) to strike multiple Russian airfields. After standard GPS and communication links were disabled by Russian jammers, built-in sensors and pre-programmed decision-making meant that “backup AI targeting” took over. The strike, Ukraine’s longest-range assault of the conflict to date, resulted in the destruction of billions of dollars’ worth of Russian aircraft.

But automation and data-processing speed—image identification, logistics, and pattern detection—are only one part of the story. An arguably more significant transformation is underway, toward synthetic cognition within AI systems.

Adversary simulation

The US Army’s Mad Scientist Initiative and NATO’s Strategic Foresight Analysis program have both identified AI-based adversary simulation as critical for preparing joint forces for contested decision environments. This involves mapping adversary biases, illuminating internal cognitive blind spots, and forecasting narrative-driven escalations. The idea is to promote what has been called “strategic empathy”—the disciplined effort to understand how adversaries perceive their interests, threats, and opportunities—and to reduce inadvertent escalation risks. 

Everyday AI chatbots such as GPTs are already spontaneously displaying the rudiments of theory of mind—that is, the ability to infer that others can hold beliefs different from one’s own. This capability has been demonstrated in LLMs through successful completion of false-belief tasks, such as recognizing that a person can search for an object where they mistakenly believe it to be, rather than where it actually is—a benchmark long associated with childhood cognitive development and a function regarded as unique to the species. In military contexts, if carefully constrained and validated, such capabilities are likely to soon allow for real-time simulation of adversarial logic, strategic ambiguity, and reputational calculus. 

The capacity to accurately interpret and anticipate adversaries’ behaviors and strategic intent may prove to be the ultimate determinant of cognitive overmatch, understood here as the demonstrable ability to emulate, predict, and outpace adversary decision cycles. In practice, this is measured in reduced decision time, greater accuracy in escalation forecasting, and validated against observed behavior in falsifiable scenario outcomes. In an era defined by the contest of perceptions, safely and successfully integrating synthetic cognition into defense capabilities may well prove decisive. As such, embedding cultural, historical, and ideological nuance into cognitive-emulative systems will be important to ensure strategic superiority for the United States. After all, China is already reportedly investing in culturally informed AI frameworks for military use. 

Taught versus nurtured consciousness

The crux of efforts to simulate adversarial reasoning emerges from a cognitive duality between taught consciousness and nurtured consciousness. This is not standard AI terminology, but a conceptual framework we have introduced to distinguish between two modes of reasoning. Taught consciousness refers to structured learning, facts, and procedural logic. Nurtured consciousness, by contrast, arises from culture, history, trauma, identity, and emotional reinforcement—the forces that shape how an actor interprets risk, legitimacy, and legacy.

To “think better,” AI must move beyond structured data alone; it must incorporate historical memory, cultural worldviews, symbolic interpretations, and ideological drivers of conflict. For example, a People’s Liberation Army (PLA) commander influenced by the 1979 Sino-Vietnam War may exhibit caution in mountainous terrain, a detail invisible to most automated models but accessible to LLMs trained on PLA memoirs, doctrine, and historiography.

As a recent report we both worked on details, military decisions are rarely made in isolation from personal or collective history. Strategy is often shaped by deep-seated narrative logic, encompassing national myths, identities, and ideology. Beyond procedural logic and battlefield geometry, war is fought through perception: how each actor experiences shame, fear, honor, legitimacy, and memory. These variables do not exist in intelligence, surveillance, and reconnaissance feeds or probability tables. They are present in the minds of adversaries, shaped by decades, if not centuries, of history, trauma, and political indoctrination. This is the cognitive substrate of strategic action, and it cannot be approximated through taught knowledge alone.

Consider the threat from jihadist groups such as the Islamic State of Iraq and al-Sham, or ISIS, and Boko Haram, which do not adhere to classical strategic logic; their behaviors are shaped by religious eschatology, historical grievances, and narrative theater. They use spectacular violence and ritualized fear to sustain their ideological appeal, often engaging in an epistemic war against perceived Western influence and employing brutality as part of the construction of identity. A purely data-driven model might focus on the number of fighters, frequency of attacks, or intercepted chatter while missing the symbolic logic animating those patterns. 

A system that incorporates cognitive elements layers in the importance of sacred geography, the modeling of theological escalation ladders where martyrdom is incentivized, and the role of online radicalization, where command structures are replaced by narrative contagion. Nurtured AI systems trained on religious texts, ideological manifestos, and martyr testimonials might be able to simulate the decision logic of these “nonrational” actors, providing predictive insights into, for example, when a symbolic event might trigger a suicide bombing, or when leadership decapitation may lead to fragmentation and the splintering toward more extreme offshoots.

Inhabiting the fog

Without nurtured consciousness, even the most advanced AI-driven systems risk failing to accurately interpret complex adversarial behaviors, symbolic intentions, and cultural thresholds, thereby undermining strategic effectiveness.

While taught consciousness enables a model to replicate tactical planning or doctrinal norms, nurtured consciousness simulates how a decision maker understands risk, perceives adversaries, and weighs personal legacy against national mythology. This is what allows an AI system to reason like a human in a real-world context, rather than merely replicating surface-level behavior. Combined, taught and nurtured consciousness deepen strategic empathy. 

However, as AI systems with synthetic cognition begin to dynamically shape military operations, they will require accountability frameworks, multidisciplinary oversight, and governance protocols. Failure to establish clear guidelines risks strategic misalignment, ethical ambiguity, and unanticipated escalation, ultimately weakening their utility and credibility. Therefore, cognitive-emulative systems must remain auditable, strategically aligned with values, and guided by transparent governance structures involving regional experts and ethicists to ensure responsible deployment. Given rapid advances by the United States’ near-peer adversaries, Washington needs technical and doctrinal oversight of nurtured consciousness, as well as clearly defined international norms governing its use.

Prussian General Carl von Clausewitz observed that “war is the realm of uncertainty; three quarters of the factors on which action in war is based are wrapped in a fog of greater or lesser uncertainty.” Conscious-model AI does not dispel the fog, it inhabits it. It reasons, reacts, and remembers within it. This capability is what turns an information advantage into a conscious advantage, and it has the potential to set the standard for strategic dominance in the twenty-first century.


John James is a technologist, deep-tech investor, and founding partner of BOKA Capital Ltd, which has investments in military AI companies.

Alia Brahimi, PhD, is a nonresident senior fellow on the Atlantic Council Middle East Programs.

The post How AI with ‘nurtured consciousness’ could transform warfare appeared first on Atlantic Council.

]]>
Securing data in the AI supply chain   https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/securing-data-in-the-ai-supply-chain/ Fri, 05 Sep 2025 04:00:00 +0000 https://www.atlanticcouncil.org/?p=865321 To avoid lopsided AI policy, policymakers must see the data used and generated by AI as a chain, not a snapshot.

The post Securing data in the AI supply chain   appeared first on Atlantic Council.

]]>

Table of contents

Executive summary

Underpinning AI technologies is a complex supply chain—organizations, people, activities, information, and resources that enable AI research, development, deployment, and more. The AI supply chain includes human talent, compute, and institutional and individual stakeholders. This report focuses on another element of the AI supply chain: data. 

While a diversity of data types, structures, sources, and use cases exist in the AI supply chain, policymakers can easily fall into the trap of focusing on one AI data component at one moment (e.g., training data circa 2017), then switching focus to another AI data component next (e.g., model weights in current times), risking a lopsided policy that fails to take account of all the AI data components that are important for AI research and development (R&D). For example, overconfidence about which data element or attribute will most drive AI R&D can lead researchers and policymakers to skip past important, open questions (e.g., what factors might matter, in what combinations, and to what end), wrongly treating them as resolved. Put simply, a “one-size-fits-all” approach to AI-related data runs the risk of creating a regulatory, technological, or governance framework that overfocuses on one element of the data in the AI supply chain while leaving other critical parts and questions unaddressed. 

Managing the risks to the data components of the AI supply chain—from errors to data leakage to intentional model exploitation and theft—will require a set of different, tailored approaches aimed at achieving a comprehensive reduction in risk. As conceptualized in this report, the data in the AI supply chain includes the data describing an AI model’s properties and behavior, as well as the data associated with building and using a model. It also includes AI models themselves and the different digital systems that facilitate the movement of data into and out of models. The report, therefore, spells out a framework to visualize the seven data components in the AI supply chain: training data, testing data, models (themselves), model architectures, model weights, Application Programming Interfaces (APIs), and Software Development Kits (SDKs). 

It then uses the framework to map data components of the AI supply chain to three different ways that policymakers, technologists, and other stakeholders can potentially think about data risk: data at rest vs. in motion vs. in processing (focus on a data component within the supply chain and its current state); threat actor risk (focus on threat actors and risks to a data component within the supply chain); and supply chain due diligence and risk management (focus on a data component supplier or source within the supply chain and related actors). 

In doing so, it finds that many risks to AI-related data are risks to data writ large that existing best practices could mitigate. These include National Institute of Standards and Technology (NIST) and International Organization for Standardization (ISO) specified data access controls, continuous monitoring systems, and robust encryption; the risks at hand in these cases do not require reinventing the wheel. Simultaneously, this report also finds that some security risks to AI data components do not map well to existing security best practices that would adequately mitigate the risk or even apply at all. At least two stand out immediately: bad actors’ attempts to poison AI training data require data filtering mechanisms not well captured by existing measures, and which access controls or encryption would not appropriately mitigate; and emerging, malicious efforts to insert so-called neural backdoors into the behavior of neural networks require new security protections, too, beyond the realm of traditional IT data security. On top of implementing these two categories of mitigations, this report emphasizes that organizations can leverage “know your supplier” best practices to ensure all other entities in their AI supply chains have security best practices for both non-AI-specific and AI-specific data risks. 

This report concludes with three recommendations. 

  1. Developers, users, maintainers, governors, and securers of AI technologies should map the data components of the AI supply chain to existing cybersecurity best practices—and use that mapping to identify where existing best practices fall short for AI-specific risks to the data components of the AI supply chain. 
  2. Developers, users, maintainers, governors, and securers of AI technologies should “Know Your Supplier,” using the supply chain-focused approach to mitigate both AI-specific and non-AI-specific risks to the data components of the AI supply chain. 
  3. Policymakers should widen their lens on AI data to encompass all data components of the AI supply chain. This includes assessing whether sufficient attention is given to the diversity of data use cases that need protection (e.g., not just training data for chatbots but for transportation safety or drug discovery) and whether they have mapped existing security best practices to non-AI-specific and AI-specific risks. 

Introduction

Recent advances in computing power have catalyzed an explosion of artificial intelligence (AI) and machine learning (ML) research and development (R&D). While many of the mathematical and statistical techniques behind contemporary AI and ML models have been around for decades,1 these advancements in computing power have combined with larger datasets, energy sources, human labor, and other factors to bring AI and ML R&D to unforeseen heights. 

This phrase, “artificial intelligence,” is best understood not as a single, specific technology but as an umbrella term for a range of technologies and applications. Illustrating this point, companies, governments, academic institutions, civil society organizations, and individuals, among others, are designing, building, testing, and using AI and ML applications ranging from facial recognition systems in shopping malls and driving navigation systems in autonomous vehicles to chatbots in academic research environments to highly tailored applications in drug discovery, climate change modeling, and military operations.2 Despite wide variations in design and function, all these software applications, as such, characterize “AI.” Their variations capture the expansiveness of the “AI” term. They also underscore that research and policymaking on AI’s impacts—to labor, the environment, workforce productivity, economic growth, privacy, civil rights, national security, and so forth—must reference and differentiate between specific application areas, because they may greatly vary.

Underpinning AI technologies is a complex supply chain—organizations, people, activities, information, and resources enabling research, development, deployment, and more.3The AI supply chain includes human talent: the people around the world contributing to university and nonprofit research, building and iterating on commercial products, hacking systems to boost their security, applying deployed AI technologies in innovative ways, and so forth. It includes compute: the dynamic provisioning, protection, and management of hardware and software systems across shared infrastructure, in this case to power AI training, refinement, and so on—the subject of a forthcoming companion report from the Cyber Statecraft Initiative.4 It includes institutional and individual stakeholders, such as infrastructure providers, data providers, technology and service intermediaries, user-facing entities, and consumers.5 And the AI supply chain includes data components, which are the focus of this report. 

AI technologies are data-rich. That is, they both rely tremendously on data to function and produce large volumes of data as part of their operation. As explored in this report, this data richness entails a complex set of data elements in the AI supply chain that feed into, come out of, and underpin the research, development, deployment, use, maintenance, governance, and security of AI technologies. Corporate developers, researchers, and others building an AI application from the ground up may create an algorithm and run it on different kinds of “training data” before measuring its performance with “testing data.” For instance, in training an image recognition model to identify whether a photo contains a cat, the training data may be full of pictures of cats, dogs, airplanes, coffee machines, and cats sitting on coffee machines (i.e., “yes,” “no,” and more complex “yes” options), and the testing data might consist of similar pictures the model has never trained on, to test how well the function it learned generalizes to the new data. Individuals using AI chatbots or AI facial recognition models, to give another example, may upload data (e.g., questions, face images) into the system as part of using it, after which the system may provide data back to the individual (e.g., answers, names associated with faces) as well as output some metadata into a system log (e.g., performance metrics). These data components are just some of those present in the AI supply chain. 

Mapping and understanding this data in the AI supply chain matters greatly for companies, policymakers, and society to protect each data element against exploitation. Leaks, theft, exploitation, and adverse use of AI-related data could harm specific individuals or groups of people (e.g., extracting data from AI models to violate privacy); undermine specific national objectives like economic competitiveness (e.g., data theft to replicate proprietary applications) or national security (e.g., data theft to understand a model’s behavior, and thereby attack it); and create other issues ranging from market consolidation (e.g., single points of failure in the entity supplying key AI-related data) to undermining trust in critical technology areas (e.g., between patients and healthcare institutions). The US National Security Agency (NSA) recently wrote, “as organizations continue to increase their reliance on AI-driven outcomes, ensuring data security becomes increasingly crucial for maintaining accuracy, reliability, and integrity.”6

Conversely, each data element enables different aspects of AI research, development, deployment, use, maintenance, governance, and security—meaning developers, users, maintainers, governors, and securers of AI technologies should want to better safeguard them for positively framed reasons, too. Better protecting the data underpinning an expensive commercial AI advancement could enable the company to move faster without slowdowns due to leaks, breaches, and trade secret theft. Shielding training data related to individuals from inadvertent leaks and exposure could bolster public trust in responsibly executed AI deployments in healthcare or transportation. The list of benefits to mitigating leaks, theft, exploitation, and adverse use of AI-related data goes on for comparing specific data types and uses against relevant risk mitigations—optimizing the use of existing security best practices and identifying AI-specific gaps to fill. 

Without an effective framing for how to think about all the data in the AI supply chain, policymakers and others looking at AI and data security may overfocus on a single data component in the AI supply chain without accounting for all the others in the picture. They may also conflate related but distinct data components in the AI supply chain together, failing to account for differences in data type, structure, source, and use case that may create distinct risks and require different, tailored mitigations in response. 

Moreover, treating all AI-related data as part of a new, flashy set of AI technologies can perpetuate a sort of AI exceptionalism. This view suggests that AI technologies exist in isolation from cloud, telecommunications, and other systems—treating them as separate from, rather than interconnected with, other technologies that also matter for innovation, security, governance, and more. It can implicitly suggest the data is all new, raising fundamentally new questions and issues without good answers, instead of relating to data security discussions and best practices that have been around for decades, as well as a subset of data risks that demand AI-specific mitigations. None of these outcomes lend themselves to the most rigorous public policy, industry, research, and public discussions about AI R&D, security, and geopolitics. 

At a high level, the concept of data in the AI supply chain therefore enables analysts to map out points of concentration, resilience, and security vulnerability in AI systems and the overall AI ecosystem that might vary based on AI data type, structure, source, and use case. (For example, does cybersecurity-focused training data come from too few companies? How does the security of open-source health testing data compare to the security of health model parameters?) The concept of data components in the AI supply chain can help policymakers, developers, and those impacted by AI technologies understand the broader supply chain of parts underpinning a commercially successful, data-secure (or -insecure) AI system. And it can help governments, companies, civil society groups, journalists, and individuals to more precisely, systemically evaluate AI risk mitigation methods against the security risks of the coming years.7

A mapping and understanding of the data in the AI supply chain can also inform better policy. To be sure, no regulations of technology (or anything, for that matter) will treat the technology (or other thing) in question perfectly symmetrically across every country or jurisdiction in the world. But “AI regulations” based on highly inconsistent formulations of “AI-related data” can unintentionally increase friction. If a legislature writes rules for “AI data” when picturing only one type of training data, and another country’s legislature takes a more comprehensive view of all the data types and use cases in the AI supply chain, the widely varied approaches could make it harder to harmonize cross-border steps to curtail bad practices. The varied approaches could create cross-jurisdictional barriers to startup innovation that regulators never intended. And they could further confuse global discourse on governing “AI data.” These are potentially unintended effects that a better policy formulation on how to think about risks to and protections for AI-related data would avoid. 

This report lays out a conception of the data components of the AI supply chain, which the research then maps to existing data security and supply chain security best practices—highlighting existing measures that work well and identifying, in the process, security gaps for issues more unique to AI data types and use cases. The framework once again focuses just on data, rather than all elements of the AI supply chain (e.g., compute). For simplicity’s sake, it also excludes AI agents due to the complexity their permission-based, semi-autonomous functions introduce—instead, focusing on the wide range of non-agent models in use today. 

First, this report discusses how policymakers can run the risk of overfocusing on one data component at the expense of the entirety of data types in the AI supply chain, which can contribute at best to lopsided policy and at worst to tendencies that undermine US AI competitiveness and leave critical parts of the data in the AI supply chain inadequately secured. Second, it introduces a concept of data in the AI supply chain with seven components, each defined and exemplified below: training data, testing data, models (themselves), model architectures, model weights, Application Programming Interfaces (APIs), and Software Development Kits (SDKs). It additionally discusses the interactions between data components, their varied suppliers, and those suppliers’ sometimes shifting or multiple roles vis-à-vis the data in the AI supply chain. 

Finally, the report offers three different approaches to map data components in the AI supply chain to existing data security and supply chain security frameworks: data at rest vs. in motion vs. in processing (focus on a data component within the supply chain and its current state); threat actor risk (focus on threat actors and risks to a data component within the supply chain); and supply chain due diligence and risk management (focus on a data component supplier or source within the supply chain and related actors). These approaches can map concerns about training data theft, training data poisoning, API insecurity, and other data-related AI supply chain issues to established security controls and best practices from government agencies, standards bodies, cybersecurity literature, and areas like the financial sector and export control compliance. In doing so, it also begins to identify a few areas where existing security best practices may be insufficient for AI data risks—namely, confronting risks associated with the poisoning of data components in the AI supply chain and inserting neural “backdoors” into models through tampered training data or manipulation of model architectures. These risks, perhaps unique or relatively unique to AI models, require their own mitigations.   

The report concludes by making three recommendations: 

  1. Developers, users, maintainers, governors, and securers of AI technologies should map the data components of the AI supply chain to existing cybersecurity best practices—and use that mapping to identify where existing best practices fall short for AI-specific risks to the data components of the AI supply chain. In the former case, they should use the framework of data at rest vs. in motion vs. in processing and the framework of analyzing threat actor capabilities to pair encryption, access controls, offline storage, and other measures (e.g., NIST SP 800-53, ISO/IEC 27001:2022) against specific data components in the AI supply chain depending on each data component’s current state, the threat actor(s) pursuing it, and the traditional IT security controls the organization already has in place. In the latter case, developers, users, maintainers, governors, and securers of AI technologies should recognize how existing best practices will inadequately prevent the poisoning of AI training data and the insertion of behavioral backdoors into neural networks by manipulating a training dataset or a model architecture. They should instead look to emerging research on how to best evaluate training data to filter out poisoned data examples and how to robustly test network behavior and architectures to mitigate the risk of a bad actor inserting a neural backdoor, which they can activate after model deployment. And in both cases—of non-AI-specific and AI-specific risks to data—organizations can and should use the third listed approach of focusing on the data and supply chain itself to ensure their vendors, customers, and other partners are implementing the right controls to protect against risks of model weight theft, training data manipulation, neural network backdooring through model architecture manipulation, and everything in between, drawing on the two categories of mitigations they implement themselves. 
  2. Developers, users, maintainers, governors, and securers of AI technologies should “Know Your Supplier,” using the supply chain-focused approach to mitigate both AI-specific and non-AI-specific risks to the data components of the AI supply chain. Those sourcing data for AI systems—whether training data, APIs, SDKs, or any of the other data supply chain components—should implement best practices and due diligence measures to ensure they understand the entities sourcing or behind the sources of different components. For example, if a university website has a public repository of testing datasets for image recognition, language translation, or autonomous vehicle sensing, did the university internally develop those testing datasets, or is it hosting those testing datasets on behalf of third parties? Can third parties upload whatever data they want to the public university website? What are the downstream controls on which entities can add data to the university repository—data which companies and other universities then download and use as part of their AI supply chains? Much like a company should want to understand the origins of a piece of software before installing it on the network (e.g., is it open-source, provided by a company, if so which company in which country, etc.), an organization accessing testing data to measure an AI model or using any other data component of the AI supply chain should understand the underlying source within the supply chain. Best practices in know-your-customer due diligence, such as in the financial sector and export control space, and in the supply chain risk management space, such as from cybersecurity and insurance companies, can provide AI-dependent organizations with checklists and other tools to make this happen. Avoiding entities potentially subject to adversarial foreign nation-state influence, data suppliers not sufficiently vetting the data they upload, and so forth will help developers, users, maintainers, governors, and securers of AI technologies to bring established security controls to the data in the AI supply chain itself. In the case of both non-AI-specific and AI-specific risks to data, organizations can and should use this supply chain due diligence approach to ensure their vendors, customers, and other partners are implementing the right controls to protect against risks of model weight theft, training data manipulation, neural network backdooring through model architecture manipulation, and everything in between—drawing on the two categories of mitigations implemented as part of the first recommendation. 
  3. Policymakers should widen their lens on AI data to encompass all data components of the AI supply chain. This includes assessing whether sufficient attention is given to the diversity of data use cases that need protection (e.g., not just training data for chatbots but for transportation safety or drug discovery) and whether they have mapped existing security best practices to non-AI-specific and AI-specific risks. As multiple successive US administrations explore how they want to approach the R&D and governance of AI technologies, data continues to be a persistent focus of discussion. It comes up in everything from copyright litigation to national security strategy debates. The United States’ previous policy focus on training data quantity, and little else, has already prompted policymakers to avoid discussing comprehensive data privacy and security measures, which now—in light of Chinese AI advancements and concern about AI model weight dissemination—are suddenly more relevant. To avoid these cycles in the future, where policy overfocuses on one AI data element when in fact many are relevant simultaneously, policymakers should take a comprehensive view of the data components of the AI supply chain. The framework offered in this paper, spanning seven data components, is one potential guide—though again, policymakers need not stick to necessarily one framework. What is most critical to avoid is developing data security policies that protect some data components of the AI supply chain (e.g., training data) while leaving others highly exposed (e.g., APIs). An expanded view of the different data components, the components’ interaction, and the often multiple and shifting roles of suppliers should help inform better federal legislation, regulation, policy, and strategy—as well as engagements with other countries and US states. Right now, organizations such as the Congressional commerce committees, the Commerce Department (including because it implements export controls and the Information and Communications Services and Technologies supply chain program), the Defense Department (with all its current AI procurement), and the Federal Trade Commission (with responsibility for enforcing against unfair and deceptive business practices) should stress-test their assumptions about how to best protect AI data, and whether existing best practices achieve desired security outcomes, against this data component framework. This requires asking at least two questions. Do their existing security, governance, or regulatory approaches—e.g., in the security requirements used in Defense Department AI procurement, in how the Federal Trade Commission thinks about enforcing best practices for AI data security—apply well to a diversity of data use cases that need protection, such as with testing datasets for self-driving vehicle safety or training datasets for cutting-edge drug discovery? List out the use cases beyond chatbots that are not top of mind but are highly relevant from a security perspective, from defense to shipping and logistics to healthcare. And second, are they parsing out which risks they have concerns about, vis-à-vis AI-related data, that are specific to AI versus risks to data in general? For both categories, consider how the framework and some of the security mitigations cited in this report—for example, the NIST guidance, ISO practices, and new research on detecting neural backdoors, etc.—can serve as best practices to improve outcomes. 

Moving balls, swinging pendulums 

Policymakers, researchers, and private-sector firms alike are in constant debate about what kinds of data, data analysis, and data characteristics (such as quantity or diversity) will lead to major breakthroughs in AI research and development. These debates span geopolitical and national security issues—like fights over whether a country’s population and data collection reach may lend a strategic military advantage—and economic and social ones—like conversations about how best to maximize AI for medicinal innovations or minimize AI risks to worker privacy. Debates about AI and data implicate pressing and often broader issues such as tech innovation, responsible technology governance, cybersecurity, antitrust, and nation-state competition, too. 

But the past few years alone have illustrated how simplistic this AI and data debate can become—and how quickly, and perhaps arbitrarily, the metaphorical ball can move. About seven or eight years ago, it became somewhat of a prevailing view in Washington, DC that “data is the new oil” and that the volume of data to which a country had access would determine its AI might.8 Compelling perhaps because of its simplicity (data quantity is the key) and certainty (about the link between data quantity and AI leadership—and AI leadership and superpower status), the narrative quickly took hold, pushed by senior government officials and large Silicon Valley corporations alike.9 Policy, industry, and media discourse focused highly on one element of the links between data, broadly defined, and AI R&D: training data quantity.

Now, though, the ball has moved. Policymakers talk far less about training data (even though it is still important) and much more about model weights—numerical parameters that specify how the model represents connections to leverage between pieces of data to achieve the desired output. (They also, rightfully, spent much time discussing compute, but that is once again outside the scope of this report). Discussions about new export controls and the Biden administration’s last-minute,10 multi-tiered framework for (on paper) limiting “AI diffusion” are chief among the recent policy efforts focused on this slice of AI R&D,11 as are some of the Trump administration’s efforts to deregulate AI with the stated objective of boosting AI development. (Lots of discussion also focuses, of late, on compute, which is beyond the scope of this particular report.) The heavy focus on model weights has hit industry stock prices and valuations as well. When Chinese firm DeepSeek released a new model that it claimed beat ChatGPT’s performance, US AI firms lost about $1 trillion in valuation in 24 hours—amid the worry that other (Chinese) firms might easily replicate DeepSeek’s use of open-source model weights.12Training data is still important for AI R&D—for instance, in how valuable curating the right, often proprietary datasets is for building AI drug discovery models13—but policy focus and debate have shifted greatly to legal, technical, innovation, tech governance, and national security issues surrounding model weights.14

At the height of the previous focus on AI training data, some scholars and analysts, certainly, pushed back against a myopic focus on one part of data and AI R&D. Matt Sheehan wrote a piece for the Macro Polo think tank in July 2019 arguing that strategic AI development depends not just on data quantity but on data depth (aspects of behavior or events captured), quality (accuracy, structure, storage), diversity (heterogeneity of users or events), and access (availability of data to relevant actors).15 The industry-aligned Center for Data Innovation published an article in January 2018 critiquing the flaws of making that economic comparison from an innovation standpoint.16 Since that overfocus on AI training data, others have made points about the need for a broader view of AI’s competitive data factors, too. For instance, Claudia Wilson and Emmie Hine recently cautioned against export controls on open-source models (and elements like model weights), which could trigger an “unfettered” AI “race”17—while scholars like Kenton Thibaut point out the drawbacks of hyper-fixating on a single, “silver bullet” for AI leadership in general.18

Still, many DC think tank roundtables and policy discussions center on model weights. This is not to say there is no reason for concern about how to best secure model weights, including the model weights of US AI companies, against theft by Chinese actors.19 Trade secret theft is clearly a concern for US companies, as it is for the US government. Again, though, training data—and the importance of a range of types of training data (e.g., beyond just LLM training data to include training data used to power novel AI drug discovery models, etc.)—has taken somewhat of a backseat in recent policy conversations compared to model weights as well as other AI supply chain components beyond scope, notably compute.

However, the pendulum swing from focusing on training data quantity to focusing on model weights illustrates a few prevailing problems in policy and industry debates about AI and data. 

  • Overconfidence about which data element or attribute will most drive AI R&D can lead researchers and policymakers to skip past important, open questions (e.g., what factors might matter? in what combinations? to what end?), wrongly treating them as resolved. 
  • Oversimplified views of how data flows into and out of, constitutes, and powers AI models can lead policymakers to discuss “AI data” as one bucket of data to compete on, govern, and secure, rather than as many data types with different contexts.  
  • Over-fixation on a single, AI-related data component can guide policymakers and practitioners to treat the data component as a new, flashy “AI” phenomenon—overlooking existing security and risk mitigation frameworks and best practices, which many organizations may still not have implemented in the first place. 

A continued challenge for plenty of US policymakers and industry leaders is taking, and having the intellectual and economic space for, a more comprehensive assessment of the different kinds of data and data components that enable AI R&D.20 Focusing largely on training data quantity one moment and model weights the next can contribute to piecemeal, sometimes lopsided policy and often unfounded analytical assumptions. Take the example of protecting AI training data. When policymakers overfocused on AI training data and the idea that training data quantity matters most, many policy papers,21 alongside much industry lobbying,22 advocated for weak privacy laws so the United States could “beat” China—a country which, in this framing, has zero privacy restrictions or data limits whatsoever.23 Now that the conversation has shifted to model weights, however, much policy discourse has focused on how China’s restrictions on outbound data transfers lock down its technological advantages24—a sudden pivot in the conversation that now suggests the United States might benefit from some privacy laws in the first place.

This mall-moving, pendulum-swinging tendency can mean policymakers choose a single piece of the data in the AI supply chain to focus on myopically, which can cause policy to make more sudden lurches and miss opportunities to make longer-term investments in the security of all AI components. It can also cause policy narratives about AI to move in contradictory directions based on whichever slice of AI-related data is receiving the most attention at one given moment. When the policy focus centered on fueling training data quantity, some (as described above) talked about basic data privacy and security restrictions as harmful to technology development and the country. Yet these are precisely the kinds of policies that are helpful to protect against theft of and illicit access to model weights. 

To provide another framework for researchers, industry leaders, and especially policymakers to approach important data and AI debates—from the nature of the data components most likely to drive AI R&D, to the economic and national security risks of ungoverned access to AI-involved data—the next section lays out a data-focused concept to help widen the lens. 

Untangling the data in the “AI supply chain” 

The AI supply chain—organizations, people, activities, information, and resources enabling AI research, development, deployment, and more—is complex, shifting, and global. It involves several elements not covered in this report, such as human talent and compute, and it also includes the focus of this report: data. 

Just as “AI” is not a single technology but an umbrella term for a suite of technologies, the data relevant to AI research, development, deployment, use, maintenance, governance, and security is no single data type, source, or format, either. Instead, there are several data components in the AI supply chain (described below). These data components fit into the AI supply chain because researchers, developers, deployers, users, maintainers, governors, securers, and attackers of AI systems depend upon and get access to different kinds of data—transmitted, stored, and analyzed in different ways—to make it all happen. They also fit into the AI supply chain because a wide range of entities around the world—from individuals who publish self-labeled datasets to corporations that analyze AI model outputs—supply, access, and use the underlying data, too. This idea echoes the concept of AI as a value chain (referring to the business activities that deliver value to customers),25though focused specifically on data components. 

The data in the AI supply chain covers many data types, sources, and formats—all of which need secure safeguards to enable competition, boost public trust, and protect against the leaks, exploitation, and other risks delineated above. As conceptualized in this report, the data in the AI supply chain includes the data describing an AI model’s properties and behavior, as well as the data associated with building and using a model. It also includes the AI models themselves and the different systems that facilitate the movement of data into and out of models.26 Laid out in the visualized framework and tables below, this report conceptualizes seven parts of the data and core data systems in the AI supply chain: 

  • Training data 
  • Testing data 
  • Models (themselves) 
  • Model architectures
  • Model weights 
  • Application Programming Interfaces (APIs) 
  • Software Development Kits (SDKs) 

This paper draws some inspiration at a high level from the November 2024 paper by Qiang Hu and three other scholars on the large language model (LLM) supply chain, which envisioned a framework for thinking about the components and processes that go into LLMs.27 Nonetheless, this paper differs in focusing more on the data components themselves rather than the activities to produce them (like dataset processing); delineates data components by their properties and functional differences (such as distinguishing between training data and testing data); and looking at the data supply chain for AI technologies broadly (rather than just LLMs). This paper’s analysis also differs in that it focuses specifically on the need for security.  

Notably, the first five of these components are data per se, or the model itself. The last two, however—APIs and SDKs—are neither data nor models themselves; instead, they are code and software systems that enable data to pass into, extract from, and otherwise collect around AI models. For example, business users of an LLM may use an API to submit questions to the chatbot in batches; mobile consumers using an AI image recognition app may, whether they know it or not, depend on an SDK to take their snapshot of a bird and submit it to a cloud-hosted AI model, which then returns back through the SDK’s code the species of the bird in question. While the concept of data components of the AI supply chain does not list out every software system that could interact with AI data components, it includes APIs and SDKs because of their prevalence and their security relevance in delivering AI data to and from cloud systems and mobile devices; after all, virtually all the major AI commercial companies offer access to APIs to use their models (to include submitting queries to chatbots and uploading images to recognition models). 

Figure 1 lists each of the seven data components of the AI supply chain, defines them, and provides a few examples of the companies and other entities involved in that component or its sourcing. Again, this does not include several other elements of the AI supply chain (e.g., human talent, compute) and focuses mostly on traditional AI models (e.g., excludes AI agents). 

Figure 1: Data components of the AI supply chain

Many of the above seven data components of the AI supply chain can stand on their own. A computer system can house thousands of training data images in one folder and separately store hundreds of testing image examples in another folder; the files themselves, in a literal sense, are distinct. Similarly, a company can build and deploy an AI model with a paid API, through which users can query the model without making any SDKs available for developers to more easily integrate the model into software. Some websites make training data publicly and freely available without ever supplying model weights to their visitors, and some companies will provide elements of their data supply chains to purchasers for auditing, but typically not all of their underlying training data or every model weight. 

However, all seven data components have overlapping functional roles in the AI ecosystem. Academic researchers, government technologists, or startup developers looking to build a competitive healthcare image recognition model will need training data and testing data (including potentially rounds of training data, to fine-tune a model) to make it happen; without testing data, it is difficult to systemically evaluate a model’s performance so it can be tweaked, and without training data, there is no model to test and fine-tune. Companies that want to deploy their already built and tested models have many incentives to create both APIs and SDKs, so that different users working in different environments—whether a nontechnical lawyer looking to query a chatbot or a machine learning PhD looking to use the chatbot in their own app—can readily access the technology. 

The seven data components have overlapping suppliers, which are also geographically dispersed. Companies like Amazon Web Services, for example, store and make publicly available countless training datasets, including those from other parties.28 (AWS, in this specific example, also offers cloud services to government, companies, universities, civil society groups, and individuals to train, test, fine-tune, and deploy AI models.) Universities like Tsinghua University in China and the Indian Institute of Technology publish open-source AI models and the related data (e.g., training, testing data) as part of academic studies.29 Community-maintained websites like Kaggle, popular in the AI R&D community, host many kinds of training and testing data, and open-source platforms like GitHub host various datasets as well as models themselves, too. Simultaneously, these and many other suppliers of data components in the AI supply chain are consumers of the data components. Amazon uses training and testing data to build AI products and services; universities, such as Tsinghua and the Indian Institutes of Technology (IIT), publish study-linked datasets just as they may procure AI data components and related technologies (e.g., cloud services) to conduct the research in the first place.

And the data components in the AI supply chain themselves may interact with each other. (Again, this report does not include coverage of AI agents as explained above.) When a developer initially trains a model (using a training dataset) and then iterates on the model by testing it (using testing data) and fine-tuning it further (using more training data), the resulting model and the model weights are in part the byproduct of the training and testing datasets used. The model architecture selected before sourcing the training data will likewise influence what the model weights and resulting model ultimately look like—as well as the resulting model’s data inputs and outputs via an API or SDK. Similarly, when a company acquires a certain training dataset and uses it to train a model with specified parameters, it shapes the nature of which testing data and additional training data it will subsequently source from the supply chain. If the company wants a model to be completely open-source, for instance, then it will need to select or construct datasets of only open-source testing and training data from the data in the AI supply chain; if the company elects to go with Portuguese-language training data for building a voice-to-text AI chatbot, it will need Portuguese-language testing data, perhaps even sourced only from Brazil, to evaluate the initial model’s behavior. These are additional ways in which interactions between data components of the AI supply chain can impact data sourcing decisions.  

Even nation-states looking to secure their respective AI systems and potentially steal or compromise those in other countries may need to consider everything from safeguarding the models themselves (and all the data and weights within them) against exploitation to identifying sensitive testing datasets that need protection. These AI data components are distinct in the framework above. But their overlapping and interdependent roles in AI R&D make them collectively integral to understanding AI competitiveness and innovation—and how to ensure robust, effective governance across safety, security, privacy, trust, and much more. The concept of a supply chain, as in other areas like manufacturing, helps to drive analysis towards the interaction and interdependence of the various subcomponents and their suppliers. None truly can stand alone.  

Instead of the policy and analytic pendulum swinging from one area (like training data) to another (like model weights) with underappreciation for the broader landscape, this framework and the functional overlaps between components make clear that strategic competition and governance over AI and data cannot myopically focus on one element. Doing so leads to the analytic issues laid out in the last section and detracts from the complex, entangled nature of the data supply chain components that are relevant to AI research, development, deployment, use, maintenance, governance, and security. Policymakers only increase the likelihood of missing major opportunities and risks. Hence, with this foundation, the next section uses the framework of data in the AI supply chain above to zoom in on the security risks facing the data components in the AI supply chain—to illuminate what organizations and policymakers might do about them.

Parsing the risks—and pursuing better security

Policymakers, technologists, and others working on AI (e.g., on governance) can use the framework from the last section to map data components in the AI supply chain, in different states and contexts, to security controls and risk mitigations. This section describes and details how such a process would work across three different approaches. Using the framework to parse risks enables individuals and organizations to identify the best existing practices to leverage in protecting AI data components. In some cases, this may save organizations time and money if they already have the security controls and risk mitigations in place elsewhere—and even if organizations have not yet implemented the existing controls and mitigations for non-AI systems and data, they do not need to create the controls and mitigations from scratch. Related, the framework can also help individuals and organizations to identify gaps in existing best practices—and, as exemplified in the below discussion, think about how new security controls or risk mitigations could be developed and used to address AI-specific data risks. 

As alluded to earlier, better security across the data components of the AI supply chain can mitigate risks of breaches and data interception, shield data and resulting AI technologies from theft (including by competitors), enhance protections for individuals’ privacy, bolster public trust, limit organizations’ liability risk, and strengthen US national security. Lapses in security across the data components of the AI supply chain, however, can contribute to universal problems such as data breaches and interceptions, intellectual property theft, privacy leaks and violations, and undermined public trust in AI technologies—as well as US-specific issues, such as better enabling governments adversarial to the US to hack data or infiltrate US technology supply chains. A methodological approach to this risk mapping can help organizations mitigate risk and help policymakers develop more rigorous, tailored policies on AI and data security. 

Leveraging the last section’s framework, this section evaluates three different approaches to mapping the data components of the AI supply chain to security controls and risk mitigations. The first looks at the state of a data component of the AI supply chain: is the data at rest, in motion, or in processing? The second looks at the threat actors with an interest in the AI supply chain and its data components: what are the threats, vulnerabilities, and consequences? And the third looks at the interaction of data components of the AI supply chain and the suppliers: who are the suppliers, and what are their security controls—or risks? 

A recent paper on how to enhance third-party flaw disclosures for AI models argues that the AI sector has much to learn from software security.30 This section follows in similar spirit. Instead of reinventing the wheel, these three approaches to data security in the AI supply chain help map complex questions about data in the AI supply chain to existing data security and supply chain security best practices. Then, where existing security controls and risk mitigations are insufficient for AI-specific risks to data—at least two of which are spotlighted below—these three approaches can help illuminate where new, AI-specific mitigations are needed. Figure 2 summarizes the three approaches, ahead of the more detailed discussion that follows.

Figure 2: Three potential approaches to securing data in the AI supply chain 

Approach one: Understand the ‘state’ of data

First, five of the seven data components of the AI supply chain (excluding APIs and SDKs, as they are not data per se) can be in different data states at different times. Each of these may come with specific security risks, under three states, commonly described as: “data at rest” (e.g., model weights sitting on a server, though not in use), “data in motion” (e.g., training data downloading from a website to a local machine), and “data in processing” (e.g., testing data feeding into an initially trained model). Cybersecurity professionals, when building organizational policies, programs, and processes, often apply this framework—at rest vs. in motion vs. in processing—to understand risks to data and mitigate them. AI-related data at rest, for instance, can be siphoned from databases by a hacker or sit exposed on a public server with no password, ready for anyone to download, because it was not subject to proper encryption and protection. This could enable criminals to target people in the data with scams or sell the data on the dark web. AI-related data in motion, similarly, that is weakly encrypted or entirely unencrypted could be intercepted by a nation-state as it moves from a cloud system through an API, enabling intelligence-gathering or intellectual property theft. 

Each of these data states may require different kinds of encryption, different levels of access controls for employees, and so on. Perhaps one security team is responsible for protecting a stored training dataset, while a research team is the only one authorized to modify the training dataset; the same data thus requires different security measures, such as different kinds of encryption and access control rules, when stored compared to when undergoing modification. A state agency or company may choose to implement a certain kind of robust encryption on data at rest when access or modifications are unnecessary but leave it unencrypted while in processing, or only encrypt it in very specific ways that still enable computation (i.e., while in processing).31 Focusing security measures only on the data component in question (e.g., is it testing data or training data?) will fail to account for the ways a piece of data’s current state impacts the risks to the data in that moment and the security measures to apply to it.

This framework—at rest vs. in motion vs. in processing—is therefore an effective means of tying classes of risks to the data components of the AI supply chain to specific, existing risk mitigations. Rather than assuming that the data components of the AI supply chain need entirely different protections because they are “AI-related,” leveraging this framework contextualizes risks to the data components within broader risks to data, AI-related or not. For example, one of the National Institute of Standards and Technology’s many security best practices focuses on “Protection of Information at Rest.” The security control, known as SP 800-53: SC-28, delineates three components: cryptographic protection for “information on system components or media” as well as “data structures, including files, records, or fields”; offline storage to eliminate the risk of individuals gaining unauthorized data access through a network; and using hardware-protected storage, such as a Trusted Platform Module (TPM), to store and protect the cryptographic keys used to encrypt data. 32 Universities attempting to secure health training datasets on a department computer, companies looking to prevent hackers from stealing model weights sitting on a cloud server, or government agencies hoping to protect testing data from spies can all use these techniques to protect the data components of the AI supply chain while they are at rest.

Specific data components in motion and in processing, as captured in Figure 3, can likewise be mapped to specific NIST and other security best practices. NIST SP 800-53: SC-08, “Transmission Confidentiality and Integrity,” specifies cryptographic protection, pre- and post-transmission security measures, how to conceal or randomize communications, and other steps 33that a civil society group could take to secure AI model weights it sends to a federal funder agency; the agency’s Cybersecurity Framework: PR.DS-10 control focuses on the confidentiality, integrity, and availability of data in use (i.e., in processing) and has many related controls such as account management, access enforcement, monitoring for information disclosure, system backups, cryptographic protections, and process isolation,34 that a corporation could implement for all its independent contractors building an LLM.35 This approach enables entities to identify a data component in their AI supply chain, understand its state, and map that state to best practices from NIST, the Systems and Organization Controls (SOC) 2 framework,36 and other data security compliance guidelines.

These controls could vary not just based on the data state (at rest vs. in motion vs. in processing) but on the type of data, its source, and its context. For instance, companies interested in protecting larger model weights could turn to security measures intended for larger datasets, such as tight access controls, two-party authorization for data access, and endpoint software controls.37 Companies might use this framework to arrive at a stronger level of security controls for larger model weights, in all data states, than they would apply to smaller, less sensitive training datasets.

At the same time, this mapping demonstrates ways in which existing security controls and risk mitigations may not address all AI-related data risks. Take the poisoning of an AI model, where bad actors attempt to insert “bad” data into training data, such as data that could cause serious errors or vulnerabilities if used to train a specific AI model.38 If an organization scrapes training data from the internet (i.e., data in motion), imposing confidentiality and integrity controls on the scraped data would only catch modifications to the data after collecting it—not detect whether the data uploaded in the first place was poisoned from the start. If an organization is trying to ensure the security of that training data after scraping (i.e., data at rest), to give another example, encryption and access control measures could help to mitigate the risk of post-scrape tampering of data stored on the organization’s systems. These measures would again fail, however, to protect the organization from scraping data that was compromised from the outset. While this is an intentionally simplistic discussion of AI poisoning, it underscores that traditional IT security measures for protecting data at rest, in motion, and in processing may not fully mitigate all risks to data in the AI supply chain. In this case, policymakers and organizations can look to guidance from NIST that explains types of poisoning attacks and potential mitigations—such as differential privacy applied to datasets and data sanitization techniques to remove poisoned samples of data before using the dataset for AI model training.39

Figure 3: Approach one illustrated

Approach two: Assess threat actor profile

Second, different threat actors as well as unwitting individuals (e.g., employees deceived by a phishing note, users making weak API passwords, etc.) can take actions that undermine the security of data components of the AI supply chain—or the security of a specific data component. Instead of focusing on the state of data components at risk, the developers, users, maintainers, governors, and securers of AI technologies can focus on the threat actors and scenarios themselves. Established threat actor risk frameworks can enable those entities and individuals to identify risks to the data in the AI supply chain, map an adversary’s capabilities against known mitigations, and prioritize the security measures that are the most urgent. These mitigations can be specific not just to a data component’s current state, but to any threat actor in question. 

Having a threat actor-driven risk approach is essential for companies, universities, nonprofits, government agencies, and other organizations and groups involved with developing, using, maintaining, governing, and securing AI technologies and data. Focusing on technical mitigations, such as encrypting data at risk, can help organizations prioritize their biggest technological or process vulnerabilities internally, but they do little to help the organization understand which threat actors have an interest in which of their datasets. Using the first approach described above can help an organization to shore up its own defenses, but knowing which actors are the biggest threat to an organization—and what capabilities they bring to bear—might shift which security controls and risk mitigations are the biggest priority; threat actors could focus on stealing unexpected datasets, for instance, or have far better ability to poison training data than a university or corporate research lab might appreciate. While it is again not the only approach, centering threat actors and their capabilities is another lens through which to approach securing the data components of the AI supply chain. 

Among many other risk assessment frameworks in the world, the US government often uses the framework of risk as a function of threat, vulnerability, and consequence.40 Threats are composed of an adversary’s intentions and capabilities.41 Vulnerabilities are weaknesses inherent to a system (e.g., due to poor coding practices, interactions between components, or simply the inevitability of human error in a complex software system) or that have been introduced by an outside actor.42 And consequences, in this framework, are outcomes that are either fixable or “fatal”.43

Developers, users, maintainers, governors, and securers of AI technologies can use this approach to understand how different data components of the AI supply chain may be at risk. Because many of the threat actors targeting data components of the AI supply chain—from nation-states to cybercriminals—are often already on the radar of large and boutique cybersecurity firms, organizations can use existing threat data to render their assessments. From there, they can look to industry best practice guides such as International Organization for Standardization (ISO) controls and standards from organizations like NIST to mitigate risks most appropriately44—rather than taking a one-size-fits-all approach to a diversified threat and security landscape.

For example, a medium-sized research university might worry about an industrial cyber espionage firm targeting its large AI health training data repository. If the university knows that the group has strong financial motive (intent), that it is highly sophisticated at penetrating network edge devices, insecure routers, and mobile devices but does not have the ability to decrypt large datasets (capabilities), and that the university has far too many connected devices and routers to achieve adequate security (vulnerabilities), the university may avoid a high-impact theft of the data (consequence) by choosing to encrypt the data, store it offline whenever possible, and securely isolate the encryption keys. The robust encryption and the shift towards offline storage could minimize the likelihood that the firm is able to steal the data—and minimize the likelihood they could make use of the data even if they did manage to steal it. 

If a leading US AI startup, to give another example, worries about a Chinese military hacker stealing its image recognition model weights, it could also apply this threat actor risk framework. The startup might suspect that the task is to steal its technology (intent), know that the hacker is highly sophisticated at network penetration and decryption of data (capabilities), and feel it has locked down its user accounts with strong passwords and multifactor authentication, but that its wide vendor and contractor base introduces many points of entry into its technological supply chain (vulnerability). As the firm is pre-revenue, its executives are of the view that a Chinese competitor getting a copy of its proprietary model weights and beating it to market could put the company out of business (consequence). Well aware that standard cybersecurity measures may not be enough to stop the military hacker’s capabilities, the startup may choose to invest in even more advanced mechanisms—partitioning systems; storing numerous false copies of model weights that purport to be the real thing; moving training datasets and testing datasets already used into offline storage45—to ensure that its model weights are as well-protected as possible.

These are insights that would not be as obvious were the hypothetical medium-sized research university or the hypothetical US AI startup to focus only on the current state of the training data or model weights in question (i.e., at rest vs. in motion vs. in processing). Understanding the threat actor itself was necessary to identify the most appropriate mitigations based on the varied capabilities brought to bear, data targets of interest to the adversary, and consequences of a security incident. Figure 4 lays out this approach. 

As with the prior section, some risks to data components of the AI supply chain are unlikely to be adequately addressed with existing security controls and risk mitigations. Poisoning of AI models may already require unique or relatively unique security requirements, such as filtering mechanisms to screen for poisoned data once a dataset is scraped from the internet or otherwise assembled. That is especially the case—applying this framework—when dealing with threat actors that are well-resourced, sophisticated, and persistent, such as the Chinese government. The amount of resources potentially put into attempting to poison specific datasets may require enhanced planning for the threat at hand and the consequences of the threat unfolding. 

Similarly, a sophisticated threat actor may have the capability and time to focus not just on poisoning a training dataset broadly (as discussed in the first approach subsection), but on creating what some call a neural backdoor: tampering with training data to embed a vulnerability in a deep neural network, so that the trained model does not behave erroneously or harmfully in response to standard events, but hides its learned, malicious behavior until it encounters a highly specific trigger.46 Ongoing research looks at how to tailor protections to training data under very specific assumptions about the bad actor’s approach;47 hence, a threat actor framework may provide more useful information to an organization attempting to defend against sophisticated attempts at neural backdoors. (Like with defending against poisoning attempts, encryption measures or access controls on training data at rest, in motion, or in processing are not going to mitigate these kinds of highly specific risk scenarios.) Still, more research is needed to understand generalized defenses against attempts to backdoor neural networks—including in areas that get relatively less attention than others (i.e., images getting more attention than video).48Guo, Tondi, and Barni, “An Overview of Backdoor Attacks, 20.49 More advanced mechanisms to filter training data are one promising set of approaches to address this type of risk.50

Recent work shows that bad actors can tamper not just with training data in the AI supply chain, to create de facto behavioral backdoors in neural networks, but can do so by manipulating model architectures in the AI supply chain as well.51 Again, taking a threat actor-focused view of the risks enables policymakers and organizational security experts to game out the risk scenarios—and how exactly attempts at architectural backdoors might unfold, offering insight hints on how to plan for and potentially prevent them in advance.

Figure 4: Approach two illustrated 

Approach three: Map suppliers to the supply chain

Third, the large number of suppliers of data in the AI supply chain means that security risks can come from suppliers themselves. These risks can take at least two forms: actors within the AI supply chain—such as groups of researchers uploading training data or finished models to websites—deliberately poisoning data or inserting malicious code to compromise others who use those components downstream; and actors within the AI supply chain having poor security practices that enable security risks to spill over to others. The latter of these risk categories could range from an AI service provider pushing API code to hundreds of customers, while not requiring employees to use multifactor authentication, to a volunteer research group unknowingly scraping inaccurate websites to build a flawed training dataset for a chatbot intended for security questions and login verification. Unlike focusing on the current state of a data component of the AI supply chain or the threat actor targeting a specific data component, this potential approach focuses on data security across an organization’s AI supply chain and relevant suppliers. 

Developers, users, maintainers, governors, and securers of AI technologies can draw on existing supply chain security best practices used in everything from export control regulations to compliance with the EU’s General Data Protection Regulation (GDPR). These broadly fall under the bucket of “Know Your Supplier.” First is knowing one’s suppliers to identify ownership, country of incorporation, and any other signals of potential illegality (e.g., a criminal front set up to scam others) or potential susceptibility to nation-state influence (e.g., ownership by a foreign government). Organizations such as government agencies, companies, and universities can look up the suppliers of their training and testing datasets, ensure the software engineers hired to build their APIs and SDKs are reputable, and check with AI developers about the sources of their training data. (As discussed below, the last of these may be practically difficult but is worth mentioning as a potential security practice.) This could help to identify risks such as unknowingly sourcing data or models from a Chinese military-linked university or hiring freelance data labelers previously involved with illicit hacking. 

Second, as compelled by vendor security requirements under GDPR,52 is ensuring an organization’s vendors and supply chain do not have weak security that voids the organization’s security efforts. This is likewise a concern for AI-dependent organizations that are typically dependent on a highly complex, often globalized, and frequently shifting data supply chain. AI developers, users, maintainers, governors, and securers can therefore implement contractual requirements for data security wherever possible. They can ensure vendors and customers receive adequate training on how to handle different data components that the organization might pass their way. And they can conduct or request independent audits from cloud companies deploying their AI models, data cleaning firms formatting their unstructured training data, and other entities in their supply chains. There are plentiful resources to draw on for such efforts. Financial sector know-your-customer guidelines,53 US government advisories on supply chain vetting,54 and cybersecurity and insurance readings on mitigating third-party cybersecurity risks,55 among others, can help give AI organizations tools to understand the data-involved actors in their AI supply chains and what risks might result. Such measures can help developers, users, and others to avoid relying on a cloud vendor whose security posture is wholly insufficient to match up against a threat actor the organization has worries about, or help them steer away from hiring API developers at a software support firm that has suffered repeated, simple security breaches. As some have suggested, this could include asking vendors and others in an organization’s supply chain for an AI bill of materials, or “AI-BOM” (modeled after a software bill of materials, or SBOM)56

Seriously complicating many of these efforts, of course, is just how much of the data in the AI supply chain either touches highly opaque corporate components—or depends on datasets uploaded to freely available websites with often unclear chains of custody and potentially obscured origins. Again, however, the solution is not to immediately treat these situations as unique without examining the applicability of existing best practices to specific risks. Some of the same could apply to vendors from which companies buy their software, which may not wish to make source code available to customers or to provide potential buyers with comprehensive lists of all the contractors, vendors, and dependent software packages on which their products and services were built. Much of the same also pertains to open-source software, too, where companies and developers must come up with ways to manage the risks that arise from a subset of less-well-maintained software or from otherwise well-maintained software that a threat actor compromises.57 In both cases, companies can use audits, continuous monitoring, and other measures to still mitigate risk from complex supply chains. 

Just as heavily software-dependent companies should conduct due diligence on and assess their prospective vendors’ cybersecurity (and their vendors’ cybersecurity, and so on) to avoid major security lapses and vulnerabilities, AI organizations can do some degree of risk assessment and mitigation for the data suppliers in the AI supply chain (Figure 5). 

These supply chain security measures can be applied to AI-specific data risks, too, where existing best practices fail to properly secure data components of the AI supply chain. Companies can require that new partners and vendors attest to the measures they take to mitigate risks of poisoning of training data, such as by carrying out appropriate data filtering.58 Universities could evaluate their data dependencies in their AI supply chain and, in doing so, catalogue instances where organizations do not mention poisoning in their data security plans. Government agencies could ensure that any company pitching them on a contract for an externally hosted or already trained, to-be-internally-migrated AI model spell out the steps they have taken to test for and mitigate potential threats of neural backdoors (either through training data or model architectures)—which may be exactly the kind of flaw a nation-state would want moved down the supply chain into a government computer system. Here, the concept of focusing on the supply chain itself can be effective in helping shield against both widely held and AI-unique risks to data components across the AI supply chain.

Figure 5: Approach three illustrated 

Conclusion and recommendations

The fact is governments, companies, universities, individuals, and others pursuing AI R&D will not stop collecting, labeling, disseminating, using, and producing tremendous volumes of data. Therefore, security of the data in the AI supply chain remains evermore paramount to mitigating risks of breaches and data interception, shielding data and resulting AI technologies from theft (including by competitors), enhancing protections for individuals’ privacy, bolstering public trust, limiting organizations’ liability risk, and strengthening US national security. 

Breaking down the seven data components of the AI supply chain can enable developers, users, maintainers, governors, and securers of AI technologies to understand the data components they depend upon, how they interact with and relate to one another, and the varied sources and entangled suppliers. It can then empower organizations to take one of many different existing data security and supply chain security approaches—including data at rest vs. in motion vs. in processing, threat actor risk, and supply chain due diligence and risk management—to map their concerns about data in the AI supply chain to specific, established best practices. More broadly, however, the concept of data in the AI supply chain promises something else for policymakers: the ability to see the whole data supply chain picture at once, leading to more cohesive policymaking. 

This paper makes the following three recommendations: 

  1. Developers, users, maintainers, governors, and securers of AI technologies should map the data components of the AI supply chain to existing cybersecurity best practices—and use that mapping to identify where existing best practices fall short for AI-specific risks to the data components of the AI supply chain. In the former case, they should use the framework of data at rest vs. in motion vs. in processing and the framework of analyzing threat actor capabilities to pair encryption, access controls, offline storage, and other measures (e.g., NIST SP 800-53, ISO/IEC 27001:2022) against specific data components in the AI supply chain depending on each data component’s current state, the threat actor(s) pursuing it, and the traditional IT security controls the organization already has in place. In the latter case, developers, users, maintainers, governors, and securers of AI technologies should recognize how existing best practices will inadequately prevent the poisoning of AI training data and the insertion of behavioral backdoors into neural networks by manipulating a training dataset or a model architecture. They should instead look to emerging research on how to best evaluate training data to filter out poisoned data examples and how to robustly test network behavior and architectures to mitigate the risk of a bad actor inserting a neural backdoor, which they can activate after model deployment. 
  2. Developers, users, maintainers, governors, and securers of AI technologies should “Know Your Supplier,” using the supply chain-focused approach to mitigate both AI-specific and non-AI-specific risks to the data components of the AI supply chain. Those sourcing data for AI systems—whether training data, APIs, SDKs, or any of the other data supply chain components—should implement best practices and due diligence measures to ensure they understand the entities sourcing or behind the sources of different components. For example, if a university website has a public repository of testing datasets for image recognition, language translation, or autonomous vehicle sensing, did the university internally develop those testing datasets, or is it hosting those testing datasets on behalf of third parties? Can third parties upload whatever data they want to the public university website? What are the downstream controls on which entities can add data to the university repository—data which companies and other universities then download and use as part of their AI supply chains? Much like a company should want to understand the origins of a piece of software before installing it on the network (e.g., is it open-source, provided by a company, if so which company in which country, etc.), an organization accessing testing data to measure an AI model or using any other data component of the AI supply chain should understand the underlying source within the supply chain. Best practices in know-your-customer due diligence, such as in the financial sector and export control space, and in the supply chain risk management space, such as from cybersecurity and insurance companies, can provide AI-dependent organizations with checklists and other tools to make this happen. Avoiding entities potentially subject to adversarial foreign nation-state influence, data suppliers not sufficiently vetting the data they upload, and so forth will help developers, users, maintainers, governors, and securers of AI technologies to bring established security controls to the data in the AI supply chain itself. In the case of both non-AI-specific and AI-specific risks to data, organizations can and should use this supply chain due diligence approach to ensure their vendors, customers, and other partners are implementing the right controls to protect against risks of model weight theft, training data manipulation, neural network backdooring through model architecture manipulation, and everything in between—drawing on the two categories of mitigations implemented as part of the first recommendation. 
  3. Policymakers should widen their lens on AI data to encompass all data components of the AI supply chain. This includes assessing whether sufficient attention is given to the diversity of data use cases that need protection (e.g., not just training data for chatbots but for transportation safety or drug discovery) and whether they have mapped existing security best practices to non-AI-specific and AI-specific risks. As multiple successive US administrations explore how they want to approach the R&D and governance of AI technologies, data continues to be a persistent focus of discussion. It comes up in everything from copyright litigation to national security strategy debates. The United States’ previous policy focus on training data quantity, and little else, has already prompted policymakers to avoid discussing comprehensive data privacy and security measures, which now—in light of Chinese AI advancements and concern about AI model weight dissemination—are suddenly more relevant. To avoid these cycles in the future, where policy overfocuses on one AI data element when in fact many are relevant simultaneously, policymakers should take a comprehensive view of the data components of the AI supply chain. The framework offered in this paper, spanning seven data components, is one potential guide—though again, policymakers need not stick to necessarily one framework. What is most critical to avoid is developing data security policies that protect some data components of the AI supply chain (e.g., training data) while leaving others highly exposed (e.g., APIs). An expanded view of the different data components, the components’ interaction, and the often multiple and shifting roles of suppliers should help inform better federal legislation, regulation, policy, and strategy—as well as engagements with other countries and US states. Right now, organizations such as the Congressional commerce committees, the Commerce Department (including because it implements export controls and the Information and Communications Services and Technologies supply chain program), the Defense Department (with all its current AI procurement), and the Federal Trade Commission (with responsibility for enforcing against unfair and deceptive business practices) should stress-test their assumptions about how to best protect AI data, and whether existing best practices achieve desired security outcomes, against this data component framework. This requires asking at least two questions. Do their existing security, governance, or regulatory approaches—e.g., in the security requirements used in Defense Department AI procurement, in how the Federal Trade Commission thinks about enforcing best practices for AI data security—apply well to a diversity of data use cases that need protection, such as with testing datasets for self-driving vehicle safety or training datasets for cutting-edge drug discovery? List out the use cases beyond chatbots that are not top of mind but are highly relevant from a security perspective, from defense to shipping and logistics to healthcare. And second, are they parsing out which risks they have concerns about, vis-à-vis AI-related data, that are specific to AI versus risks to data in general? For both categories, consider how the framework and some of the security mitigations cited in this report (e.g., NIST, ISO, new research on detecting neural backdoors, etc.) can serve as best practices to improve outcomes. 

The more governments, companies, civil society groups, individuals, and others move AI technologies into areas ranging from e-commerce, social media, and business administration to manufacturing, healthcare, transportation, and defense, the more important it becomes to secure all data related to AI technologies. A complex set of data components in the AI supply chain demands policy and security practices that account for the entire supply chain and all its complexity at once, or at least through piecemeal efforts that add up to the whole—making existing data security and supply chain security best practices, paired with newer responses to AI-specific data risks, an optimal place to start. 

About the author

Justin Sherman is a nonresident senior fellow at the Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Tech Programs. His work at the Atlantic Council focuses on cybersecurity policy, data security, digital public infrastructure, physical internet infrastructure such as submarine cables, and AI supply chains. His work also involves China and a range of issues related to Russia.

Sherman is the founder and CEO of Global Cyber Strategies, a Washington, DC-based research and advisory firm. He is also an adjunct professor at Georgetown University’s School of Foreign Service and a distinguished fellow at Georgetown Law’s Center on Privacy and Technology.

Acknowledgements

The author would like to thank Trey Herr, Nitansha Bansal, Kemba Walden, Devin Lynch, Harriet Farlow, Ben Goldsmith, and Kenton Thibaut for their comments on earlier drafts of this report, as well as all the individuals who participated in background and Chatham House Rule discussions about issues related to data, AI applications, and the concept of an AI supply chain. 

Explore the program

The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    This is not true in every single case. See, for example, Ashish Vaswani et al., “Attention Is All You Need,” arXiv, June 12, 2017 [last revision, August 2, 2023], https://doi.org/10.48550/arXiv.1706.03762.
2    Such uses are not inherently positive for the rigor (e.g., result accuracy, reproducibility, etc.) of academic research. See: Miryam Naddaf, “AI Linked to Explosion of Low-Quality Biomedical Research Papers,” Nature 641, no. 8065, (May 21, 2025): 1080–81, https://doi.org/10.1038/d41586-025-01592-0.
3    See: NIST’s definition of “supply chain” (source: CNSSI 4009-2015). “Glossary: Supply Chain,” US National Institute of Standards and Technology: Computer Security Resource Center, accessed August 26, 2025, https://csrc.nist.gov/glossary/term/supply_chain.
4    Thanks to Sara Ann Brackett for discussion of her forthcoming paper.
5    Aspen K. Hopkins et al., “Recourse, Repair, Reparation, and Prevention: A Stakeholder Analysis of AI Supply Chains,” arXiv, July 3, 2025 [submit date], https://doi.org/10.48550/arXiv.2507.02648.
6    “NSA’s AISC Releases Joint Guidance on the Risks and Best Practices in AI Data Security,” US National Security Agency: Central Security Service, press release, May 22, 2025, https://www.nsa.gov/Press-Room/Press-Releases-Statements/Press-Release-View/Article/4192332/nsas-aisc-releases-joint-guidance-on-the-risks-and-best-practices-in-ai-data-se/.
7    A single company, for instance, might find that maintaining better documentation for AI applications and data allows it to not just address vulnerabilities in those systems, but also the accidental failures, human errors, and bureaucratic issues (like the purchasing of new systems), too.
8    Kai-Fu Lee, an AI industry leader and author who has advanced this “might” view about data quantity, received wide acceptance in policy, industry, and media circles for his book, AI Super-Powers, first published in 2018. See: Andy Bast, Interview: “China’s Greatest Natural Resource May Be Its Data,” 60 Minutes, CBS News, July 14, 2019, https://www.cbsnews.com/news/60-minutes-ai-chinas-greatest-natural-resource-may-be-its-data-2019-07-14/; Michael Chiu, Interview: “Kai-Fu Lee’s Perspectives on Two Global Leaders in Artificial Intelligence: China and the United States,” McKinsey Global Institute, June 14, 2018, https://www.mckinsey.com/featured-insights/artificial-intelligence/kai-fu-lees-perspectives-on-two-global-leaders-in-artificial-intelligence-china-and-the-united-states
9    Nicholas Thompson, Interview (Michael Kratsios): “The Case for a Light Hand With AI and a Hard Line on China,” WIRED, January 14, 2020, https://www.wired.com/story/light-hand-ai-hard-line-china/; Justin Sherman, “Don’t be Fooled by Big Tech’s Anti-China Sideshow,” WIRED, July 30, 2020, https://www.wired.com/story/opinion-dont-be-fooled-by-big-techs-anti-china-sideshow/.
10    “NTIA Solicits Comments on Open-Weight AI Models,” US Department of Commerce, press release, February 21, 2024, https://www.commerce.gov/news/press-releases/2024/02/ntia-solicits-comments-open-weight-ai-models.
11    90 FR 4544 (2025).
12    Dan Milmo et al., “‘Sputnik Moment’: $1tn Wiped off US Stocks after Chinese Firm Unveils AI Chatbot,” The Guardian, January 27, 2025, https://www.theguardian.com/business/2025/jan/27/tech-shares-asia-europe-fall-china-ai-deepseek. Notably, the reaction described, of course, could have had more nuance in articulating the ways that a US company’s research or advancement might benefit all kinds of companies, including other ones in the United States.
13    See: Milad Alucozai, Will Fondrie, and Megan Sperry, “From Data to Drugs: The Role of Artificial Intelligence in Drug Discovery,” Wyss Institute, January 9, 2025, https://wyss.harvard.edu/news/from-data-to-drugs-the-role-of-artificial-intelligence-in-drug-discovery/; Chen Fu and Qiuchen Chen, “The Future of Pharmaceuticals: Artificial Intelligence in Drug Discovery and Development,” Journal of Pharmaceutical Analysis 15, no. 8 (August 2025), https://doi.org/10.1016/j.jpha.2025.101248.
14    See: Sella Nevo et al., Securing AI Model Weights: Preventing Theft and Misuse of Frontier ModelsRAND, May 2024, https://www.rand.org/pubs/research_reports/RRA2849-1.html; Janet Egan, Paul Scharre, and Vivek Chilukuri, Promote and Protect America’s AI AdvantageCenter for a New American Security, January 20, 2025, https://www.cnas.org/publications/commentary/promote-and-protect-americas-ai-advantage; Alan Z. Rozenshtein, “There Is No General First Amendment Right to Distribute Machine-Learning Model Weights,” Lawfare, April 4, 2024, https://www.lawfaremedia.org/article/there-is-no-general-first-amendment-right-to-distribute-machine-learning-model-weights; Raffaele Huang, Stu Woo, and Asa Fitch, “Everyone’s Rattled by the Rise of DeepSeek—Except Nvidia, Which Enabled It,” Wall Street Journal, February 2, 2025, https://www.wsj.com/tech/ai/nvidia-jensen-huang-ai-china-deepseek-51217c40
15    Matt Sheehan, “Much Ado About Data: How America and China Stack Up,” Paulson Institute: MacroPolo, July 16, 2019, https://archivemacropolo.org/ai-data-us-china/?rp=e.
16    Joshua New, “Why Do People Still Think Data Is the New Oil?”, Center for Data Innovation, January 16, 2018, https://datainnovation.org/2018/01/why-do-people-still-think-data-is-the-new-oil/.
17    Claudia Wilson and Emmie Hine, “Export Controls on Open-Source Models Will Not Win the AI Race,” Just Security, February 25, 2025, https://www.justsecurity.org/108144/blanket-bans-software-exports-not-solution-ai-arms-race/.
18    Kenton Thibaut, “What DeepSeek’s Breakthrough Says (and Doesn’t Say) About the ‘AI race’ with China,” New Atlanticist (blog), January 28, 2025, https://www.atlanticcouncil.org/blogs/new-atlanticist/what-deepseeks-breakthrough-says-and-doesnt-say-about-the-ai-race-with-china/
19    See, for instance, among the many recent articles and headlines: Jason Ross Arnold, “High-Risk AI Models Need Military-Grade Security,” War on the Rocks, August 6, 2025, https://warontherocks.com/2025/08/high-risk-ai-models-need-military-grade-security/; Ryan Lovelace, “Congress Digs into China’s Alleged Theft of America’s AI Secrets,” Washington Times, May 7, 2025, https://www.washingtontimes.com/news/2025/may/7/congress-digs-chinas-alleged-theft-americas-ai-secrets/
20    The frantic coverage of every new AI development by many media outlets does little to help resolve the data challenges surrounding AI R&D.
21    See the discussion by authors at the Belfer Center about whether “the United States has essentially conceded the [AI] race [with China] because of concerns over the average individual’s privacy”: Graham Allison and Eric Schmidt, Is China Beating the U.S. to AI Supremacy? (Cambridge: Harvard Kennedy School Belfer Center, August 2020), https://www.belfercenter.org/publication/china-beating-us-ai-supremacy
22    Nitasha Tiku, “Big Tech: Breaking Us Up Will Only Help China,” WIRED, May 23, 2019, https://www.wired.com/story/big-tech-breaking-will-only-help-china/; Josh Constine, “Facebook’s Regulation Dodge: Let Us, or China Will,” TechCrunch, July 17, 2019, https://techcrunch.com/2019/07/17/facebook-or-china/.
23    While there are many differences between the US and Chinese environments vis-à-vis data, these notions are not entirely true. See: Samm Sacks and Lorand Laskai, “China’s Privacy Conundrum,” Slate, February 7, 2019, https://slate.com/technology/2019/02/china-consumer-data-protection-privacy-surveillance.html; Sam Bresnick, “The Obstacles to China’s AI Power,” Foreign Affairs, December 31, 2024, https://www.foreignaffairs.com/china/obstacles-china-ai-military-power.
24    Jessie Yeung, “China’s Sitting on a Goldmine of Genetic Data – and It Doesn’t Want to Share,” CNN, August 12, 2023, https://www.cnn.com/2023/08/11/china/china-human-genetic-resources-regulations-intl-hnk-dst.
25    See: Beatriz Botero Arcila, AI Liability Along the Value Chain (San Francisco: Mozilla Foundation, April 2025), https://blog.mozilla.org/netpolicy/files/2025/03/AI-Liability-Along-the-Value-Chain_Beatriz-Arcila.pdf; Max von Thun and Daniel A. Hanley, Stopping Big Tech from Becoming Big AI (San Francisco: Mozilla Foundation, October 2024), https://blog.mozilla.org/wp-content/blogs.dir/278/files/2024/10/Stopping-Big-Tech-from-Becoming-Big-AI.pdf; SPEAR Invest, “Diving Deep into the AI Value Chain,” NASDAQ, December 18, 2023, https://www.nasdaq.com/articles/diving-deep-into-the-ai-value-chain. See also: “The Value Chain,” Harvard Business School: Institute for Strategy and Competitiveness, accessed August 26, 2025, https://www.isc.hbs.edu/strategy/business-strategy/Pages/the-value-chain.aspx
26    While recognizing the necessity of evaluating AI in relation to the social, political, and economic systems that researchers, companies, and others operate within and use to build AI technologies—such as exploitative labor systems and the environmental system—this report focuses, for scope- and length-limitation purposes, on a typology of the digital and data elements themselves of relevance for AI R&D. For essential reading on other systems that generate data, move data into AI systems, and much more, see: Tamara Kneese, Climate Justice and Labor Rights: Part I: AI Supply Chains and Workflows (New York: AI Now Institute, August 2023), https://ainowinstitute.org/general/climate-justice-and-labor-rights-part-i-ai-supply-chains-and-workflows; Kashmir Hill, Your Face Belongs to Us: A Tale of AI, a Secretive Startup, and the End of Privacy (New York: Penguin Random House, 2023); Billy Perrigo, “Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic,” TIME, January 18, 2023, https://time.com/6247678/openai-chatgpt-kenya-workers/; Adrienne Williams, Milagros Miceli, and Timnit Gebru, “The Exploited Labor Behind Artificial Intelligence,” Noema Magazine, Berggruen Institute, October 13, 2022, https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/.
27    Qiang Hu et al., “Large Language Model Supply Chain: Open Problems from the Security Perspective,” arXiv, November 3, 2024, https://arxiv.org/abs/2411.01604 (see, in particular, page 2’s LLM supply chain map). 
28     “Registry of Open Data on AWS,” Amazon Web Services (AWS), accessed June 16, 2025, https://registry.opendata.aws
29    See: Building AI for India! (website), accessed June 16, 2025, https://ai4bharat.iitm.ac.in; Tsinghua University: Institute for Artificial Intelligence Foundation Model Research Center (website), accessed June 16, 2025, https://fm.ai.tsinghua.edu.cn.
30    Shayne Longpre et al., “In-House Evaluation Is Not Enough: Towards Robust Third-Party Disclosure for General-Purpose AI,” arxiv.org, March 25, 2025, https://arxiv.org/abs/2503.16861 (an important point the authors make is to call the idea that general-purpose AI systems “are unique from existing software and require special disclosure rules” a “misconception”). 
31    For example, see more on how homomorphic encryption can be used to encrypt data, including AI training data, while still enabling computation on it: “Combining Machine Learning and Homomorphic Encryption in the Apple Ecosystem,” Machine Learning Research, Apple, October 24, 2024, https://machinelearning.apple.com/research/homomorphic-encryption.
32    SP 800-53 Rev. 5.1.1, SC-28: “Protection of Information at Rest,” US National Institute of Standards and Technology, accessed June 27, 2025, https://csrc.nist.gov/projects/cprt/catalog#/cprt/framework/version/SP_800_53_5_1_1/home?element=SC-28.
33    SP 800-53 Rev. 5.1.1, SC-08: “Transmission Confidentiality and Integrity,” US National Institute of Standards and Technology, accessed June 27, 2025, https://csrc.nist.gov/projects/cprt/catalog#/cprt/framework/version/SP_800_53_5_1_1/home?element=SC-08
34    “The NIST Cybersecurity Framework (CSF) 2.0,” US National Institute of Standards and Technology, February 26, 2024, https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.29.pdf
35    “PR.DS-10: The Confidentiality, Integrity, and Availability of data-in-Use Are Protected,” CSF Tools, accessed June 27, 2025, https://csf.tools/reference/nist-cybersecurity-framework/v2-0/pr/pr-ds/pr-ds-10/.
36    See: MJ Raber, “SOC 2 Controls: Encryption of Data at Rest – An Updated Guide,” Security Boulevard, Techstrong Group, December 6, 2022, https://securityboulevard.com/2022/12/soc-2-controls-encryption-of-data-at-rest-an-updated-guide/.
37    Anthropic, for example, talks about using more than 100 different security controls to protect model weights. See: “Activating AI Safety Level 3 Protections,” Anthropic, May 22, 2025, https://www.anthropic.com/news/activating-asl3-protections
38    For example, a bad actor could generate training examples with incorrect or altered labels with the express purpose of causing someone to unintentionally train a harmful or erroneous model. Apostol Vassilev et al., NIST Trustworthy and Responsible AI – NIST AI 100-2e2025Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, US National Institute of Standards and Technology (March 2025): 20, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-2e2025.pdf.
39    Vassilev et al., NIST Trustworthy and Responsible AI , 20–27.
40    “Framework for Assessing Risks,” US Office of the Director of National Intelligence (ODNI), April 2021, https://www.dni.gov/files/NCSC/documents/supplychain/Framework_for_Assessing_Risks_-_FINAL_Doc.pdf
41    As the noted in the cited ODNI publication, “Key to this is using the latest threat information to determine if specific and credible evidence suggests an item or service might be targeted by adversaries. But, it must be noted, that while adversaries wish to do harm, they can only be successful if systems, processes, services, etc. are vulnerable to attacks.”
42    These are examples provided by the author of how weaknesses can be “inherent” to a system in this context; they are not examples listed in the cited ODNI publication.
43    US Office of the Director of National Intelligence, “Framework for Assessing Risks.”
44    See: ISO/IEC 27001:2022, “Information Security, Cybersecurity and Privacy Protection — Information Security Management Systems — Requirements,” International Organization for Standardization, 2022, https://www.iso.org/standard/27001.
45    See also some of the security controls and risk mitigations laid out in: Sella Nevo et al., Securing AI Model Weights.
46    See: Hossein Souri et al., “Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch,” arXiv, June 16, 2021, https://arxiv.org/abs/2106.08970.
47    See: Wei Guo, Benedetta Tondi, and Mauro Barni, “An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences,” arXiv, November 16, 2021, https://arxiv.org/abs/2111.08429.
48    
49    
50    Anqing Zhang et al., “Defending Against Backdoor Attack on Deep Neural Networks Based on Multi-Scale Inactivation,” Information Sciences 690, no. 121562 (February 2025), https://www.sciencedirect.com/science/article/abs/pii/S0020025524014762
51    Harry Langford et al., “Architectural Neural Backdoors from First Principles,” arXiv, February 10, 2024, https://arxiv.org/abs/2402.06957.
52    See: Article 32, GDPR (General Data Protection Regulation): Security of Processing, Intersoft Consulting, accessed September 4, 2025, https://gdpr-info.eu/art-32-gdpr/.
53    See: Gus Tomlinson, “KYC Process: Ask the Right Questions,” GBG, accessed June 27, 2025, https://www.gbg.com/en/blog/kyc-process-ask-the-right-questions/.
54    See: “Guidance on End-User and End-Use Controls and U.S. Person Controls,” US Department of Commerce, Bureau of Industry and Security, accessed June 27, 2025, https://www.bis.gov/licensing/guidance-on-end-user-and-end-use-controls-and-us-person-controls#“KnowYourCustomerGuidance”andRedFlags.
55    See: “Best Practices in Cyber Supply Chain Risk Management,” US National Institute of Standards and Technology, accessed June 27, 2025, https://csrc.nist.gov/CSRC/media/Projects/Supply-Chain-Risk-Management/documents/briefings/Workshop-Brief-on-Cyber-SCRM-Vendor-Selection-and-Management.pdf; “Third-Party Cyber Risks Impact All Organizations,” Marsh, April 22, 2025, https://www.marsh.com/en/services/cyber-risk/insights/defining-uncovering-cyber-risks-digital-supply-chain.html.
56    Shaked Rotlevi, “AI-BOM: Building an AI Bill of Materials,” WIZ, July 20, 2025, https://www.wiz.io/academy/ai-bom-ai-bill-of-materials.
57    See: Andy Greenberg and Matt Burgess, “The Mystery of ‘Jia Tan,’ the XZ Backdoor Mastermind,” WIRED, April 3, 2024, https://www.wired.com/story/jia-tan-xz-backdoor/.
58    In addition to the above discussion of poisoning, see: Nicholas Carlini et al., “Poisoning Web-Scale Training Datasets is Practical,” arXiv, February 20, 2023, https://arxiv.org/abs/2302.10149.

The post Securing data in the AI supply chain   appeared first on Atlantic Council.

]]>
Daniels discusses China’s AI strategy on the China Power Podcast https://www.atlanticcouncil.org/insight-impact/in-the-news/daniels-discusses-chinas-ai-strategy-on-the-china-power-podcast/ Fri, 22 Aug 2025 16:15:23 +0000 https://www.atlanticcouncil.org/?p=869268 On August 19, Forward Defense nonresident senior fellow Owen Daniels was featured on Episode 108 of the German Marshal Fund's China Power Podcast.

The post Daniels discusses China’s AI strategy on the China Power Podcast appeared first on Atlantic Council.

]]>

On August 19, Forward Defense nonresident senior fellow Owen Daniels was featured on Episode 108 of the German Marshal Fund’s China Power Podcast. In the episode, Owens discusses US-China competition in artificial intelligence, China’s AI strategy and ambitions, and how Beijing is leveraging AI to expand its global influence. Daniels also explores what the United States can do to maintain and bolster its technological leadership.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

The post Daniels discusses China’s AI strategy on the China Power Podcast appeared first on Atlantic Council.

]]>
Daniels examines China’s AI soft power strategy in War on the Rocks https://www.atlanticcouncil.org/insight-impact/in-the-news/daniels-examines-chinas-ai-strategy-in-war-on-the-rocks/ Fri, 22 Aug 2025 16:09:33 +0000 https://www.atlanticcouncil.org/?p=868944 On August 14, Forward Defense nonresident senior fellow Owen Daniels published an article, "China’s Soft Power Tools and Whether They Work," in War on the Rocks.

The post Daniels examines China’s AI soft power strategy in War on the Rocks appeared first on Atlantic Council.

]]>

On August 14, Forward Defense nonresident senior fellow Owen Daniels published an article, “China’s Soft Power Tools and Whether They Work,” in War on the Rocks. In the article, Daniels examines China’s strategic use of open AI platforms as a tool of soft power, highlighting how models like DeepSeek’s R1 and Moonshot AI’s Kimi K2 pose both technological and diplomatic challenges for the United States.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

The post Daniels examines China’s AI soft power strategy in War on the Rocks appeared first on Atlantic Council.

]]>
How the Chip Security Act could usher in an era of ‘trusted trade’ with US partners https://www.atlanticcouncil.org/blogs/geotech-cues/how-the-chip-security-act-could-usher-in-an-era-of-trusted-trade-with-us-partners/ Mon, 18 Aug 2025 17:06:07 +0000 https://www.atlanticcouncil.org/?p=868035 If implemented effectively, a bipartisan bill could help counter the illicit transshipment and diversion of artificial intelligence chips to US adversaries.

The post How the Chip Security Act could usher in an era of ‘trusted trade’ with US partners appeared first on Atlantic Council.

]]>
As the global race for artificial intelligence (AI) supremacy accelerates, the world’s reliance on one foundational technology—advanced semiconductors—has exposed an increasingly dangerous vulnerability. AI chips, particularly high-performance graphics processing units (GPUs) designed in the United States, are powering breakthroughs in national security, scientific research, and economic growth. These chips are inherently dual-use, meaning that governments and companies can use them to support both civilian and military applications. But these chips are also being diverted at scale into the hands of US adversaries, including China, despite export controls designed to prevent precisely that outcome. It is time to implement a “trusted trade” program to address this growing threat.

The Chip Security Act (CSA), a bipartisan bill introduced in May as H.R.3447 in the House of Representatives and S.1705 in the Senate, seeks to do just that. The CSA would provide a low-burden, high-impact policy solution to a problem that current enforcement tools have failed to solve: the illicit transshipment and diversion of AI chips to adversarial nations. In doing so, the CSA would also support the objectives of the White House’s AI Action Plan and the broader commercial goals of growing US full-stack AI market share overseas while simultaneously securing the semiconductor supply chain.

A growing and persistent threat

AI chips are the cornerstone of economic and military power in the twenty-first century. From autonomous weapons and cyber defense platforms to next-generation surveillance systems, these chips power the technological edge of modern warfare. China is actively working to erode this advantage by circumventing US export controls through elaborate smuggling networks. In 2024 alone, an estimated 140,000 high-performance GPUs—billions of dollars’ worth—were smuggled into China. And the problem is only getting worse—in May, 50 percent of the GPUs shipped to Malaysia were ultimately transshipped to China.

Current enforcement tools have clearly proven inadequate, largely due to the antiquated nature of the export control enforcement regime. The US Department of Commerce’s Bureau of Industry and Security (BIS), the federal agency responsible for enforcing chip export controls, remains under-resourced and lacks the cutting-edge tools required to counter China’s transshipment and illicit evasion networks. The BIS’s limited in-region capacity—there is only a single export-control officer assigned to cover all of Southeast Asia—stands in stark contrast to the billion-dollar smuggling operations it seeks to monitor and disrupt.

Why the Chip Security Act is different

The CSA doesn’t attempt to surge BIS personnel to Southeast Asia or rebuild the export control system from the ground up. Instead, it would introduce a surgical fix: location verification for chips exported abroad. If a chip shows up outside its authorized destination, the exporter would be required to notify the BIS. This automation would enable global, scalable export control enforcement—helping to augment and partially replace archaic Cold War-era end-use inspection systems that have historically inspected less than 1 percent of goods for end-use violations.

It is important to recognize that location data alone cannot fully determine end use. This is especially the case for dual-use goods such as advanced chips, where the true strategic risk lies in the computational and analytical capabilities they enable. Therefore, location information must also be paired with complementary intelligence and continuous monitoring. Other critical factors to determine the potential for harmful end use include corporate ownership, the composition of senior leadership, operational behavior, known diversion patterns, and the strength of cloud access controls at the data centers in question. These additional factors are especially vital to ensure that chips approved for export to third countries such as Malaysia do not ultimately end up in data centers owned by adversarial entities operating within those jurisdictions.

Crucially, though, the CSA is not overly prescriptive on how to achieve these objectives. For example, the CSA avoids mandating a specific location verification technology. Instead, it allows for industry-led implementation using already available methods, such as firmware-based geolocation checks or Delay-Based Location Verification systems. These tools offer privacy-preserving and nonintrusive methods to confirm a chip’s location without monitoring user activity or enabling surveillance. It’s an approach that balances security with commercial viability—and it’s deployable today.

Learning from past failures

The CSA builds on the hard lessons of previous legislative and regulatory efforts. Export controls under the Export Control Reform Act and the CHIPS and Science Act have failed to address transshipment and empower the BIS with the resources and technology needed to enforce its global mission. Moreover, past proposals to hardwire chips with geofencing capabilities or mandate “kill switches” faced strong industry resistance due to concerns over cost, reliability, and the potential effect on international sales. These ideas either stalled or were watered down beyond usefulness. In contrast, the CSA reflects feedback from both national security experts and chipmakers, emphasizing modularity, cost-effectiveness, and scalability.

Expanding US and Western AI dominance through trusted trade

The misuse of advanced chips is not just a US concern—it poses a global threat. When semiconductors are diverted to unauthorized end users, they can fuel authoritarian control, destabilize markets, and erode allied innovation ecosystems. Recognizing this, key US allies such as Japan and the Netherlands have taken steps to restrict high-end semiconductor and equipment exports to China. Yet enforcement across jurisdictions remains porous due to a lack of verifiable, end-to-end visibility. The CSA methodology can address this gap in the allied export control system by enabling continuous, software-based location verification and lifecycle tracking—empowering companies to demonstrate responsible export behavior without sacrificing speed, scale, or profitability.

The CSA provides an alternative to the false binary often facing policymakers: either impose broad restrictions that hinder legitimate commerce or tolerate unchecked smuggling that threatens national security. Instead, the CSA would deliver precision enforcement, targeting violators without penalizing compliant firms. By embedding continuous supply chain visibility into the post-export phase, it would equip both industry and regulators to detect and address illicit diversions before they escalate. This would strengthen US leadership in AI and semiconductor innovation while giving allies a replicable model.

The CSA’s technology-neutral compliance framework opens the door to future coordination under multilateral tech alliances—ensuring chips produced across democratic nations are not funneled into hostile hands. The United States and its allies could help realize this vision by offering technical assistance to help partners and transshipment-prone countries establish robust export control and supply chain surveillance systems. Such capacity-building would close enforcement gaps and ensure harmonization across allied secure trade frameworks. These measures would help lay the groundwork for a coalition of democracies to secure critical technologies and expand Western AI dominance without stifling innovation.

A better path forward

The CSA has steadily gained bipartisan traction, signaling rare alignment in Congress on a forward-looking export control strategy. But like most major policy shifts, it still faces predictable obstacles on the path to implementation. For the CSA to be effectively implemented, Congress and the executive branch should ensure that the BIS has the additional resources, staff, and technologies needed to monitor and implement the trusted trade program. Once location verification systems are deployed, for example, the BIS will be required to continuously monitor the millions of data points the system collects from chips around the world each day. Dedicated staff will be needed to respond to suspicious activities and engage with industry and foreign governments when questions arise.

In addition to the CSA, US supply chain security could be further strengthened if the executive branch requires the BIS to monitor which foreign companies access the cloud and compute capabilities associated with geotagged chips through screening and end-use checks. This policy would buttress the CSA and enable the BIS to finally field a cost-effective, scalable export control enforcement regime for AI that is not cumbersome for industry.

On the business side, industry groups remain wary of potential regulatory overreach in the law’s implementation phase. A public-private partnership with a trusted third party (meaning BIS, a semiconductor company, and a third party that would serve as a “monitor”) could help resolve conflicts and build mutual trust between industry and the government. Public pressure is growing on US companies to cut ties with China, and this type of trusted trade monitorship under the auspices of a public-private program would be a welcome step toward an economic policy and national security consensus.

A call to action

The global contest for AI leadership is not just a race for innovation—it’s a race for control over the infrastructure that will shape economies, militaries, and governance systems worldwide. If passed and implemented effectively, the CSA would strengthen the United States’ position in that contest by enabling US companies to scale exports of advanced chips without losing visibility or control over where those chips end up. Rather than ceding ground to Chinese firms through uncontrolled diversion and black-market transshipment, the CSA would equip US industry to lead. Passing this bill would send a clear signal that the United States is committed to winning the global AI competition not just by building the most advanced technology—but also by ensuring that technology is deployed on terms that reflect US interests and values.


Kit Conklin is a nonresident senior fellow at the Atlantic Council’s GeoTech Center and the senior vice president for risk and compliance at Exiger.

The views expressed are solely those of the author and do not necessarily reflect the views of the Atlantic Council or Exiger.

Further reading

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post How the Chip Security Act could usher in an era of ‘trusted trade’ with US partners appeared first on Atlantic Council.

]]>
A marketplace for mission-ready AI: Accelerating capability delivery to the Pentagon https://www.atlanticcouncil.org/content-series/strategic-insights-memos/a-marketplace-for-mission-ready-ai-accelerating-capability-delivery-to-the-pentagon/ Thu, 14 Aug 2025 14:30:56 +0000 https://www.atlanticcouncil.org/?p=867366 The Department of Defense’s traditional AI procurement often delivers models that quickly become outdated. This memo proposes creating a performance-driven AI model marketplace—where vendors train models on a shared “data lake” and are paid only for real-world usage—ensuring faster delivery, continuous innovation, and mission-ready capabilities at scale.

The post A marketplace for mission-ready AI: Accelerating capability delivery to the Pentagon appeared first on Atlantic Council.

]]>
TO: The secretary of defense

FROM: Jack Long, Bharat C. Patel, and Jags Kandasamy

DATE: August 14, 2025

SUBJECT: Proposing a performance-based AI model marketplace for the Department of Defense

  • Jack Long, PhD, is a lieutenant colonel in the US Marine Corps Reserve and the Naval AI lead at the Office of Naval Research.
  • Bharat C. Patel is product lead, Project Linchpin, at the US Army Program Executive Office–Intelligence, Electronic Warfare, and Sensors.
  • Jags Kandasamy is co-founder and chief executive officer of Latent AI, a start-up offering scalable, secure edge AI solutions for battlefield and industrial environments. He is also a distinguished fellow at the University of South Florida’s Global and National Security Institute.

The Department of Defense (DOD) should accelerate the deployment of operational artificial intelligence (AI) by establishing a performance-driven AI model marketplace. This strategic insights memo outlines a framework for one such marketplace. The approach incentivizes innovation through open competition, rapid iteration, and real-world performance validation—delivering mission-ready AI solutions at speed and scale. This proposal follows the principles outlined by the Atlantic Council’s Software-Defined Warfare Commission.

Currently, the government purchases AI models from a variety of industry partners. These models need regular retraining and optimization to be relevant in the deployed scenarios. There is a better way to meet the Defense Department’s need for AI.

In this proposal, the government would make a “data lake” available for industry partners to use to train models. This data lake could consist of imagery, radio frequency, sonar, and other mission-relevant datasets. Vendors could independently train models on this data lake and submit them to a centralized government model catalog—the Open Model Marketplace—where they would be made available for discovery and deployment by DOD components across services and commands.

Unlike the traditional procurement of AI through upfront contracts, this approach would compensate vendors based solely on model usage. The government would pay only for performance and the vendors would make money each time their models are deployed in operations. Models that demonstrate real-world utility, responsiveness to mission context, efficient compute utilization, and value to the user and value to the user would naturally rise to the top.

Step one: The Pentagon sets up a government-furnished data playground.

  • DOD will establish and operate a secure data playground in which industry partners can work with US government data at various classification levels.
  • DOD will provide vetted industry partners access to datasets and a data catalog.
  • The playground will provide secure infrastructure; vendors are expected to support the infrastructure by paying to use it.

Step two: Vendors develop models and DOD vets them.

  • DOD will provide known requirements, but vendors would be free to develop models for any use case they consider relevant.
  • Models will be assessed via common and standardized metrics.
  • Models will be vetted for relevance, performance, security, interoperability, and ethical considerations.
  • Models will undergo basic validation and be scored before gaining approval for inclusion in the catalog of the Open Model Marketplace.

Step three: DOD units use the Open Model Marketplace.

  • The Open Model Marketplace will be a centralized catalog containing all approved models categorized by mission, type, accuracy, and resource footprint.
  • Government customers could perform additional testing on models to assess relevance.
  • Any DOD unit could select and deploy models that meet its mission needs.
  • Users could run models on the compute infrastructure of their choice.
  • Models could be selected individually or as part of a “model pack” based on pricing offered by vendors.

Step four: Vendors are paid using a pay-for-performance model.

  • Vendors will be compensated based on model consumption—with no upfront funding or long-term exclusivity.
  • Model pricing will be independent of compute costs; models will run on government hardware.
  • The model can be used on a monthly basis, with the option to terminate at end of each month.
  • Vendors willing to assume risk could move quickly to build solutions.
  • By avoiding long-term contracts or vendor lock-in, DOD could maintain flexibility.
  • Innovation cycles would be shortened as vendors continuously iterate to remain competitive.

Step five: Users and customers score and give feedback.

  • DOD can concurrently run multiple models that address the same problem.
  • Government units will provide structured feedback and scoring on model performance.
  • Users will send feedback to both the vendor and a DOD Test, Evaluation, Validation and Verification oversight team.
  • Model performance statistics will be included in a model card and visible in a model catalog.
  • Poorly performing models will be flagged, while high performers will be rewarded with increased usage and visibility.

Step six: Contracting pathways for acquisition.

  • The DOD can leverage different contracting mechanisms to enable both rapid onboarding of new models and scalable deployment of proven ones.
  • For newer models, the Commercial Solutions Opening authority is the best option to quickly prototype and validate capabilities that are tied to specific operational needs.
  • For proven models, DOD should establish a multiple-award Blanket Purchase Agreement under Federal Acquisition Regulation 13.5 or 16.703 to pre-qualify and establish standardized terms (security, intellectual property, telemetry, runtime), enabling rapid call orders for repeat or scaled use.
  • This approach ensures the marketplace serves as both an on-ramp for emerging capabilities and a fast lane for repeat procurement.

An open-model marketplace would offer several benefits to servicemembers.

  • Innovation: Access to data would make it possible for the vendor base to iterate and develop faster.
  • Speed: DOD would have immediate access to cutting-edge models without procurement delays.
  • Performance: Only the most effective models would be likely to survive based on real-world success.
  • Flexibility: DOD operators could tailor model selection to their unique operating environments.
  • Cost-efficiency: DOD would only spend taxpayer dollars on solutions that deliver value, avoiding sunk costs.

To handle adoption and implementation, the Pentagon’s Chief Digital and AI Office (CDAO) should develop a franchise strategy in which it sets the standards but others (e.g., services, combatant commands, and others) can set up their own operations. CDAO should define onboarding policies, model-intake standards, test and evaluation criteria, and other high-level rules, but should let the franchisees execute. This approach would ensure department-wide interoperability while allowing fast movers to drive ahead.

Acknowledgments

Latent AI is a financial supporter of the Atlantic Council’s Scowcroft Center for Strategy and Security’s Software-Defined Warfare Commission. Kandasamy is an industry member of the commission.

The views expressed are the authors’ own and do not necessarily reflect those of their employer, the US government, or any affiliated organization.

Explore the program

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

The post A marketplace for mission-ready AI: Accelerating capability delivery to the Pentagon appeared first on Atlantic Council.

]]>
Reading between the lines of the dueling US and Chinese AI action plans https://www.atlanticcouncil.org/blogs/new-atlanticist/reading-between-the-lines-of-the-dueling-us-and-chinese-ai-action-plans/ Thu, 07 Aug 2025 17:40:49 +0000 https://www.atlanticcouncil.org/?p=865892 Washington and Beijing recently released plans for advancing artificial intelligence. Atlantic Council experts answer six big questions about the two publications.

The post Reading between the lines of the dueling US and Chinese AI action plans appeared first on Atlantic Council.

]]>
Action speaks louder than words—but words are a good place to start. On July 23, the Trump administration released ”Winning the AI Race: America’s AI Action Plan.” Three days later, China unveiled its “Global AI Governance Action Plan.” Both superpowers are in a contest to acquire the best technology and establish the rules of the road for artificial intelligence (AI), and their decisions will have a major impact on the global AI ecosystem. To figure out what these dueling plans mean for both countries and the wider world, we reached out to our top tech minds for their take on six burning questions.

Though differing in scope and intent, the AI action plans released by the United States and China targeted the same global audience and provided revealing indicators of how each country aims to define global leadership amid rapid technological change. 

The US AI Action Plan is broad in scope, ranging from domestic industrial capacity to promoting US technology abroad. Preceded by President Donald Trump’s visit to Pittsburgh where he touted investments in AI infrastructure, the rollout of the action plan included primarily US industry leaders and policymakers. The plan has three pillars, including accelerating AI innovation, building AI infrastructure in the United States, and international diplomacy focused on US exports, standards, and security. 

China’s plan, announced at the World AI Conference in Shanghai, is more narrowly focused on international governance, standards, and norms. Speaking to an international conference on July 26 in Shanghai, Chinese Premier Li Qiang announced thirteen elements of China’s approach to “multilateral and bilateral cooperation.” While the language espouses collaboration, China’s global approach is to ultimately replace the current rules-based, multistakeholder international order with an alternative centered on state control, increasingly through technology.

The proximity of the announcements is no coincidence. But the United States and China are far from the only countries shaping the age of AI.

Graham Brookie is the Atlantic Council’s vice president for technology programs and strategy.

China and the United States are advancing fundamentally different visions of AI’s role in the world. For China, AI is geopolitical infrastructure—centralized, sovereign, and aligned with its Belt and Road–style diplomacy. It emphasizes sovereign compute power, data control, and state-led development. The United States, by contrast, sees AI as an economic engine and a pillar of national security, anchored in open innovation, private enterprise, and alliances among democracies. 

This divide cuts deeper than policy. China champions state control using the term “sovereignty,” while engaging in multilateral fora and United Nations (UN)–led mechanisms to bend the rules-based international order toward its state control model and steer global AI governance, positioning itself as a voice for the Global South. The United States clings to industry and a market-driven agenda, promoting “trusted” governance through broad Organisation for Economic Co-operation and Development (OECD) principles and the narrower Group of Seven (G7) partnership—while tightening export controls and tech restrictions. While China leverages AI diplomacy through infrastructure, training, and open-source tools, the United States doubles down on norms and safeguards. 

This is about more than just technology—it’s a battle between rival digital worldviews. One champions state control and sovereign authority; the other bets on openness, markets, and liberal norms. As the Global South navigates between these competing visions, the outcome may not be a clean split—but a fractured landscape is inevitable. The coming decade will hinge on which model proves more persuasive, more scalable, and more aligned with global aspirations.

Konstantinos Komaitis is a resident senior fellow and global governance and technology lead at the Atlantic Council’s Democracy and Tech Initiative.

The United States and China are offering starkly contrasting visions for global AI governance. The US plan emphasizes strategic competition with China and prioritizes maintaining US technological primacy. It advances an ambitious—and, some would say, aggressive—framework: bolstering enforcement of existing chip export controls, pressuring allies to align with US restrictions under threat of penalty, and generally seeking to create dependencies on US technology products, including by providing an all-in-one AI tech stack as part of its foreign policy strategy. When partners are mentioned, it is with a somewhat antagonistic tone—they are described largely as either potential customers or as obstacles to be brought into compliance. The plan outlines the need to counter China in international governance bodies—which is a good and welcome inclusion—but with multiple State Department agencies facing deep staffing cuts, including many with deep knowledge of and relationships in these fora, it is difficult to see how the United States will effectively navigate these diplomatically nuanced and complex processes in a way that furthers US interests. 

This haphazard approach stands in stark contrast to the People’s Republic of China’s (PRC’s) engagement on AI governance. The PRC’s announcement builds on a years-long, multi-pronged, interlocking, and self-referential strategy designed to ensure China’s dominance in AI governance. The thirteen-point plan—which emphasizes themes of multilateralism, inclusivity, and deep engagement with Global Majority countries on their terms—builds on the PRC’s quiet and consistent work to build coalitions of Global Majority countries to vote in its favor in the very UN bodies mentioned in the US AI plan, including the Global Digital Compact. Crucially, promotion of China’s tech stack is part of this broader diplomatic engagement strategy. For example, Huawei offers long-standing “AI in a Box” solutions for governments seeking such technologies. These kinds of all-in-one offerings are often paired with financial incentives (for both the Chinese company and the recipient government), as well as follow-up training modules that promote Chinese initiatives and approaches to technology governance. 

The US AI Action Plan appears to be competing with China on the basis of offering a superior all-in-one AI tech stack. It is encouraging that the United States is finally recognizing this cornerstone of China’s strategy. However, Washington is failing to recognize a key source of China’s success—and one that may ultimately determine its own failure: the mutually reinforcing relationship between technology and diplomacy, executed in a systematic, nuanced, and consistent manner. 

Kenton Thibaut is a senior resident China fellow at the Atlantic Council’s Digital Forensic Research Lab (DFRLab).

There’s no doubt that Washington and Beijing view each other as rivals, including in the artificial intelligence race. But both sides also share overlapping interests that could—and, at this moment of heightened geopolitical tensions, it remains a “could”—offer a path forward toward cooperation on the inner-workings of this emerging technology where both sides could benefit from some form of détente. 

The United States and China both highlighted the need for all parts of society to benefit from a technology that will likely seep into every part of individuals’ daily lives in the years to come. That may be a small point. But Washington and Beijing both committed to making AI as accessible as possible, including across several non-tech-related industries. That’s a commitment that may open the path for the geopolitical rivals to work on common apolitical standards, as they did in 2023 when signing the voluntary Bletchley Park Declaration around AI safety.

Both sides also made pledges to improve the wholesale datasets that underpin the latest large language models. That won’t involve the United States and China sharing sensitive information between each other. But it could potentially offer ways to build mutually accessible datasets in topics like public health and biotechnology in ways that already exist between the two countries.

Washington and China made separate commitments around energy infrastructure to fast-track AI development. Again, these efforts will be done in silos. But some of this infrastructure, including the upgrading of national energy grids to meet the demands of reams of new data centers, could also benefit from lessons learned—and experiences shared—between the two geopolitical rivals. 

Mark Scott is a senior resident fellow at the Democracy and Tech Initiative.

China’s plan portrays AI as a driving force for economic and social development. Its statement at the World AI Conference (WAIC) emphasized how AI could help to achieve the UN 2030 Agenda for Sustainable Development. It is a strategic play, as shared benefits and sustainable development are particularly crucial on China’s engagement with African countries and the broader Global Majority. 

In contrast, the US plan takes a competitive approach with the goal of market dominance. While this aim of maintaining technological leadership may be legitimate, Global Majority countries could view it as confrontational. These nations may hesitate to increase their technological dependence on a global power perceived as unwilling to consider their development needs. 

China’s plan is in line with its longstanding foreign policy principles, as evidenced in United Nations debates over the years. Together with the group of developing nations known as the G77, China has maintained that global governance mechanisms should prioritize and be more responsive to the diverse needs and developmental stages of countries, particularly those in the developing world. Its WAIC statement emphasizing national sovereignty as a fundamental aspect of AI governance aligns with China’s goals of increasing state control at both the national and international levels. 

China’s WAIC statement also proposes exploring a global mechanism for data sharing and promoting the “orderly and free flow of data” while maintaining security. Ensuring that data flows are “secure” or “safe” is a common theme in China’s data policy, often indicating a desire to keep data under state control. Such provisions may align well with the demands for data governance sovereignty relevant to countries in Africa or the BRICS group of emerging economies. Nonetheless, African and Latin American digital rights organizations are often critical of their countries adopting Chinese-influenced data governance frameworks, since they are aware of the risks of surveillance and digital repression enabled by government control of citizens’ data. 

Overall, China’s strategy could further strengthen the support of G77 countries for a state-centered global AI governance model, ultimately advancing Beijing’s geopolitical interests. The United States must address the needs and desires of Global Majority countries if it aims to export its AI stack to those nations and outcompete China. 

Iria Puyosa is a senior research fellow at the Democracy and Tech Initiative.

The rapid proliferation of AI has led to an “adopt or risk obsolescence” mindset, with many governments, Beijing at the forefront, turning to internal balancing strategies focused on bolstering sovereign capacity to develop and use AI. China’s plan reflects a gradual shift in approach, as Beijing realizes its own growth depends on networked forms of power. 

In contrast, the dominance-based approach of the White House plan underestimates the value the United States has historically derived from its agenda-setting power in international governance. This is not just the power to influence what is discussed in international forums, but also what never gets to the table. Meanwhile, Beijing has proposed setting up a new Shanghai-based World AI Cooperation Organization and cooperative processes for AI development and governance under the UN’s Pact for the Future and Global Digital Compact. 

Beijing, it appears, is taking a page from a US playbook it has long criticized. Internet governance observers will remember that through the 2000s, Brazil, Russia, India, China, and South Africa opposed the Internet Corporation for Assigned Names and Numbers’s (ICANN’s) role in internet governance over concerns of US control. It may not be long before China rolls out an ICANN for AI. And the world may react quite differently. 

Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center. 

The post Reading between the lines of the dueling US and Chinese AI action plans appeared first on Atlantic Council.

]]>
Daniels interviewed by BBC on AI Action Plan https://www.atlanticcouncil.org/insight-impact/in-the-news/daniels-interviewed-by-bbc-on-ai-action-plan/ Mon, 04 Aug 2025 20:50:15 +0000 https://www.atlanticcouncil.org/?p=865297 On July 23, Forward Defense nonresident senior fellow Owen Daniels was interviewed by Sumi Somaskanda on BBC News regarding the administration's latest AI Action Plan.

The post Daniels interviewed by BBC on AI Action Plan appeared first on Atlantic Council.

]]>

On July 23, Forward Defense nonresident senior fellow Owen Daniels was interviewed by Sumi Somaskanda on BBC News regarding the administration’s latest AI Action Plan. In the segment, Daniels explains that while the strategy includes familiar elements for researchers, it also introduces promising initiatives such as strengthening the US open model ecosystem, enhancing evaluation and security, and expanding workforce training. He notes that questions remain about how the plan will be implemented and whether it will be adequately resourced.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

The post Daniels interviewed by BBC on AI Action Plan appeared first on Atlantic Council.

]]>
Daniels publishes article on China’s soft power strategy in AI in Foreign Affairs https://www.atlanticcouncil.org/insight-impact/in-the-news/daniels-publishes-article-on-chinas-soft-power-strategy-in-ai-in-foreign-affairs/ Mon, 04 Aug 2025 20:40:53 +0000 https://www.atlanticcouncil.org/?p=865279 On July 25, Forward Defense nonresident senior fellow Owen Daniels published an article, “China’s Overlooked AI Strategy: Beijing Is Using Soft Power to Gain Global Dominance,” in Foreign Affairs.

The post Daniels publishes article on China’s soft power strategy in AI in Foreign Affairs appeared first on Atlantic Council.

]]>

On July 25, Forward Defense nonresident senior fellow Owen Daniels published an article, “China’s Overlooked AI Strategy: Beijing Is Using Soft Power to Gain Global Dominance,” in Foreign Affairs. In the article, Daniels discusses how China is leveraging low-cost, open-source AI models as a soft power tool to expand its global influence, particularly in the developing world, posing a growing challenge to US technological leadership and strategic diplomacy.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

The post Daniels publishes article on China’s soft power strategy in AI in Foreign Affairs appeared first on Atlantic Council.

]]>
Experts react: What Trump’s new AI Action Plan means for tech, energy, the economy, and more  https://www.atlanticcouncil.org/blogs/new-atlanticist/experts-react-what-trumps-new-ai-action-plan-means-for-tech-energy-the-economy-and-more/ Wed, 23 Jul 2025 23:20:23 +0000 https://www.atlanticcouncil.org/?p=863029 Our experts unpack how the Trump administration’s AI Action Plan will impact the US tech industry, energy policy, and global AI governance.

The post Experts react: What Trump’s new AI Action Plan means for tech, energy, the economy, and more  appeared first on Atlantic Council.

]]>
“An industrial revolution, an information revolution, and a renaissance—all at once.” That’s how the Trump administration describes artificial intelligence (AI) in its new “AI Action Plan.” Released on Wednesday, the plan calls for cutting regulations to spur AI innovation and adoption, speeding up the buildout of AI data centers, exporting AI “full technology stacks” to US allies and partners, and ridding AI systems of what the White House calls “ideological bias.” How does the plan’s approach to AI policy differ from past US policy? What impacts will it have on the US AI industry and global AI governance? What are the implications for energy and the global economy? Our experts share their human-generated responses to these burning AI questions below.  

Click to jump to an expert analysis:

Graham Brookie: A deliberative and thorough plan—but three questions arise about its implementation

Trey Herr: If the US is in an AI race, where is it going?

Trisha Ray: On international partnerships, the AI Action Plan is all sticks, few carrots

Nitansha Bansal: The plan is a step forward for the AI supply chain

Raul Brens: The US can’t lead the way on AI through dominance alone

Mark Scott: The US and EU see eye-to-eye on AI, up to a point

Ananya Kumar and Nitansha Bansal: The US plan may sound like those of the UK and EU—but the differences are critical

Esteban Ponce de Leon: The plan accelerates the tension between proprietary and open-source models

Joseph Webster: On energy, watch what the plan could do for the grid and batteries


A deliberative and thorough plan—but three questions arise about its implementation

We are in an era of increasing geopolitical competition, increased interdependence, and rapid technological change. No single issue demonstrates the convergence of all three better than AI. The AI Action Plan released today reflects this reality. Throughout the first six months of the Trump administration, officials have run a thorough and deliberative policy process—which White House officials say incorporated more than ten thousand public comments from various stakeholders, especially US industry. The resulting product provides a clear articulation of AI in terms of the tech stack that underpins it and an increasingly vast ecosystem of industry segments, stakeholders, applications, and implications. 

The policy recommendations laid out in the action plan are well-organized and draw connections between scientific, domestic, and international priorities. Despite the rhetoric, there is more continuity than it may appear from the first Trump administration to the Biden administration to this action plan—especially in areas such as increasing investment in infrastructure, hardware fabrication, and outcompeting foreign adversaries in innovation and the human talent that underpins it. The AI Action Plan will continue to scale investment and growth in these areas. The key divergence is in governance and guardrails.  

Three outstanding questions stick out regarding effective implementation of the Action Plan.  

First, in an era of budget and staff cuts across the federal government, will there be enough government expertise and funding to realize much of the ambition of this plan? For example, cutting State Department staff focused on tech diplomacy or global norms could undercut parts of the international strategy. Budget cuts to the National Science Foundation could impact AI priorities from workforce to research and development.  

Second, how will the administration wield consolidated power with frameworks to reward states it views as aligned and cut funding to states it sees as unaligned? 

Third, beyond selling US technology, how will the United States not just compete against Chinese frameworks in global bodies, but also work collaboratively with allies and partners on AI norms? 

Given the pace of change, the United States’ success will be based on continuing to grow the AI ecosystem as a collective whole and for the ecosystem to iterate faster to compete more effectively. 

Graham Brookie is the Atlantic Council’s vice president for technology programs and strategy. 


If the US is in an AI race, where is it going?  

The arms race is a funny concept to apply to AI, and not just because the history of arms races is replete with countries bankrupting themselves trying to keep up with a perceived threat from abroad. The repeated emphasis on an AI “race” is still ambiguous on a crucial point—what are we racing toward?  

Consider this useful insight on arms racing in national security: “Over and over again, a promising new idea proved far more expensive than it first appeared would be the case; yet to halt midstream or refuse to try something new until its feasibility had been thoroughly tested meant handing over technical leadership to someone else.”    

 Was this written about AI? No, this comes from historian William H. McNeill writing about the British-German maritime arms race at the turn of the twentieth century. The United Kingdom and Germany raced to build ever bigger armored Dreadnoughts in an attempt to win naval supremacy based on the theory that the economic survival of seagoing countries would be determined by the ability to win a large, decisive naval battle. Industry played a key role in encouraging the competition and setting the terms of the debate, increasingly disconnected from the needs of national security  

 So, to take things back to the present, what are we racing toward when it comes to AI? The White House’s AI Action Plan hasn’t resolved this question. The plan’s Pillar 1 offers a swath of policy ideas grounded more in AI as a normal technology. Pillar 2 is more narrowly focused on infrastructure but still thin on the details of implementation. Tasking the National Institute of Standards and Technology is a common refrain and some of the previous administration’s policy priorities, such as the CHIPS Act and Secure by Design program have been essentially rebranded and relaunched. Pillar 3 calls for a renewed commitment to countering China in multilateral tech standards forums, a cruel irony as the State Department office responsible for this was just shuttered in wide-ranging layoffs announced earlier this month.  

The national security of the United States and its allies is composed of more than the capability of a single cutting-edge technology. Without knowing where this race is going, it will be hard to say when we’ve won, or if it’s worth what we lose to get there.    

Trey Herr is senior director of the Cyber Statecraft Initiative (CSI), part of the Atlantic Council Technology Programs, and assistant professor of global security and policy at American University’s School of International Service.  


On international partnerships, the AI Action Plan is all sticks, few carrots 

The AI Action Plan’s strongest message is that the United States should meet, not curb, global demand for AI. To achieve this, the plan suggests a novel and ambitious approach: full-stack AI export packages through industry consortia. 

What is the AI stack? Most definitions include five layers: infrastructure, data, development, deployment, and application. Arguably, monitoring and governance is a critical sixth layer. US companies dominate components of different layers (e.g. chips, talent, cloud services, and models). But the United States’ ability to export full-stack AI solutions, the carrot in this scenario, is limited by a rather large stick: its broad export control regime, which includes the Foreign Director Product Rule and Export Administration Regulations. 

Governance remains the layer the United States is weakest on. The AI Action Plan does emphasize countering adversarial influence in international governance bodies, such as the Organisation for Economic Co-operation and Development, the Internet Corporation for Assigned Names and Numbers, the Group of Seven (G7), the Group of Twenty (G20), and the International Telecommunication Union. However, the plan undermines the consensus-based AI governance efforts within these bodies, including an apparent jibe at the G7 Code of Conduct. If it seeks real alignment with allies and partners, the White House must outline an affirmative vision for values-based global AI governance. 

Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center, part of the Atlantic Council Technology Programs. 


The plan is a step forward for the AI supply chain

The AI Action Plan’s focus on the full AI stack—from energy infrastructure, data centers, semiconductors, and the talent pipeline to acknowledging associated risks and cybersecurity concerns—is welcome. The plan has adopted an optimistic view of the open source and open weight AI models, and it has built in provisions to create a healthy innovation ecosystem for open source AI models along with strengthening the access to compute—which is another positive policy realization on the part of the administration.

The administration appears to be cognizant that competitiveness in AI will not be achieved solely by domesticating the AI supply chain. Competitiveness in this ecosystem needs to be a multi-pronged strategy of translating domestic AI capabilities into national power faster, more efficiently, more effectively, and more economically than adversaries—driven by faster chips, smarter and more trustworthy models, a more resilient electricity grid, a robust investment infrastructure, and collaboration with allies.

This emphasis on securing the full stack means that the near-term policy will target not just innovation, but the location, sourcing, and trustworthiness of every component in the AI pipeline. The owners and users of AI supply chain components have much to look forward to. The new permitting reform could reshape the location of AI infrastructure; recognition of workforce and talent bottlenecks can lead to renewed focus on skill development and training programs; and emphasis on AI-related vulnerabilities in critical infrastructure could translate into more regular and robust information sharing apparatuses and incident response requirements for private sector executives.

In all, achieving AI competitiveness is an ambitious goal, and the plan sets the government’s agenda straight.

Nitansha Bansal is the assistant director of the Cyber Statecraft Initiative.


The US can’t lead the way on AI through dominance alone

The AI Action Plan makes one thing clear: the United States isn’t just trying to win the AI race—it’s trying to engineer the track unilaterally. With sweeping ambitions to export US-made chips, models, and standards, the plan signals a cutting-edge strategy to rally allies and counter China. But it also takes a big gamble. Rather than co-design AI governance with democratic allies and partners, it pushes a “buy American, trust American” model. This will likely ring hollow for countries across Europe and the Indo-Pacific that have invested heavily in building their own AI rules around transparency, climate action, and digital equity. 

There’s a lot to like in the plan’s push for infrastructure investment and workforce development, which is a necessary step toward building serious AI capacity. But its sidelining of critical safeguards and its dismissal of issues like misinformation, climate change, and diversity, equity, and inclusion continues to have a sandpaper effect on traditional partners and institutions that have invested heavily in aligning AI with public values. If US developers are pressured to walk away from those same principles, the alliance could fray and the social license to operate in these domains will inevitably suffer. 

The United States can lead the way—but not through dominance alone. An alliance is built on the stabilizing forces of trust, not tech stack supply chains or destabilizing attempts to force partners to follow one country’s standards. Building this trust will require working together to respond to the ways that AI shapes our societies, not just unilaterally fixating on its growth. 

Raul Brens Jr. is the director of the GeoTech Center. 


On energy, watch what the plan could do for the grid and batteries

Two energy elements in the AI Action Plan hold bipartisan promise: 

  1. Expanding the electricity grid. The action plan notes the United States should “explore solutions like advanced grid management technologies and upgrades to power lines that can increase the amount of electricity transmitted along existing routes.” In other words, advanced conductors, reconductoring, and dynamic line ratings (and more) are on the table. Both Republicans and Democrats likely agree that transmission and the grid received inadequate investment in the Biden years: The United States built only fifty-five miles of high-voltage lines in 2023, down from the average of 925 miles per year between 2015 and 2019. The University of Pennsylvania estimated that the Inflation Reduction Act’s energy provisions would cost $1.045 trillion from 2023 to 2032, but the bill included only $2.9 billion in direct funding for transmission. 
  1. Funding “leapfrog” dual-use batteries. Next-generation battery chemistries, such as solid-state or lithium-sulfur, could enhance the capabilities of autonomous vehicles and other platforms requiring on-board inference. Virtually all autonomous passenger vehicles run on batteries, and the action plan mentions self-driving cars and logistics applications. Additionally, batteries are a critical military enabler: They are deployed in drones, electronic warfare systems, robots, diesel-electric submarines, directed energy weapons, and more. Given the bipartisan interest in autonomous vehicles and US military competition with Beijing, there may be scope for bipartisan agreement on funding “leapfrog,” dual-use battery chemistries. 

Joseph Webster is a senior fellow at the Atlantic Council’s Global Energy Center and the Indo-Pacific Security Initiative. 


The US and EU see eye-to-eye on AI, up to a point

Despite the ongoing transatlantic friction between Washington and Brussels, much of what was outlined by the White House aligns with much what EU officials have similarly announced in recent months. That includes efforts to reduce bureaucratic red tape to foster AI-enabled industries, the promotion of scientific research to outline a democracy-led approach to the emerging technology, and efforts to understand AI’s impact on the labor force and to upskill workers nationwide.

Yet where problems likely will arise is how Washington seeks to promote a “Make America Great Again” approach to the export of US AI technologies to allies and the wider world. Much of that focuses on prioritizing US interests, primarily against the rise of China and its indigenous AI industry, in multinational standards bodies and other global fora—at a time when the White House has significantly pulled back from previously bipartisan issues like the maintenance of an open and interoperable internet.

This dichotomy—where the United States and EU agree on separate domestic-focused AI industrial policy agendas but disagree on how those approaches are scaled internationally—will likely be a central pain point in the ongoing transatlantic relationship on technology. Finding a path forward between Washington and Brussels must now become a short-term priority at a time when both EU and US officials are threatening tariffs against each other.

Mark Scott is a senior resident fellow at the Digital Forensic Research Lab’s Democracy + Tech Initiative within the Atlantic Council Technology Programs.


The US plan may sound like those of the UK and EU—but the differences are critical

The new AI Action Plan—like its peers from the European Union (EU) and the United Kingdom—is focused on “winning the AI race” through regulatory actions to direct and promote innovation, new investments to create and advance access to crucial AI inputs, and frameworks for international engagement and leadership. Winning the AI race is, in effect, the top priority of all three AI plans, albeit in different ways. While the EU’s AI Act wants to be the first to create regulatory guardrails, the United States’ AI plan has a strong deregulation agenda. In a significant break from other policy measures from this administration to ensure US dominance, this action plan moves away from a purely domestic orientation to the international sphere, flexing the reach of traditional US notions of power. This includes international leadership in frontier technology research and development and adoption, as well as creating global governance standards. It’s a testament to the scarcity, quality, and sizable nature of the inputs needed for global AI dominance that even the Trump administration is thinking through its strategy on AI in terms of global alignment. 

Even as each jurisdiction, including the United States, seeks to position itself as the dominant player in the AI race, there is no common scoreboard for deciding a winner for the game. Each player has devised an ambitious but distinct understanding of this “competition,” and each competition will play out through harnessing a unique combination of industrial, trade, investment, and regulatory policy tools. As the race unfolds in real time, the challenge for US policymakers is to simultaneously create the rules of the game while playing it effectively. A broad range of stakeholders, including AI companies, investors, venture capitalists, safety institutes, and allied governments seek clarity and stability. They all will watch the implementation of the US plan closely to determine their next moves.  

There are two encouraging signs in this action plan when it comes to strengthening US competitiveness:  

First, by prioritizing international diplomacy and security, the United States is positioning itself to influence the global AI playbook that will ultimately determine who reaps economic benefits from AI systems. Leading multilateral coordination on AI positions the United States to secure open markets for AI inputs, shape global adoption pathways, and protect its private sector from regulatory fragmentation and protectionism. 

Second, the plan creates a roadmap for ensuring that the United States and its allies assimilate AI capabilities faster than their adversaries. In this vein, the plan emphasizes the importance of coordinating with allies to implement and strengthen the enforcement of coordinating export controls. 

Ananya Kumar is the deputy director for Future of Money at the GeoEconomics Center. 

Nitansha Bansal is the assistant director of the Cyber Statecraft Initiative. 


The plan accelerates the tension between proprietary and open-source models

The White House’s AI Action Plan explicitly frames model superiority as essential to US dominance, but this creates profound tensions within the US ecosystem itself. As better models attract more users—who, in turn, generate training data for future improvements—we may see a self-reinforcing concentration of power among a few firms. 

This dynamic creates opportunities for leading firms to set safety standards that elevate the entire industry. A clear example is Anthropic’s “race to the top,” where competitive incentives are directly channeled into solving safety problems. When frontier labs adopt rigorous development protocols, market pressures force competitors to match or exceed these standards. However, the darker side of innovation may emerge through benchmark gaming, where pressure to demonstrate superiority incentivizes optimizing for benchmarks rather than genuine capability, risking misleadingly capable systems that excel at tests while lacking true innovation. 

Yet the AI Action Plan’s emphasis on open-source models highlights a more complex competitive landscape than market concentration alone suggests. Open-source strategies are not just defensive moves against domestic monopolization; they also represent offensive tactics in the global AI race, particularly as Chinese open-source models gain traction and threaten to establish alternative standards with millions of users worldwide. 

This dual-track competition between concentrated proprietary excellence and distributed open-source influence fundamentally redefines how firms must compete.  

Success now requires not only racing for capability supremacy but also strategically deciding what to keep proprietary and what to release in order to shape global standards. The plan’s call to “export American AI to allies and partners” through “full-stack deployment packages” suggests that the ultimate competitive advantage may lie not in the superiority of a single model, but in the ability to build dependent ecosystems where US AI becomes the essential infrastructure for global innovation. 

Esteban Ponce de León is a resident fellow at the DFRLab of the Atlantic Council. 


The post Experts react: What Trump’s new AI Action Plan means for tech, energy, the economy, and more  appeared first on Atlantic Council.

]]>
Navigating the new reality of international AI policy https://www.atlanticcouncil.org/blogs/geotech-cues/navigating-the-new-reality-of-international-ai-policy/ Mon, 21 Jul 2025 15:59:47 +0000 https://www.atlanticcouncil.org/?p=833064 To reach their goals of national AI adoption, governments must continue to advance the global policy discussion on trust, safety, and evaluations.

The post Navigating the new reality of international AI policy appeared first on Atlantic Council.

]]>
Since the start of 2025, the strategic direction of artificial intelligence (AI) policy has dramatically shifted to focus on individual nation-states’ ability to win “the global AI race” by prioritizing national technological leadership and innovation. So, what does the future hold for international AI policy? Is there appetite for meaningful work to address AI risks through testing and evaluation? Or will there be a further devolution into national adoption and investment priorities that leave little room for global collaboration?

As India prepares to host the next AI Impact Summit in New Delhi next February, there is an opportunity for national governments to advance discussions on trust and evaluations even amid tensions in the policy conversation between ensuring AI safety and advancing AI adoption. At next year’s summit, national governments should come together to encourage a collaborative, globally coordinated approach to AI governance that seeks to minimize risks while maximizing widespread adoption.

Paris: The AI adoption revolution begins

Initial momentum for policies focused on ensuring AI safety and its potential to pose existential risks to humanity began at the first UK-hosted AI Safety Summit in Bletchley Park in 2023. This discussion was further advanced in subsequent international summits in Seoul, South Korea, and San Francisco, California, in 2024. Yet, as France held the first AI Action Summit in Paris in February of this year, shortly after US President Donald Trump was sworn in for his second term and Prime Minister Keir Starmer took the helm of a brand-new Labour government in the United Kingdom, these discussions on AI risks and safety appeared to lose momentum.

At the AI Action Summit in Paris, French President Emmanuel Macron declared that now is “a time for innovation and acceleration” in AI, while US Vice President JD Vance said that “the AI future is not going to be won by hand-wringing about safety.” As the summit concluded, the United States and the United Kingdom opted not to join other countries in signing the Statement on Inclusive and Sustainable AI for People and Planet. Days later, the United Kingdom renamed its AI Safety Institute to the AI Security Institute, reflecting its shift toward focusing on the national security-related risks stemming from the most advanced AI models as opposed to addressing broader concerns around existential risks to society that AI systems might pose. This approach has also been adopted by the United States, which rebranded the US AI Safety Institute to the Center for AI Standards and Innovation in June.

The Paris AI Action Summit was an early indicator of what the first six months of 2025 would further reveal: a shift away from focusing on the potential existential risks and societal harms posed by AI. Instead, more countries have doubled down on AI research and development investments and the development of secure AI data centers, further increased their focus on extended training for large language models (LLMs), developed national AI adoption mandates, and made proposals to slow down or prevent additional regulation that may inhibit AI adoption.

AI investment and adoption mandates

The United States has taken several steps in this new direction. The Trump administration repealed several Biden-era executive actions on AI during the first few weeks of January, including repealing the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” In February, the administration issued a request for information to develop a new “AI Action Plan,” pursuant to an executive order signed in January called “Removing Barriers to American Leadership in Artificial Intelligence.” The Trump administration’s AI executive order calls for the reduction of administrative and regulatory burdens to AI development and adoption, as well as a further alignment of US AI strategy with national security interests and economic competitiveness goals. Taken together, these actions emphasize an approach that views deregulation as essential for US global leadership in AI.

Simultaneously, a policy debate has emerged in the United States over whether the federal government should preempt state-level AI legislation that would impose regulations on the industry. The industry has been concerned that the numerous and varying approaches to AI legislation being developed at the state level could create a patchwork of regulations that would render the compliance environment complicated and overwhelming.

But even as the debate over federal preemption continues, state-level proposals on AI risk management and governance have stalled. Virginia’s proposed High-Risk Artificial Intelligence Developer and Deployer Act was vetoed by Governor Glenn Youngkin in March. Meanwhile, the Texas Responsible Artificial Intelligence Governance Act was significantly slimmed down before it was signed into law last month, with all references to high-risk AI systems and corresponding prohibitions removed.

Across the pond, the European Union (EU) continues to take steps to identify ways in which the implementation of its cross-cutting EU AI Act can be simplified as part of its AI Continent Action Plan. Industry expressed growing concern over the ability to meet enforcement deadlines without additional guidance and clarifications from the EU AI Office, with forty-four European CEOs calling for a delay in its implementation. This focus on adoption over safety concerns was also reflected at the Group of Seven (G7) Leaders’ Summit held in Canada last month. At the summit, G7 leaders issued a statement on AI for Prosperity, which highlighted the ways in which AI can drive economic growth and benefit people, in addition to laying out a roadmap for AI adoption.

Adapting to this shift in global AI policy

Given this marked shift in the tone of global AI policy discussions, some might wonder whether there are still opportunities to advance conversations on AI trust and safety. Yet, businesses crave certainty and trust remains paramount to creating an ecosystem that supports adoption. Moreover, the AI landscape continues to evolve, requiring continued discussions on “what good looks like” when it comes to AI models used in a variety of enterprise applications and scenarios. Emerging technologies such as agentic AI—AI systems designed to act autonomously, making decisions and taking actions to achieve specific goals with minimal human intervention—as well as evolving enterprise deployment challenges, make it clear that 2025 does not represent the dusk of international AI policy aimed at evaluating and mitigating risks, but a potential dawn.

The upcoming AI Impact Summit in New Delhi presents an opportunity to continue conversations about how creating a robust AI testing and evaluation ecosystem can drive innovation and foster trust, furthering AI security and adoption. There are four key areas that national governments should individually prioritize in their efforts to advance AI adoption while also collaborating on a global level.

1. Assess and address regulatory gaps based on new evolutions in AI technology. Agentic AI is the next evolution of AI technology. Like other iterations of the technology, it can offer significant benefits, but at the same time, either amplify existing risks or introduce new risks because it can execute tasks autonomously. Governments should undertake an assessment of existing regulatory frameworks to ensure they account for any new risks related to agentic AI. Additionally, to the extent that gaps are identified, they should consider the creation of flexible, future-proof frameworks that can be adapted to future evolutions of AI technology.

2. Advance industry-led discussions around open-source and open-weight models, including specific considerations for national security concerns. Transparency and access vary widely across open-source and open-weight models, and researchers and businesses should understand the extent to which models and data sets remain open. Stakeholders—including national governments—need to understand not only what constitutes an open-source or open-weight model, but also what elements of those models are necessary to share downstream. Additionally, enterprises and industry players need certainty around any relevant fault lines for these considerations when choosing partners and third-party vendors when open-source or open-weight models could impact national security. Such discussions will allow enterprises to determine which models and markets offer safe and secure foundations for experimentation and what transparency measures can reasonably be expected.

3. Foster trust by encouraging the development and adoption of AI testing, benchmarks, and evaluations. Governments should encourage the adoption of globally recognized, consensus-based AI testing, benchmarks, and evaluations. Frontier model developers need to be able to understand, analyze, and iterate on their LLMs with the help of detailed performance and safety evaluations. Governments should support the development of robust testing and evaluation frameworks to ensure that such frameworks are fit for purpose, address issues such as a lack of consistency and reliability in how evaluation results are reported, and improve the availability of high-quality and trustworthy evaluation datasets. These frameworks should also be built to further understand and iterate on evaluation results to improve models without overfitting, or creating models that match the training set so closely that they fail to make correct predictions on new data.

4. Drive public-private collaboration across borders to promote AI adoption and drive risk management. The technological conversation is not bound by national borders. Thus, it is important that both public-sector and private-sector stakeholders recognize and harness the interdependence of the AI value chain while engaging in conversations about AI governance and transparency. It is also vital that policymakers and different actors in the AI value chain have a clear understanding of their roles and responsibilities. Enterprises and national governments should continue to use international fora such as the Organisation for Economic Co-operation and Development, the Global Partnership on Artificial Intelligence, the International Network of Safety Institutes, and the United Nations to facilitate public-private collaboration across borders. This will help ensure that different approaches are interoperable and that countries and organizations are best leveraging their own strengths.

***

The world must not lose the gains already made by researchers, policymakers, and enterprises that have been working to address AI risks over the past several years by over-indexing on adoption alone. The answers required to address the challenges and risks associated with AI are intertwined with the ability to capitalize on the opportunities AI presents and can ensure the accountability and security of these technologies for years to come. If AI adoption is the objective, then AI testing, evaluations, and governance are the methods. A collaborative effort to advance AI policy that reflects this fact should be every nation’s priority.


Evi Fuelle is a nonresident senior fellow at the Atlantic Council’s GeoTech Center.

Courtney Lang is a nonresident senior fellow at the Atlantic Council’s GeoTech Center.

Further Reading

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Navigating the new reality of international AI policy appeared first on Atlantic Council.

]]>
Nikoladze and Donovan cited in Centre for International Governance Innovation report on digital privacy, assets, and decentralization https://www.atlanticcouncil.org/insight-impact/in-the-news/nikoladze-and-donovan-cited-in-centre-for-international-governance-innovation-report-on-digital-privacy-assets-and-decentralization/ Mon, 21 Jul 2025 14:04:28 +0000 https://www.atlanticcouncil.org/?p=860047 Read the full report here.

The post Nikoladze and Donovan cited in Centre for International Governance Innovation report on digital privacy, assets, and decentralization appeared first on Atlantic Council.

]]>
Read the full report here.

The post Nikoladze and Donovan cited in Centre for International Governance Innovation report on digital privacy, assets, and decentralization appeared first on Atlantic Council.

]]>
Ground-zero for the US AI energy challenge: A state-level case study https://www.atlanticcouncil.org/blogs/energysource/ground-zero-for-the-us-ai-energy-challenge-a-state-level-case-study/ Fri, 18 Jul 2025 13:00:00 +0000 https://www.atlanticcouncil.org/?p=860255 Virginia's AI data center boom could double the state's power demand within a decade, forcing residents' electricity bills higher. But with careful planning and partnerships, policymakers can balance energy, economic, and emissions goals.

The post Ground-zero for the US AI energy challenge: A state-level case study appeared first on Atlantic Council.

]]>
AI growth, the advent of “hyperscalers”, and plans for new power-hungry data centers dotting the country from coast to coast have overturned previous assumptions of a stable US energy demand growth outlook. One state in particular is at the epicenter of America’s AI revolution: In 2024 alone, Virginia connected fifteen new data centers and anticipates adding another fifteen by the end of 2025. These are not isolated occurrences: already an established hub for US data centers, a recent WoodMackenzie report showed that Virginia lags only Texas as the top destination for newly announced data centers since January 2023 (boasting over 23,000 MW of capacity in the pipeline). Much of this development has been driven by Northern Virginia’s long-standing “Data Center Alley” concentrated around Washington, DC. Meanwhile, the state’s primary utility company, Dominion Energy, has suggested that the average Virginia ratepayer could see their power bills increase by 50 percent over the next fifteen years driven largely by power-hungry new data centers coming online.

As the Commonwealth considers the anticipated wave of new centers, its policymakers have an unmissable opportunity to lead the state toward a clear-eyed, viable path forward to reap economic benefits while ensuring both the affordability and sustainability of its energy system. None of these issues will be resolved quickly or easily but should be front and center as Virginia voters decide on their new governor this year and a new legislature.

STAY CONNECTED

Sign up for PowerPlay, the Atlantic Council’s bimonthly newsletter keeping you up to date on all facets of the energy transition.

Past meets present

Virginia is hardly unfamiliar with the prospect of adding new energy generation capacity to support data center growth. In addition to being a technology hotspot, the state is already a major destination for energy investment abetted by state and local governments’ decarbonization and clean energy targets. In 2019, then-Governor Ralph Northam (a Democrat) signed an executive order prioritizing clean energy expansion across the state, including goals for Virginia’s power system to achieve 30 percent renewable energy resources by 2030 and 100 percent by 2050. The next year, the Virginia state legislature formalized these commitments in the Virginia Clean Economy Act, which remains in force. 

These policies have borne fruit. One analysis found that Virginia ranks fifth of all US states in percent increase in renewable energy generation over the last decade, led by growth in solar generation capacity sufficient to power 750,000 Virginia residences. Next year, a 2.6 GW offshore wind project is scheduled to come online. Notably, Virginia’s clean energy growth record and long-term aspirations have been maintained under the state’s current Republican leadership.

Expectation vs. reality

Virginia’s renewable and clean energy goals, however, were developed before generative AI was widely commercialized and Virginia became a key destination for data centers. 

report from Virginia’s Joint Legislative Audit and Review Commission describes the growing challenge: while acknowledging that new data centers will benefit Virginia in employment and revenues, it warns that “unconstrained demand for power in Virginia would double within the next ten years, with the data center industry being the main driver.” Moreover, “[b]uilding enough infrastructure to meet unconstrained energy demand will be very difficult to achieve.” The fiscal implications of making necessary investments are potentially enormous with enormous implications for Virginians energy prices and power costs. 

Virginia faces a herculean task to meet incoming demand growth via conventional or any other fuels—let alone address it in a manner that leads to net-zero emissions by midcentury. 

Adding natural gas infrastructure, which currently supplies about half of the state’s electricity, faces two major barriers even apart from decarbonization considerations. First, the availability of new natural gas generation equipment, especially turbines, is sharply limited by supply chain bottlenecks (a situation complicated by uncertainty around the US international tariff slate). Second, constructing new natural gas power plants would likely entail expanding the network of associated infrastructure and interstate pipelines, which are time-consuming endeavors and can ignite local opposition (as the recent saga of the Mountain Valley Pipeline illustrates). 

Similarly, renewables infrastructure can theoretically come online quickly but would entail a massive expansion of transmission, distribution, and long-duration battery storage capacity. Adding renewables also requires community buy-in, which is not always assured

The way forward

Ample consideration must be given to how data centers are managed within Virginia—specifically in terms of regulations and requirements for new builds. Lawmakers debated a comprehensive state-wide AI regulatory proposal that was ultimately vetoed by Governor Glenn Youngkin (a Republican), but it is still possible to address specific energy infrastructure challenges through careful planning. Virginia officials should consider an approach that puts more responsibility on the hyperscalers themselves but also enables a constructive partnership between project developers, investors, policymakers, and local stakeholders:

The role of state officials

State officials could prioritize or incentivize new builds that can bring (and finance) their own on-site energy sources—ideally with abated, low, or zero emissions—to avoid straining the local grid system. They could also encourage new facilities in parts of the state with plentiful water resources such that the generators do not further strain areas vulnerable to water stress. Officials could adopt efficiency requirements for new builds (such as for technologies like advanced conductors) based on existing Power Usage Effectiveness (PUE) criteria, and establish guidance for continuous improvements (similar to the Energy Star model) suitable for this generation of AI. 

Local and municipal leaders’ role

Local and municipal policymakers can take leadership in facilitating shared efficiency and mitigation strategies in high-concentration regions for new builds (such as Northern Virginia). Shared infrastructure (e.g., a distribution system built for a grouping of new centers) can mitigate costs and environmental impacts of new equipment. 

Investor collaboration

Similarly, multiple investor stakeholders working together could procure, prepare, and operate less commercialized fuel sources like small modular reactors, or develop local carbon sequestration and other abatement options. They could also establish a community fund supported by local project developers to provide monies for areas like local transmission and distribution upgrades, regional transmission and upgrades in cases where imported power from other states is necessary. Such measures can help to reduce inflationary pressure for regular ratepayers and could be managed by city/regional officials and be subject to public oversight. 

When to say ‘no”

Importantly, there are likely to be instances where the potential economic benefits associated with certain proposals must be carefully balanced against wider societal impacts—such as the impact of a project on energy access, affordability, and the state’s decarbonization objectives. In cases where proposed data centers fail to meet certain requirements, officials should consider placing them lower in the interconnection queue for review and connection to external power sources. Likewise, new projects may simply be forced to wait regardless of their merits in order to shore up critical infrastructure for constituents who already rely on it. For those developers determined to operationalize as fast as possible, creative solutions or a heightened burden on said developer may be necessary. Policymakers should decide their criteria for “Yes,” “Maybe,” and even “No” since time is of the essence. They should also prepare to coordinate on those policy choices with relevant leadership at other levels of government. 

The here and now

Virginia faces a tremendous task ahead: balancing its energy and climate aspirations with a rapidly changing techno-economic context impacting the entire state is no small feat. Thoughtful consideration of both the immediate benefits and long-term implications of today’s decisions is essential, as is a careful eye to energy insecurity and affordability problems percolating throughout the state as matters stand already. To be sure, the AI revolution shows few signs of slowing down. Appropriate policies to smooth the bumpy road ahead should be prepared here and now. 

Andrea Clabough is a nonresident fellow with the Atlantic Council Global Energy Center

MEET THE AUTHOR

RELATED CONTENT

OUR WORK

The Global Energy Center develops and promotes pragmatic and nonpartisan policy solutions designed to advance global energy security, enhance economic opportunity, and accelerate pathways to net-zero emissions.

The post Ground-zero for the US AI energy challenge: A state-level case study appeared first on Atlantic Council.

]]>
The National Defense Strategy Project https://www.atlanticcouncil.org/in-depth-research-reports/report/the-national-defense-strategy-project/ Thu, 03 Jul 2025 13:00:00 +0000 https://www.atlanticcouncil.org/?p=856288 As the world enters a pivotal new phase in global security, the United States must not only respond to current challenges but also anticipate those on the horizon. 

The post The National Defense Strategy Project appeared first on Atlantic Council.

]]>
What is the biggest threat to the United States—and what should the military do about it? Where should the United States position its forces around the world? How should the US military adapt to the age of artificial intelligence (AI) and the weaponization of space? These are just some of the questions that must be addressed in the next National Defense Strategy (NDS), the foundational document through which any new administration articulates its vision for US defense policy. Published by the Department of Defense (DoD), it establishes the principles that guide US military force design, capability development, global posture, operational planning, and resource allocation.

The second Trump administration’s forthcoming effort is no ordinary NDS. It will define the DoD’s defense posture, US force structure, and modernization priorities for the next four years in a period of intensifying strategic competition, rapid technological disruption, and evolving global threats.

Against this backdrop, the Atlantic Council’s National Defense Strategy Project outlines the priorities the DoD should address in its next NDS. Our experts offer practical recommendations for implementation and identify where the United States must adapt to preserve its strategic edge and strengthen national resilience. A forward-looking defense strategy will be essential to ensuring military readiness, reinforcing deterrence, and protecting national interests—and it will play a pivotal role not only in responding to current challenges but in anticipating those on the horizon.

Read the issue briefs

Related Content

Insights & Impact

Apr 17, 2025

Clementine Starling-Daniels and Theresa Luetkefend co-author DefenseNews op-ed titled “Questions Congress should ask about DOD ‘peace through strength’ plan”

On April 16, Forward Defense director Clementine Starling-Daniels and assistant director Theresa Luetkefend published an op-ed in DefenseNews. The article, titled “Questions Congress should ask about DOD ‘peace through strength’ plan,” analyzes the Department of Defense’s Interim National Defense Strategic Guidance memo and the Trump administration’s defense priorities.

Defense Policy English

Explore the program

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

The post The National Defense Strategy Project appeared first on Atlantic Council.

]]>
To avoid an AI data-center bubble, Washington must change how it works with US states https://www.atlanticcouncil.org/blogs/new-atlanticist/to-avoid-an-ai-data-center-bubble-washington-must-change-how-it-works-with-us-states/ Wed, 02 Jul 2025 14:49:37 +0000 https://www.atlanticcouncil.org/?p=857553 Without better state-federal coordination, the United States risks sinking billions into stranded capacity, with taxpayers footing the bill.

The post To avoid an AI data-center bubble, Washington must change how it works with US states appeared first on Atlantic Council.

]]>
On Tuesday, the US Senate voted 99-1 to remove a provision in the Trump administration–backed One Big Beautiful Bill Act that would have placed a ten-year moratorium on state-level artificial intelligence (AI) regulation. The stated intent of the pause was to avoid inconsistent subnational policies in the absence of a federal framework. The Senate’s rejection of the pause highlights an important challenge for the administration: The executive branch and state governments do not always see eye to eye on AI, especially when it comes to AI infrastructure. 

The benefits of AI are diffuse and often speculative, while the impacts of the infrastructure buildout—health impacts and higher energy costs for consumers—are often tangible and local. Local opposition has already led, for example, to the blocking or delaying of $64 billion worth of data center projects in the past two years. This opposition complicates the White House’s goal of retaining the United States’ global leadership in AI. 

Even as the Trump administration pushes for a rapid expansion of data centers in the United States, greater coordination between the federal and state levels is needed to avoid oversupply and sunk costs, while ensuring that AI development serves the public interest.

Past as precedent

Data centers can be attractive sources of revenue, especially as investments shift from mature markets such as Virginia, Texas, Arizona, and California, to emerging ones such as Indiana and Louisiana. The “data center gold rush,” as some call it, is exemplified in Stargate, a $500 billion venture led by OpenAI and SoftBank focused on building AI infrastructure. In March, Stargate signed off on its first project, a $7.1 billion data center park in western Texas.

Virginia is home to the world’s largest concentration of data centers, with 5.9 gigawatts (GW) in data center capacity, a further 1.8 GW coming online soon, and a stunning 15.4 GW planned, with much of the new capacity driven by AI. For context, the global total operational data center capacity is 40 GW. Moreover, Virginia’s data center industry already contributes an estimated 74,000 jobs, $5.5 billion in labor income, and $9.1 billion in gross domestic product to the state’s economy each year.

At the same time, the data center market is being built on what may be at best an overestimation and at worst a bubble. A 2023 study notes that the cost of the new generation and transmission buildout required to support data centers could increase costs for all customers. The average household could, for instance, see an increase to its utility bill of between fourteen dollars and thirty-seven dollars per month by 2040.

A haphazard buildout will also have effects on public health. A 2024 McKinsey study quantified the cost-of-healthcare burden on households from increased emissions, finding that the most impacted communities have household incomes below the national median. The “Gold Rush” analogy, ironically, is also fitting in this context: the California Gold Rush of the 1850s displaced entire communities, compromised biodiversity, and distorted labor markets. More recently, the dot-com bubble saw a frenzied buildout of fiber optic capacity, with one post-mortem analysis presciently noting, “The investors who threw money at optics at the peak of the bubble probably would bankrupt us all if we ever hit a true ‘technological singularity.’” (Singularity here refers to a theoretical point of time when technological growth passes a point of no return, profoundly changing human civilization. It is most often cited these days in connection with artificial general intelligence.)

Pitfalls of overinflated expectations

China’s Eastern Data, Western Compute (EDWC) initiative offers both a model and a cautionary tale. Launched in 2022, it aimed to create ten national data center clusters in the country’s resource-rich, sparsely populated western regions. The data center clusters cost an estimated 400 billion renminbi ($55 billion) per year to operate, while the actual turnover in 2021 was 150 billion renminbi ($20 billion). Companies investing in these clusters are relying partly on government subsidies and mostly on expectations of future demand. Data-center service prices have fallen rapidly as supply has exploded. However, vacancy rates remain high. 

AI-directed data centers, compared to traditional ones, have greater operating costs—potentially as much as two times as high—due to higher proportionate hardware costs and the need for always-on readiness. Furthermore, AI data center hardware depreciates as the integrated circuit arrays degrade or become obsolete. As one Chinese tech news outlet put it, “If a highway is built but only a few hundred cars are on the road each year, who will bear the amortized costs?” At the same time, EDWC also demonstrates the benefit of a coordinated approach that rallies the private sector and states toward a set of common objectives. The United States can learn from the pitfalls of overinflated expectations for demand, as well as the potential of such large-scale coordination.

Data about data centers

The US federal government needs to serve as an arbiter, with minimal but targeted interventions guiding investment for the public interest and maximizing the real benefits from AI. The likes of OpenAI, Meta and Amazon may well be able to risk multibillion-dollar investments in AI data centers. But smaller operators will not survive if they cannot recover the upfront investment, as China’s ailing data center market has shown. Furthermore, if the expected demand for AI data centers does not materialize, the costs of building out and running new energy capacity will fall on the remaining customers. Finally, with some jurisdictions seeking moratoriums on building new data centers or facing local backlash to new projects, and others competing to attract data center investments through subsidies and tax breaks, Washington should take on a coordinating role. This means the federal government should provide data points to track the risk of oversupply while respecting the rights of states to assess the impact of new projects on their communities.

In conjunction with the Trump administration’s push to use federal land for data centers, the Department of Energy should be empowered to establish an AI Data Center Monitoring Initiative (AI-DCMI). The AI-DCMI should collaborate with relevant federal entities, including the Department of the Interior, the Federal Energy Regulatory Commission, and the National Institute of Standards and Technology, as well as state energy offices. 

Rather than duplicate established (albeit fragmented) efforts, the AI-DCMI could collate available metrics, such as the Department of Energy’s Data Center Energy Use Report. It could also develop new indicators where gaps remain, especially for assessing the impact on local communities, health, and competitiveness of the AI data center market, as well as the projected utilization of AI infrastructure. Federal and state incentives for data center–related projects, including the lease of federal lands, should be linked to reporting requirements under the AI-DCMI. Finally, consultations with data center operators and other nongovernment stakeholders should inform these metrics. The AI-DCMI should not have a regulatory function. But it should address a critical gap in the current ecosystem: the lack of state-federal coordination on understanding the risks of the current trajectory and concentration of AI data center investments. 

This new Gold Rush will define the United States’ edge in AI—but only if investments are grounded in real demand. Without better state-federal coordination, the United States risks sinking billions into stranded capacity, with taxpayers footing the bill, while rivals surge ahead.


Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center.

The post To avoid an AI data-center bubble, Washington must change how it works with US states appeared first on Atlantic Council.

]]>
Second-order impacts of civil artificial intelligence regulation on defense: Why the national security community must engage https://www.atlanticcouncil.org/in-depth-research-reports/report/second-order-impacts-of-civil-artificial-intelligence-regulation-on-defense-why-the-national-security-community-must-engage/ Mon, 30 Jun 2025 14:00:00 +0000 https://www.atlanticcouncil.org/?p=844784 Civil regulation of artificial intelligence (AI) is hugely complex and evolving quickly, with even otherwise well-aligned countries taking significantly different approaches. At first glance, little in the content of these regulations is directly applicable to the defense and national security community.

The post Second-order impacts of civil artificial intelligence regulation on defense: Why the national security community must engage appeared first on Atlantic Council.

]]>

Table of contents

Executive summary

Civil regulation of artificial intelligence (AI) is hugely complex and evolving quickly, with even otherwise well-aligned countries taking significantly different approaches. At first glance, little in the content of these regulations is directly applicable to the defense and national security community. The most wide-ranging and robust regulatory frameworks have specific carve-outs that exclude military and related use cases. And while governments are not blind to the need for regulations on AI used in national security and defense, these are largely detached from the wider civil AI regulation debate. However, when potential second-order or unintended consequences on defense from civil AI regulation are considered, it becomes clear that the defense and security community cannot afford to think itself special. Carve-out boundaries can, at best, be porous when the technology is inherently dual use in nature. This paper identifies three broad areas in which this porosity might have a negative impact, including 

  • market-shaping civil regulation that could affect the tools available to the defense and national security community; 
  • judicial interpretation of civil regulations that could impact the defense and national security community’s license to operate; and 
  • regulations that could add additional cost or risk to developing and deploying AI systems for defense and national security. 

This paper employs these areas as lenses through which to assess civil regulatory frameworks for AI to identify which initiatives should concern the defense and national security community. These areas are grouped by the level of resources and attention that should be applied while the civil regulatory landscape continues to develop. Private-sector AI firms with dual-use products, industry groups, government offices with national security responsibility for AI, and legislative staff should use this paper as a roadmap to understand the impact of civil AI regulation on their equities and plan to inject their perspectives into the debate. 

Introduction

Whichever side of this argument—or the gray and murky middle ground—one tends toward, it is clear that artificial intelligence (AI) is an enormously consequential technology in at least two ways. First, the AI revolution will change the way people work, live, and play. Second, the development and adoption of AI will transform the way future wars are fought, particularly in the context of US strategic competition with China. These conclusions, brought to the fore by the seemingly revolutionary advances in generative AI—as typified by ChatGPT and other large multimodal models—are natural conclusions drawn from decades of incremental advances in basic science and digital technologies. As public interest in AI and fears of its misuse rise, governments have started to regulate it. 

Much like AI itself, the global discussion on how best to regulate AI is complex and fast-changing, with big differences in approach seen even between otherwise well-aligned countries. Since the Organisation for Economic Co-operation and Development (OECD) published the first internationally agreed-upon set of principles for the responsible and trustworthy development of AI policies in 2019, the organization has identified more than 930 AI-related policy initiatives across 70 jurisdictions. The comparative analysis presented here reveals huge variation across these initiatives, which range from comprehensive legislation like the European Union (EU) AI Act to loosely managed voluntary codes of conduct, like that agreed to between the Biden administration and US technology companies. Most of the initiatives aim to improve the ability of their respective countries to thrive in the AI age; some aim to reduce the capacity of their competitors to do the same. Some take a horizontal approach focusing on specific sectors, use cases, or risk profiles, while others look vertically at specific kinds of AI systems, and some try to do bits of both. Issues around skills, supply chains, training data, and algorithm development feature varying degrees of emphasis. Almost all place some degree of responsibility on developers of AI systems, albeit voluntarily in the loosest arrangements, but knotty problems around accountability and enforcement remain. 

The defense and national security community has largely kept itself separate from the ongoing debates around civil AI regulation, focusing instead on internally directed standards and processes. The unspoken assumption seems to be that regulatory carve-outs or special considerations will insulate the community, but that view fails to consider the potential second-order implications of civil regulation, which will be market shaping and will affect a whole swath of areas in which defense has significant equity. Furthermore, the race to develop AI tools is itself now an arena of geopolitical competition with strategic consequences for defense and security, with the ability to intensify rivalries, shift economic and technological advantage, and shape new global norms. Relying on regulatory carve-outs for the development and use of AI in defense is likely to prove ineffective at best, and could seriously limit the ability of the United States and its allies to reap the rewards that AI offers as an enhancement to military capabilities on and off the battlefield. 

This paper provides a comparative analysis of the national and international regulatory initiatives that will likely be important for defense and national security, including initiatives in the United States, United Kingdom (UK), European Union, China, and Singapore, as well as the United Nations (UN), OECD, and the Group of Seven (G7). The paper assesses the potential implications of civil AI regulation on the defense and national security community by grouping them into three buckets. 

  • Be supportive: Areas or initiatives that the community should get behind and support in the short term. 
  • Be proactive: Areas that are still maturing but in which greater input is needed and the impact on the community could be significant in the medium term.  
  • Be watchful: Areas that are still maturing but in which uncertain future impacts could require the community’s input.  

Definitions

To properly survey the international landscape, this paper takes a relatively expansive view of regulation and what constitutes an AI system. 

The former is usually understood by legal professionals to mean government intervention in the private domain or a legal rule that implements such intervention.1 In this context, that definition would limit consideration to so-called “hard regulation,” largely comprising legislation and rules enforced by some kind of government organization, and would exclude softer forms of regulation such as voluntary codes of conduct and non-enforceable frameworks for risk assessment and classification. For this reason, this paper interprets regulation more loosely to mean the controlling of an activity or process, usually by means of rules, but not necessarily deriving from government action or subject to formal enforcement mechanisms. When in doubt, if a policy or regulation says it is aimed at controlling the development of AI, this paper takes it at its word. 

To define AI, this paper follows the National Artificial Intelligence Act of 2020, as enacted via the 2021 National Defense Authorization Act, which defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”2 This definition neatly encompasses the current cutting edge of narrow AI systems based on machine learning. At a later date, it might also be expected to include theorized, but not yet realized, artificial general intelligence or artificial superintelligence systems. This paper deliberately excludes efforts to control the production of advanced microchips as a precursor technology to AI, as there is already significant research and commentary on that issue. 

National and supranational regulatory initiatives

United States

Thus far, the US approach to AI regulation can perhaps best be characterized as a patchwork attempting to balance public safety and civil rights concerns with a widespread assumption that US technology companies must be allowed to innovate for the country to succeed. There is consensus that government must play a regulatory role, but a wide range of opinions on what that role should look like.

Overview

Regulatory approach

Overall, the regulatory approach is technology agnostic and focused on specific use cases, especially those relating to civil liberties, data privacy, and consumer protection. 

It should be supplemented in some jurisdictions by additional guidelines for models that are thought to present particularly severe or novel risks. The latter includes generative AI and dual-use foundation models. 

Scope of regulation

Focus on outcomes generated by AI systems with limited consideration of individual models or algorithms, except dual-use foundation model elements that use a compute-power threshold definition. 

At the federal level, heads of government agencies are individually responsible for the use of AI within their organizations, including third-party products and services. This includes training data, with particular focus on the use of data that are safety, rights, or privacy impacting as defined in existing regulation. 

Type of regulation

At the federal level, regulation should entail voluntary arrangements with industry and incorporation of AI-specific issues into existing hard regulation through adapting standards, risk management, and governance frameworks. 

Some states have put in place bespoke hard regulation of AI, including disclosure requirements, but this is generally focused on protecting existing consumer and civil rights regimes.

Target of regulation

At the federal level, voluntary arrangements are aimed at developers and deployers of AI-enabled systems and intended to protect the users of those systems, with particular focus on public services provided by or through federal agencies. Service providers might not be covered due to Section 230 of the Communications Act.

At the state level, some legislatures have placed more specific regulatory requirements on developers and deployers of AI-enabled systems to their populations, but the landscape is uneven and evolving. 

Coverage of defense and national security

Defense and national security are covered by separate regulations at the federal level, with bespoke frameworks for different components of the community. State-level regulation does not yet incorporate sector-specific use cases, but domestic policing, counterterrorism, and the National Guard could fall under future initiatives.  

Federal regulation

At the federal level, AI has been a rare area of bipartisan interest and relative agreement in recent years. The ideas raised in 2018 by then President Donald Trump in Executive Order (EO) 13845 can be traced through subsequent Biden-era initiatives, including voluntary commitments to manage the risks posed by AI, which were agreed upon with leading technology companies in mid-2023.3 However, other elements of the Biden approach to AI—such as the 2022 Blueprint for an AI Bill of Rights, which focused on potential civil rights harms of AI, and the more recent EO14110 Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence—were unlikely to survive long, with the latter explicitly called out for reversal in the 2024 Republican platform.4 Trump was able to follow through on this easily because, while EO14110 was a sweeping document that gave elements of the federal government 110 specific tasks, it was not law and was swiftly overturned.5

While EO14110 was revoked, it is not clear what might replace it.6 It seems likely that the Biden administration’s focus on protecting civil rights as laid out by the Office of Management and Budget (OMB) will become less prominent, but the political calculus is complicated and revising Biden-era AI regulation is not likely to be at the top of the Trump administration’s to-do list.7 So, the change of administration does not necessarily mean that all initiatives set in motion by Biden will halt.8 Before EO14110 was issued, at least a dozen federal agencies had already issued guidance on the use of AI in their jurisdictions and more have since followed suit.9 These may well survive, especially the more technocratic elements like the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework (NIST Framework), which is due to be expanded to cover risks that are novel to, or exacerbated by, the use of generative AI.10 The NIST Framework, along with guidance on secure software development practices related to training data for generative AI and dual-use foundation models, and a plan for global engagement on AI standards, are voluntary tools and generally politically uncontentious.11

In Congress, then-Senate Majority Leader Chuck Schumer (D-NY) led the AI charge with a program of educational Insight Forums, which led to the Bipartisan Senate AI Working Group’s Roadmap for AI Policy.12 Some areas of the roadmap support the Biden administration’s approach, most notably support for NIST, but overall it is more concerned with strengthening the US position vis-à-vis international competitors than it is with domestic regulation.13 No significant legislation on AI is on the horizon, and the roadmap’s level of ambition is likely constrained by dynamics in the House of Representatives, given that Speaker Mike Johnson is on the record arguing against overregulation of AI companies.14 A rolling set of smaller legislative changes is more likely than an omnibus AI bill, and the result will almost certainly be a regulatory regime more complex and distributed than that in the EU.15 This can already be seen in the defense sector, where the 2024 National Defense Authorization Act (NDAA) references AI 196 times and includes provisions on public procurement of AI, which were first introduced in the Advancing American AI Act.16 These provisions require the Department of Defense (DoD) to develop and implement processes to assess its ethical and responsible use of AI and a study analyzing vulnerabilities in AI-enabled military applications.17

Beyond the 2024 NDAA, the direction of travel in the national security space is less clear. The recently published National Security Memorandum (AI NSM) seemingly aligns with Trump’s worldview.18 Its stated aims are threefold: first, to maintain US leadership in the development of frontier AI systems; second, to facilitate adoption of those systems by the national security community; and third, to build stable and responsible frameworks for international AI governance.19 The AI NSM supplements self-imposed regulatory frameworks already published by the DoD and the Office of the Director of National Intelligence. But, unlike those existing frameworks, the AI NSM is almost exclusively concerned with frontier AI models.20 The AI NSM mandates a whole range of what it calls “deliberate and meaningful changes” to the ways in which the US national security community deals with AI, including significant elevation in power and authority for chief AI officers across the community.21 However, the vast majority of restrictive provisions are found in the supplementary Framework to Advance AI Governance and Risk Management in National Security, which takes an EU-style, risk-based approach with a short list of prohibited uses (including the nuclear firing chain), a longer list of “high-impact” uses that are permitted with greater oversight, and robust minimum-risk management practices to include pre-deployment risk assessments.22 Comparability with EU regulation is unlikely to endear the AI NSM to Trump, but it is interesting to note that Biden’s National Security Advisor Jake Sullivan argued that restrictive provisions for AI safety, security, and trustworthiness are key components of expediting delivering of AI capabilities, saying, “preventing misuse and ensuring high standards of accountability will not slow us down; it will actually do the opposite.”23 An efficiency-based argument is likelier with a Trump administration focused on accelerating AI adoption. 

State-level regulation

According to the National Conference of State Legislators, forty-five states introduced AI bills in 2024, and thirty-one adopted resolutions or enacted legislation.24 These measures tend to focus on consumer rights and data privacy, but with significantly different approaches seen in the three states with the most advanced legislation: California, Utah, and Colorado.25

Having previously been a leader in data privacy legislation, the California State Legislature in 2024 passed what would have been the most far-reaching AI bill in the country before it was vetoed by Governor Gavin Newsom.26 The bill had drawn criticism for potentially imposing arduous, and damaging, barriers to technological development in exactly the place where most US AI is developed.27 However, Newsom supported a host of other AI-related bills in 2024 that will place significant restrictions and safeguards around the use of AI in California, indicating that the country’s largest internal market will remain a significant force in the domestic regulation of AI.28

Colorado and Utah both successfully enacted AI legislation in 2024. Though both are consumer rights protection measures at their core, they take very different approaches. The Utah bill is quite narrowly focused on transparency and consumer protection around the use of generative AI, primarily through disclosure requirements placed on developers and deployers of AI services.29 The Colorado bill is more broadly aimed at developers and deployers of “high-risk” AI systems, which here means an AI system that is a substantial factor in making any decision that can significantly impact an individual’s legal or economic interests, such as decisions related to employment, housing, credit, and insurance.30 This essentially gives Colorado a separate anti-discriminatory framework just for AI systems, which imposes reporting, disclosure, and testing obligations with civil penalties for violation.31 This puts Colorado, not California, at the leading edge of state-level AI regulation, but that does not necessarily mean that other states will take the Colorado approach as precedent. In signing the law, Governor Jared Polis made clear that he had reservations, and a similar law was vetoed in Connecticut.32 Some states might not progress restrictive AI regulation at all. For example, Virginia Governor Glenn Youngkin recently issued an executive order aiming to increase the use of AI in state government agencies, law enforcement, and education, but there is no indication that legislation will follow anytime soon.33

However state-level legislation progresses, it is unlikely to have any direct impact on military or national security users. There is also a risk that public fears around AI could be stoked and lead to more stringent state-level regulation, especially if AI is seen to “go wrong,” leading to tangible examples of public harm. As discussed below in the context of the European Union, the use of AI in law enforcement is among the most controversial use cases. This can only be more relevant in the nation with some of the most militarized police forces in the world and a National Guard that can also serve a domestic law-enforcement role.34

International efforts

The United States has been active in a number of international initiatives relating to AI regulation, including through the UN, NATO, and the G7 Hiroshima process, which are covered later in this paper. The final element of the Biden administration’s approach to AI regulation, and the one that might be the least likely to carry through into 2025, was the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.35 The declaration is a set of non-legally binding guidelines that aims to promote responsible behavior and demonstrate US leadership in the international arena. International norms are notoriously hard to agree upon and even harder to enforce. Unsurprisingly, the declaration makes no effort to restrict the kinds of AI systems that signatories can develop in their pursuit of national defense. According to the DoD, forty-seven nations have endorsed the declaration, though China, Russia, and Iran are notably not among that number.36

China

The Chinese approach to AI regulation is relatively straightforward compared to that of the United States, with rules issued in a top-down, center-outward manner in keeping with the general mode of Chinese government.

Overview

Regulatory approach

China has a vertical, technology-driven approach with some horizontal, use-case, and sectoral elements. 

It is focused on general-purpose AI, with some additional regulation for specific use cases.

Scope of regulation

The primary unit of regulation is AI algorithms, with specific restrictions on the use of training data in some cases. 

Type of regulation

China uses hard regulation with a strong compliance regime and significant room for politically interested interpretation in enforcement.

Target of regulation

Regulation is narrowly targeted to privately owned service providers operating AI systems within China and those entities providing AI-enabled services to the Chinese population. 

Coverage of defense and national security

These areas are not covered and unlikely to be covered in the future. 

Domestic regulation

Since 2018, the Chinese government has issued four administrative provisions intended to regulate delivery of AI capabilities to the Chinese public, most notably the so-called Generative AI Regulation, which came into force in August 2023.37 This, and preceding provisions on the use of algorithmic recommendations in service provision and the more general use of deep synthesis tools, is focused on regulating algorithms rather than specific use cases.38 This vertical approach to regulation is also iterative, allowing Chinese regulators to build skills and toolsets that can adapt as the technology develops. A more comprehensive AI law is expected at some point but, at the time of writing, only a scholars’ draft released by the Chinese Academy of Social Sciences (CASS) gives outside observers insight into how the Chinese government is thinking about future AI regulation.39

The draft proposes the formation of a new government agency to coordinate and oversee AI in public services. Importantly, and unlike in the United States, the use of AI by the Chinese government itself is not covered by any proposed or existing regulations, including for military and other national security purposes. This approach will likely not change, as it serves the Chinese government’s primary goal, which is to preserve its central control over the flow of information to maintain internal political and social stability.40 The primary regulatory tool proposed by the scholars’ draft is a reporting and licensing regime in which items that appear on a negative list would require a government-approved permit for development and deployment. This approach is a way for the Chinese government to manage safety and other risks while still encouraging innovation.41 The draft is not clear about what items would be on the list, but foundational models are explicitly referenced. In addition to an emerging licensing regime and ideas about the role of a bespoke regulator, Chinese regulations have reached interim conclusions in areas in which the United States and others are still in debate. For example, the Generative AI Regulation explicitly places liability for AI systems on the service providers that make them available to the Chinese public.42

Enforcement is another area in which the Chinese government is signaling a different approach. As one commentator notes, “Chinese regulation is stocked with provisions that are straight off the wish list for AI to support supposed democratic values [. . .] yet the regulation is clearly intended to strengthen China’s authoritarian system of government.”43 Analysis from the East Asia Forum suggests that China is continuing to refine how it balances innovation and control in its approach to AI governance.44 If this is true, then the vague language in Chinese AI regulations, which would give Chinese regulators huge freedom in where and how they make enforcement decisions, could be precisely the point.45

International efforts

As noted above, China has not endorsed the United States’ Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy, but China is active on the international AI stage in other ways. At a 2018 meeting relating to the United Nations Convention on Certain Conventional Weapons, the Chinese representative presented a position paper proposing a ban on lethal autonomous weapons (LAWS).46 But Western observers doubt the motives behind the proposal, with one commentator saying it included “such a bizarrely narrow definition of lethal autonomous weapons that such a ban would appear to be both unnecessary and useless.”47 China has continued calling for a ban on LAWS in UN forums and other public spaces, but these calls are usually seen in the West as efforts to appear as a positive international actor while maintaining a position of strategic ambiguity—there is little faith that the Chinese government will practice what it preaches.48 This is most clearly seen in reactions to the Global Security Initiative (GSI) concept paper published in February 2023.49 Reacting to this proposal, which China presented as aspiring for a new and more inclusive global security architecture, the US-China Economic and Security Review Commission (USCC) responded with scorn, saying, “the GSI’s core objective appears to be the degradation of U.S.-led alliances and partnerships under the guise of a set of principles full of platitudes but empty on substantive steps for contributing to global peace.”50

Outside of the military sphere, Chinese involvement in international forums attracts similar critique. In the lead-up to the United Kingdom’s AI Safety Summit, the question of whether China would be invited, and then whether Beijing’s representatives would attend, caused controversy and criticism.51 However, that Beijing is willing to collaborate internationally in areas where it sees benefit does not mean that Beijing will toe the Western line. In fact, Western-led international regulation might not even be a particular concern for China. Shortly after the AI Safety Summit, Chinese President Xi Jinping announced a new Global AI Governance Initiative.52 As with the GSI, this effort has been met with skepticism in the United States, but there is a real risk that China’s approach could split international regulation into two spheres.53 This risk is especially salient because of the initiative’s potential appeal to the Global South. More concerningly, there is some evidence that China is pursuing a so-called proliferation-first approach, which involves pushing its AI technology into developing countries. If China manages to embed itself in the global AI infrastructure in the way that it did with fifth-generation (5G) technology, then any attempt to regulate international standards might come too late—those standards will already be Chinese.54

European Union

The European Union moved early into the AI regulation game. In August 2024, it became the first legislative body globally to issue legally binding rules around the development, deployment, and use of AI.55 Originally envisaged as a consumer protection law, early drafts of the AI Act covered AI systems only as they are used in certain narrowly limited tasks—a horizontal approach.56 However, the explosion of interest in foundational models following the release of ChatGPT in late 2022 led to an expansion in the law’s scope to include these kinds of models regardless of how and by whom they are used.

Overview

Regulatory approach

The approach is horizontal, with a vertical element for general-purpose AI systems. 

Specific use cases are based on risk assessment. 

Scope of regulation

The scope is widest for high-risk and general-purpose AI systems. This includes data, algorithms, applications, and content provenance. 

Hardware is not covered, but general-purpose AI system elements use a compute-power threshold definition. 

Type of regulation

The EU uses hard regulation with high financial penalties for noncompliance. 

A full compliance and enforcement regime is still in development but will incorporate the EU AI Office and member states’s institutions. 

Target of regulation

The regulation targets AI developers, with more limited responsibilities placed on deployers of high-risk systems. 

Coverage of defense and national security

Defense is specifically excluded on institutional competence grounds, but domestic policing use cases are covered, with some falling into the unacceptable and high-risk groups.

Internal regulation

The AI Act is an EU regulation, the strongest form of legislation that the EU can produce, and is binding and directly applicable in all member states.57 The AI Act takes a risk-based approach whereby AI systems are regulated by how they are used, based on the potential harm that use could cause to an EU citizen’s health, safety, and fundamental rights. There are four categories of risk: unacceptable, high, limited, and minimal/none. Systems in the limited and minimal categories are subject to obligations around attribution and informed consent, i.e., people must know they are talking to a chatbot or viewing an AI-generated image. At the other end of the scale, AI systems that fall within the unacceptable risk category are completely prohibited. This includes any AI system used for social scoring, unsupervised criminal profiling, or workplace monitoring; systems that exploit vulnerabilities or impair a person’s ability to make informed decisions via manipulation; biometric categorization of sensitive characteristics; untargeted use of facial recognition; and the use of real-time remote biometric identification systems in public spaces, except for narrowly defined police use cases.58

High-risk systems are subject to the most significant regulation in the AI Act and are defined as such by two mechanisms. First, AI systems used as a safety component or within a kind of product already subject to EU safety standards are automatically high risk.59 Second, AI systems are considered high risk if they are used in the following areas: biometrics; critical infrastructure; education and vocational training; employment, worker management, and access to self-employment; access to essential services; law enforcement; migration, asylum, and border-control management; and administration of justice and democratic processes.60 The majority of obligations fall on developers of high-risk AI systems, with fewer obligations placed on deployers of those systems.61

As mentioned, so-called general-purpose AI (GPAI) is covered separately in the AI Act. This addition was a significant bone of contention in the trilogue negotiation, as some member states were concerned that vertical regulation of specific kinds of AI would stifle innovation in the EU.62 As a compromise, though all developers of GPAI must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary about the content used for training, the more stringent obligations akin to those imposed on developers of high-risk systems are reserved for GPAI models that pose “systemic risk.”63 Open-license developers must comply with these restrictions only if their models fall into this last category.64

It is not yet clear exactly how the new European AI Office will coordinate compliance, implementation, and enforcement. As with all new EU regulation, interpretation through national and EU courts will be critical.65 One startling feature of the AI Act is the leeway it appears to give the technology industry by allowing developers to self-determine their AI system’s risk category, though the huge financial penalties those who violate the act  face might serve as sufficient deterrent to bad actors.66

The AI Act does not, and could never, apply directly to military or defense applications of AI because the European Union does not have authority in these areas. As expected, the text includes a general exemption for military, defense, and national security uses, but exemptions for law enforcement are far more complicated and were some of the most controversial sections in final negotiations.67 Loopholes allowing police to use AI in criminal profiling, if it is part of a larger, human-led toolkit, and the use of AI facial recognition on previously recorded video footage have caused uproar and seem likely candidates for litigation, potentially placing increased costs and uncertainty on developers working in these areas.68 This ambiguity could have knock-on effects, given the increasing overlap between military technologies and those used by police and other national security actors, especially in counterterrorism. 

International efforts

The official purpose of the AI Act is to set consistent standards across member states in order to ensure that the single market can function effectively, but some believe that this will lead the EU to effectively become the world’s AI police.69 Part of this is the simple fact that it will be a lot easier for other jurisdictions to copy and paste a regulatory model that has already been proven, but concern comes from the way that the General Data Protection Regulation (GDPR) has had huge influence outside of the territorial boundaries of the EU by placing a high cost of compliance on companies that want to do business in or with the world’s second-largest economic market.70 Similarly, EU regulations on the kinds of charging ports that can be used for small electronic devices have resulted in changes well beyond its borders.71 However, more recently, Apple has decided to hold back on releasing AI features to users in the EU, indicating that cross-border influence can run both ways.72

United Kingdom

Since 2022, the UK government has described its approach to AI regulation as innovation-friendly and flexible, designed to service the potentially contradictory goals of encouraging economic growth through innovation while also safeguarding fundamental values and the safety of the British public.73 This approach was developed under successive Conservative governments but is yet to change radically under the Labour government as it attempts to balance tensions between business-friendly elements of the party and more traditional labor activists and trade unionists.74

Overview

Regulatory approach

The approach is horizontal and sectoral for now, with some vertical elements possible for general-purpose AI systems. 

Scope of regulation

The scope is unclear. Guidance to regulators refers primarily to AI systems with some consideration of supply chain components. It will likely vary by sector. 

Type of regulation

There is hard regulation through existing sectoral regulators and their compliance and enforcement regimes, with the possibility of more comprehensive hard regulation in the future. 

Target of regulation

The target varies by sector. Guidance to existing regulators generally focuses on AI developers and deployers. 

Coverage of defense and national security

Bespoke military and national security frameworks sit alongside a broader government framework. 

Domestic regulation

The UK’s approach to AI regulation was first laid out in June 2022, followed swiftly by a National AI Strategy that December and a subsequent policy paper in August 2023, which set out the mechanisms and structures of the regulatory approach in more detail.75 However, this flurry of policy publications has not resulted in any new laws.76 During the 2024 general election campaign, members of the new Labour government initially promised to toughen AI regulation, including by forcing AI companies to release test data and conduct safety tests with independent oversight, before taking a more conciliatory tone with the technology industry and promising to speed up the regulatory process to encourage innovation.77 Though its legislative agenda initially included appropriate legislation for AI by the end of 2024, this has not been realized.78 The prevailing view seems to be that, with some specific exceptions, existing regulators are best placed to understand the needs and peculiarities of their sectors.79

Some regulators are already taking steps to incorporate AI into their frameworks. The Financial Conduct Authority’s Regulatory Sandbox allows companies to test AI-enabled products and services in a controlled environment and, by doing so, to identify consumer protection safeguards that might be necessary.80 The Digital Regulation Cooperation Forum (DRCF) recently launched its AI and Digital Hub, a twelve-month pilot program to make it easier for companies to launch new AI products and services in a safe and compliant manner, and to reduce the time it takes to bring those products and services to market.81

Though the overall approach is sectoral, there is some central authority in the UK approach. The Office for AI has no regulatory role but is expected to provide certain central functions required to monitor and evaluate the effectiveness of the regulatory framework.82 Another centrally run AI authority, the AI Safety Institute (AISI), breaks from the sectoral approach and instead focuses on “advanced AI,” which includes GPAI systems as well as narrow AI models that have the potential to cause harm in specific use cases.83 While AISI is not a regulator, several large technology companies, including OpenAI, Google, and Microsoft, have signed voluntary agreements to allow AISI to test these firms’ most advanced AI models and make changes to them if they find safety concerns.84 However, now that AISI has found significant flaws in those same models, both AISI and the companies have stepped back from that position, demonstrating the inherent limitations of voluntary regimes. In recognition of this dilemma, the forthcoming legislation referenced above is expected to make existing voluntary agreements between companies and the government legally binding.85

The most significant challenge to the current sector-based approach is likely to come from the UK Competition and Markets Authority (CMA). Having previously taken the view that flexible guiding principles would be sufficient to preserve competition and consumer protection, the CMA is now concerned that a small number of technology companies increasingly have the ability and incentive to engage in market-distorting behavior in their own interests.86 The CMA has also proposed prioritizing GPAI under new regulatory powers provided by the Digital Markets, Competition and Consumers Bill (DMCC).87 A decision to do so could have a huge impact on the AI industry, as the DMCC significantly sharpens the CMA’s teeth, giving it the power to impose fines for violation of up to 10 percent of global turnover without involvement of a judge, as well as smaller fines for senior individuals within corporate entities and consumer compensation.88

As in the United States, it is expected that any UK legislative or statutory effort to expand the regulatory power of government over AI will have some kind of exemption for national security usage.89 But, as in the United States, it does not follow that the national security community will be untouched by regulation. The UK Ministry of Defence (UK MOD) published its own AI strategy in June 2022, accompanied by a policy statement on the ethical principles that the UK armed forces will follow in developing and deploying AI-enabled capabilities.90 Both documents recognize that the use of AI in the military sphere comes with a specific set of risks and concerns that are potentially more acute than those in other sectors. These documents also stress that the use of any technology by the armed forces and their supporting organizations is already subject to a robust regime of compliance for safety, where the Defence Safety Agency has enforcement authorities; and legality, where existing obligations under UK and international human rights law and the law of armed conflict form an irreducible baseline.  

The UK’s intelligence community does not have a director of national intelligence to issue community-wide guidance on AI, but the Government Communications Headquarters (GCHQ) offers some insight into how the relevant agencies are thinking about the issue.91 Published in 2021, GCHQ’s paper on the Ethics of Artificial Intelligence predates the current regulatory discussion but slots neatly into the sectoral approach.92 In the paper, GCHQ points to existing legislative provisions that ensure its work complies with the law. Most relevant for discussion of AI is the role of the Technology Advisory Panel (TAP), which sits within the Investigatory Powers Commissioner’s Office and advises on the impact of new technologies in covert investigations.93 The implicit argument underpinning both the UK MOD and GCHQ approaches is that specific regulations or restrictions on the use of AI in national security are needed only insofar as AI presents risks that are not captured by existing processes and procedures. Ethical principles, like the five to which the UK MOD will hold itself, are intended to frame and guide those risk assessments at all stages of the capability development and deployment process, but they are not in themselves regulatory.94 As civil regulation of AI develops, it will be necessary to continue testing the assumption that the existing national security frameworks are capable of addressing AI risks and to change them as needed, including to ensure that they are sufficient to satisfy a supply base, international community, and public audience that might expect different standards. 

International efforts

In addition to active participation in multilateral discussions through the UN, OECD, and the G7, the United Kingdom has held itself out to be a global leader in AI safety. The inaugural Global AI Safety Summit held in late 2023 delivered the Bletchley Declaration, a statement signed by twenty-eight countries in which they agreed to work together to ensure “human-centric, trustworthy and responsible AI that is safe” and to “promote cooperation to address the broad range of risks posed by AI.”95 The Bletchley Declaration has been criticized for its focus on the supposed existential risks of GPAI at the expense of more immediate safety concerns and for its lack of any specific rules or roadmap.96 But it gives an indication of the areas of AI regulation in which it might be possible to find common ground, which, in turn, might limit the risk of entirely divergent regulatory regimes.97

Singapore

With a strong digital economy and a global reputation as pro-business and pro-innovation, Singapore is unsurprisingly approaching AI regulation along the same middle path between encouraging growth and preventing harms as the United Kingdom.98 Unlike the United Kingdom, Singapore has carefully maintained its position as a neutral player between the United States and China, and this positioning is reflected in its strategy documents and public statements.99

Overview

Regulatory approach

The approach is horizontal and sectoral for now, with a future vertical element for general-purpose AI systems. 

Scope of regulation

The proposed Model AI Governance Framework for Generative AI includes data, algorithms, applications, and content provenance. 

In practice, it will vary by sector. 

Type of regulation

It is hard regulation through existing sectoral regulators and their compliance and enforcement regimes. 

Target of regulation

The targets include developers, application deployers, and service providers/hosting platforms. 

Responsibility is allocated based on the level of control and differentiated by the stage in the development and deployment cycle. 

Coverage of defense and national security

No publicly available framework. 

Domestic regulation

Government activity in the area is driven by the second National AI Strategy (NAIS 2.0), which is partly a response to the increasing concern over the safety and security of AI, especially GPAI.100 NAIS 2.0 clearly recognizes that there are security risks associated with AI, but it places relatively little emphasis on threats to national security. According to NAIS 2.0, the government of Singapore wants to retain agility in its approach to regulating AI, a position backed by public statements by senior government figures. Singapore’s approach to AI regulation is sectoral and based, at least for the time being, on existing regulatory frameworks. Singapore’s regulatory bodies have been actively incorporating AI into their toolkits, most notably through the Model AI Governance Framework jointly issued by the information communications and data-protection regulators in 2019 and updated in 2020.101 The framework is aimed at private-sector organizations developing or deploying AI in their businesses. It provides guidance on key ethical and governance issues and is supported by a practical Implementation and Self-Assessment Guide and Compendium of Use Cases to make it easier for companies to map the sector- and technology-agnostic framework onto their organizations.102 Singaporean regulators have begun to issue sector-specific guidelines for AI, including the advisory guideline on the use of personal data for AI systems that provide recommendations, predictions, and decisions.103 Like the wider framework, these are non-binding and do not expand the enforcement powers of existing regulators. 

Singapore has leaned heavily on technology industry partnerships in developing other elements of its regulatory toolkit, especially its flagship AI Verify product.104 AI Verify is a voluntary governance testing framework and toolkit that aims to help companies objectively verify their systems against a set of global AI governance and ethical frameworks so that participating firms can demonstrate to users that the companies have implemented AI responsibly. AI Verify works within a company’s own digital enterprise environment and, as a self-testing and self-reporting toolkit, it has no enforcement power.105 However, the government of Singapore hopes that, by helping to identify commonalities across various global AI governance frameworks and regulations, it can build a baseline for future international regulations.106 One critical limitation of AI Verify is that it cannot test GPAI models.107 The AI Verify Foundation, which oversees AI Verify, recognized this limitation and recently conducted a public consultation to expand the 2020 Model AI Governance Framework to explicitly cover generative AI.108 The content of the final product is not yet known, and there is no indication that the government intends to translate this new framework into a bespoke AI law, but the consultation document gives important clues about how Singapore is thinking about issues such as accountability; data, including issues of copyright; testing and assurance; and content provenance.109

As mentioned, the government of Singapore places relatively little emphasis on national security in its AI policy documents, but that does not mean it is not interested or investing in AI for military and wider national security purposes.110 In 2022, Singapore became the first country to establish a separate military service to address threats in the digital domain.111 Unlike in the United States, where cyber and other digital specialties are spread across the traditional services, the Digital and Intelligence Service (DIS) brings together the whole domain, from command, control, communications, and cyber operations to implementing strategies for cloud computing and AI.112 The DIS also has specific authority to raise, train, and sustain digital forces.113 Within the DIS, the Digital Ops-Tech Centre is responsible for developing AI technologies, but publicly available information about it is sparse.114 Singapore has deployed AI-enabled technologies through the DIS on exercises, and the Defence Science and Technology Agency (DSTA) has previously stated that it wants to integrate AI into operational platforms, weapons, and back-office functions, but the Singaporean Armed Forces have not published any official position on the use of AI in military systems.115

International efforts

Singapore is increasingly taking on a regional leadership role on AI regulation. As chair of the 2024 Association of South-East Asian Nations (ASEAN) Digital Ministers’ Meeting, Singapore was instrumental in developing the ASEAN Guide on AI Governance and Ethics.116 The guide aims to establish common principles and best practices for trustworthy AI in the region but does not attempt to force a common regulatory approach. In part, this is because the ASEAN region is so politically diverse that it would be almost impossible to reach agreement on hot-button issues like censorship, but also because member countries are at wildly different levels of digital maturity.117 At the headline level, the guide bears significant similarity to US, EU, and UK policies, in that it takes a risk-based approach to governance, but the guide makes concessions to national cultures in a way that those other approaches do not.118 It is possible that some ASEAN nations might move toward a more stringent EU-style regulatory framework in the future. But, as the most mature AI power in the region, Singapore and its pro-innovation approach will likely remain influential for now.

International regulatory initiatives

At the international level, four key organizations have taken steps into the AI regulation waters—the UN, OECD, the G7 through its Hiroshima Process, and NATO. 

OECD

The OECD published its AI Principles in 2019, and they have since been agreed upon by forty-six countries, including all thirty-eight OECD member states.119 Though not legally binding, the OECD principles have been extremely influential, and it is possible to trace the five broad topic areas through all of the national and supranational approaches discussed previously.120 The OECD also provides the secretariat for the Global Partnership on AI, an international initiative promoting responsible AI use through applied co-operation projects, pilots, and experiments.121 The partnership covers a huge range of activity through its four working groups, and, though defense and national security do not feature explicitly, there are initiatives that could be influential in other forums that consider those areas. For example, the Responsible AI working group is developing technical guidelines for implementation of high-level principles that will likely influence the UN and the G7, and the Data Governance working group is producing guidelines on co-generated data and intellectual-property considerations that could have an impact on the legal use of data for training algorithms.122 Beyond these specific areas of interest, the OECD will likely remain influential in the wider AI regulation debate, not least because it has built a wide network of technical and policy experts to draw from. This value was seen in practice when the G7 asked the Global Partnership on AI to assist in developing the International Guiding Principles on AI and a voluntary Code of Conduct for AI developers that came out of the Hiroshima Process.123

Regulatory approach

The approach is horizontal and risk based.  

Scope of regulation

Regulation applies to AI systems and associated knowledge. In theory, this scope covers the whole stack. 

There is some specific consideration of algorithms and data through the Global Partnership on AI. 

Type of regulation

Regulation is soft, with no compliance regime or enforcement mechanism. 

Target of regulation

“AI actors” include anyone or any organization that plays an active role in the AI system life cycle. 

Coverage of defense and national security

None.  

G7

The G7 established the Hiroshima AI Process in 2023 to promote guardrails for GPAI systems at a global level. The Comprehensive Policy Framework agreed to by the G7 digital and technology ministers later that year includes a set of International Guiding Principles on Artificial Intelligence and a voluntary Code of Conduct for GPAI developers.124 As with the OECD AI Principles on which they are largely based, neither of these documents is legally binding. However, by choosing to focus on practical tools to support development of trustworthy AI, the Hiroshima Process will act as a benchmark for countries developing their own regulatory frameworks.125 There is some evidence that this is already happening and a suggestion that the EU might adopt a matured version of the Hiroshima Code of Conduct as part of its AI Act compliance regime.126 That will require input from the technology sector, including current and future suppliers of AI for defense and national security.  

The G7 is also taking a role in other areas that might impact AI regulation, most notably technical standards and international data flows. On the former, the G7 could theoretically play a coordination role in ensuring that disparate national standards do not lead to an incoherent regulatory landscape that is time consuming and expensive for the industry to navigate.127 However, diverging positions even within the G7 might make that difficult.128 The picture emerging in the international data flow space is only a little more optimistic. The G7 has established a new Institutional Arrangement for Partnership (IAP) to support its Data Free Flow with Trust (DFFT) initiative, but it has not yet produced any tangible outcomes.129 The EU-US Data Privacy Framework has made some progress in reducing the compliance burden associated with cross-border transfer of data through the EU-US Data Bridge and its UK-US extension, but there is still a large risk that the Court of Justice of the European Union will strike it down over concerns that it violates GDPR.130

Regulatory approach

The approach is vertical. The Hiroshima Code of Conduct applies only to general-purpose AI. 

Scope of regulation

The scope is GPAI systems, with significant focus on data, particularly data sharing and cross-border transfer. 

Type of regulation

Regulation is soft, with no compliance regime or enforcement mechanism. 

Target of regulation

Developers of GPAI are the only target. 

Coverage of defense and national security

None.  

United Nations

The UN has been cautious in its approach to AI regulation. The UN Educational, Scientific, and Cultural Organization (UNESCO) issued its global standard of AI ethics in 2021 and established the AI Ethics and Governance Lab to produce tools to help member states asses their relative preparedness to implement AI ethically and responsibly, but these largely drew on existing frameworks rather than adding anything new.131 Interest in the area ballooned following the release of ChatGPT, such that Secretary-General António Guterres convened an AI Advisory Body in late 2023 to provide guidance on future steps for global AI governance. That report, published in late 2024 and titled “Governing AI for Humanity,” did not recommend a single governance model, but it proposed establishing a regular AI policy dialogue within the UN to be supported by an international scientific panel of AI experts.132 Specific areas of concern include the need for consistent global standards for AI and data, and mechanisms to facilitate inclusion of the Global South and other currently underrepresented groups in the international dialogue on AI.133 A small AI office will be established within the UN Secretariat to coordinate these efforts.  

At the political level, the General Assembly has adopted two resolutions on AI. The first, Resolution 78/L49 on the promotion of “safe, secure and trustworthy” artificial AI systems, was drafted by the United States but drew co-sponsorship support from a wide range of countries, including some in the Global South.134 The second, Resolution 78/L86, drafted by China and supported by the United States, calls on developed countries to help developing countries strengthen their AI capacity building and enhance their representation and voice in global AI governance.135 Adoption of both resolutions by consensus could indicate global support for Chinese and US leadership on AI regulation, but the depth of that support remains unclear.136 Notably, following the adoption of Resolution 78/L86, two separate groups were established, one led by the United States and Morocco, and the other by China and Zambia.137

There is also disagreement over the role of the UN Security Council (UNSC) in addressing AI-related threats. Resolution 78/L49 does not apply to the military domain but, when introducing the draft, the US permanent representative to the UN suggested that it might serve as a model for dialogue in that area, albeit not at the UNSC.138 The UNSC held its first formal meeting focused on AI in July 2023.139 In his remarks, the secretary-general noted that both military and non-military applications of AI could have implications for global security and welcomed the idea of a new UN body to govern AI, based on the model of the International Atomic Energy Agency.140 The council has since expressed its commitment to consider the international security implications of scientific advances more systematically, but some members have raised concerns about framing the issue narrowly within a security context. At the time of writing, this remains a live issue.141

Regulatory approach

The approach is horizontal with a focus on the Sustainable Development Goals.

Scope of regulation

AI systems are broadly defined, with particular focus on data governance and avoiding biased data. 

Type of regulation

Regulation is soft, with no compliance regime or enforcement mechanism. 

Target of regulation

Resolutions refer to design, development, deployment, and use of AI systems. 

Coverage of defense and national security

Resolutions exclude military use, but there have been some discussions in the UNSC. 

NATO

NATO is not in the business of civil regulation, but it plays a major role in military standards and is included here for completeness. 

The Alliance formally adopted its first AI strategy in 2021, well before the advent of ChatGPT and other forms of GPAI.142 At that time, it was not clear how NATO intended to overcome different approaches to governance and regulatory issues among allies, nor was it obvious which of the many varied NATO bodies with an interest in AI would take the lead.143 The regulatory issue has, in some ways, become more settled with the advent of the EU’s AI Act, in that the gaps between European and non-European allies are clearer. Within NATO itself, the establishment of the Data and Artificial Intelligence Review Board (DARB) under the auspices of the assistant secretary-general for innovation, hybrid, and cyber places leadership of the AI agenda firmly within NATO Headquarters rather than NATO Allied Command Transformation.144 One of the DARB’s first priorities is to develop a responsible AI certification standard to ensure that new AI projects meet the principles of responsible use set out in the 2021 AI Strategy.145 Though this certification standard has not yet been made public, NATO is clearly making some progress in building consensus across allies. However, NATO is not a regulatory body and has no enforcement role, so it will require member states to self-police or transfer that enforcement role to a third-party organization.146

NATO requires consensus to make decisions and, with thirty-two members, consensus building is not straightforward or quick, especially on contentious issues. Technical standards might be easier for members to agree on than complex, normative issues, and technical standards are an area in which NATO happens to have a lot of experience.147 The NATO Standardization Office (NSO) is often overlooked in discussions of the Alliance’s successes, but its work to develop, agree to, and implement standards across all aspects of the Alliance’s operational and capability development has been critical.148 As the largest military standardization body in the world, NSO is uniquely placed to determine which civilian AI standards apply to military and national security use cases and identify areas where niche standards are needed. 

Regulatory approach

The approach is horizontal. AI principles apply to all types of AI. 

Scope of regulation

AI systems are broadly defined. 

Type of regulation

Regulation is soft. NATO has no enforcement mechanism, but interoperability is a key consideration for member states and might drive compliance. 

Target of regulation

The target is NATO member states developing and deploying AI within their militaries.

Coverage of defense and national security

The regulation is exclusively about this arena. 

Analysis

The regulatory landscape described above is complex and constantly evolving, with big differences in approach seen even between otherwise well-aligned countries. However, by breaking various approaches into their component parts, it is possible to see some common themes.  

Common themes

Regulatory approach

The general preference seems to be for a sectoral or use-case-based approach, framed as a pragmatic attempt to balance competing requirements to promote innovation while protecting users. However, there is increasing concern that some kinds of AI, notably large language models and other forms of GPAI, should be regulated with a vertical, technology-based approach. China looks like an outlier here, in that its approach is vertical with horizontal elements rather than the other way around, but in practice the same regulatory ground could be covered. 

Scope

There is little consensus around which elements of AI should be regulated. In cases where the framework refers simply to “AI systems” without saying explicitly whether that includes training data, specific algorithms, packaged applications, etc., it is possible to infer the intended scope through references in implementation guidance and other documentation. This approach makes sense in jurisdictions where the regulatory approach relies on existing sectoral regulators with varying focus. For example, a regulator concerned with the delivery of public utilities might be concerned with the applications deployed by the utilities providers, whereas a financial services regulator might need to look deeper into the stack to consider the underlying data and algorithms. China is again the outlier, as its regulation is specifically focused on the algorithmic level, with some coverage of training data in specific cases. 

Type of regulation

The EU and China are, so far, the only jurisdictions to have put in place hard regulations specifically addressing AI. Most other frameworks rely on existing sectoral regulators incorporating AI into their work, voluntary guidelines and best practices, or a combination of both. It is possible that the EU’s AI Act will become a model as countries increasingly turn to a legislative approach, but practical concerns and lengthy timelines mean that most compliance and enforcement regimes will remain fragmented for now. 

Target group

Almost all of the frameworks place some degree of responsibility on developers of AI systems, albeit voluntarily in the loosest arrangements. Deployers of AI systems and the service providers that make them available are less widely included. There is some suggestion that assignment of responsibility might vary across the AI life cycle, though what this means in practice is unclear, and only Singapore suggests differentiating between ex ante and ex post responsibility. Even in cases in which responsibility is clearly ascribed, it is likely that questions of legal liability for misuse or harm will take time to be worked out through the relevant judicial system. China is again an outlier here, but a more comprehensive AI law could include developers and deployers. 

Impact on defense and national security

At first glance, little of the civil regulatory frameworks discussed above relates directly to the defense and national security community, but there are at least three broad areas in which the defense and national security community might be subject to second-order or unintended consequences. 

  • Market-shaping civil regulations could affect the tools available to the defense and national security community. This area could include direct market interventions, such as modifications to antitrust law that might force incumbent suppliers to break up their companies, or second-order implications of interventions that affect the sorts of skills available in the market, the sorts of problems that skilled AI workers want to work on, and the data available to them. 
  • Judicial interpretation of civil regulations could impact the defense and national security communities’ license to operate, either by placing direct limitations on the use of AI in specific use cases, such as domestic counterterrorism, or more indirectly through concerns around legal liability. 
  • Regulations could add hidden cost or risk to the development and deployment of AI systems for defense and national security use. This area could include complex compliance regimes or fragmented technical standards that must be paid for somewhere in the value chain, or increased security risks associated with licensing or reporting of dual-use models. 

By using these areas as lenses through which to assess the tools and approaches found within civil regulatory frameworks, it is possible to begin picking out specific areas and initiatives of concern to the defense and national security community. The tables below make an initial assessment of the potential implications of civil regulation of AI on the defense and national security community by grouping them into three buckets. 

  • Be supportive: Areas or initiatives that the community should get behind and support in the short term. 
  • Be proactive: Areas that are still maturing but in which greater input is needed and the impact on the community could be significant in the medium term. 
  • Be watchful: Areas that are still maturing but in which uncertain future impacts could require the community’s input. 

The content of these tables is by no means comprehensive, but it gives an indication of areas in which the defense and national security community might wish to focus its resources and attention while the civil regulatory landscape continues to develop.

Be supportive

Areas or initiatives that the community should get behind and support in the short term

Be proactive

Areas that are still maturing but in which greater input is needed and the impact on the community could be significant in the medium term.

Be watchful

Areas that are still maturing but in which uncertain future impacts could require the community’s input

Conclusion

The AI regulatory landscape is complex and fast-changing, and likely to remain so for some time. While most of the civil regulatory approaches described here exclude defense and national security applications of AI, the intrinsic dual-use nature of AI systems means that the defense and national security community cannot afford to think of or view itself in isolation. This paper has attempted to look beyond the rules and regulations that the community chooses to place on itself to identify areas in which the boundary with civil-sector regulation is most porous. In doing so, this paper has demonstrated that regulatory carve-outs for defense and national security uses must be part of a broader solution ensuring the community’s needs and perspectives are incorporated into civil frameworks. The areas of concern identified are just a first cut of the potential second-order and unintended consequences that could limit the ability of the United States and its allies to reap the rewards that AI offers as an enhancement to military capability on and off the battlefield. Private-sector AI firms with dual-use products, industry groups, government offices with national security responsibility for AI, and legislative staff should use this paper as a roadmap to understand the impact of civil AI regulation on their equities and plan to inject their perspectives into the debate. 

About the author

Deborah Cheverton is a nonresident senior fellow in the Atlantic Council’s Forward Defense program within the Scowcroft Center for Strategy and Security and a senior trade and investment adviser with the UK embassy. 

Acknowledgements

The author would like to thank Primer AI for its generous support in sponsoring this paper. It would not have been possible without help and constructive challenge from the entire staff of the Forward Defense program, especially the steadfast support of Clementine Starling-Daniels, the editorial and grammatical expertise of Mark Massa, and the incredible patience of Abigail Rudolph.

Related content

Explore the program

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

1    Barak Orbach, “What Is Regulation?” Yale Journal on Regulation, July 25, 2016, https://www.yalejreg.com/bulletin/what-is-regulation/.
2    William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021, PubL. 116-283.PS, 134 STAT. 3388 (2021) https://www.congress.gov/116/plaws/publ283/PLAW-116publ283.pdf
3    The other EOs overridden by President Biden were: EO13859 Maintaining American Leadership in Artificial Intelligence and EO13960 Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government. “Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,” White House, press release, July 21, 2023, https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/
4    “AI Bill of Rights Making Automated Systems Work for the American People,” White House, October 2022, https://marketingstorageragrs.blob.core.windows.net/webfiles/Blueprint-for-an-AI-Bill-of-Rights.pdf; “RNC 2024 Platform,” Republican National Committee, July 8, 2024, https://www.presidency.ucsb.edu/documents/2024-republican-party-platform.
5    Ronnie Kinoshita, Luke Koslosky, and Tessa Baker, “The Executive Order on Safe, Secure, and Trustworthy AI: Decoding Biden’s AI Policy Roadmap,” Center for Security and Emerging Technology, May 3, 2024, https://cset.georgetown.edu/article/eo-14410-on-safe-secure-and-trustworthy-ai-trackers.
6    Jeff Tollefson, et al., “What Trump’s Election Win Could Mean for AI, Climate and Health,” Nature, November 8, 2024, https://www.nature.com/articles/d41586-024-03667-w; Gyana Swain, “Trump Taps Sriram Krishnan for AI Advisor Role amid Strategic Shift in Tech Policy,” CIO, Demember 23, 2024, https://ramaonhealthcare.com/trump-taps-sriram-krishnan-for-ai-advisor-role-amid-strategic-shift-in-tech-policy/
7    Trump’s allies are divided on AI. While Trump himself is friendly to the AI industry, polling shows that many Americans are worried about the impact on their jobs. Julie Ray, “Americans Express Real Concerns about Artificial Intelligence,” Gallup, August 27, 2024, https://news.gallup.com/poll/648953/americans-express-real-concerns-artificial-intelligence.aspx.
8    “OMB Releases Final Guidance Memo on the Government’s Use of AI,” Crowell & Moring, April 9, 2024, https://www.crowell.com/en/insights/client-alerts/omb-releases-final-guidance-memo-on-the-governments-use-of-ai; Gabby Miller and Justin Hendrix, “Where US Tech Policy May Be Headed during a Second Trump Term,” Tech Policy Press, November 7, 2024, https://www.techpolicy.press/where-us-tech-policy-may-be-headed-during-a-second-trump-term/; Harry Booth and Tharin Pillay, “What Donald Trump’s Win Means for AI,” Time, November 8, 2024, https://time.com/7174210/what-donald-trump-win-means-for-ai.
9    Ellen Glover, “AI Bill of Rights: What You Should Know,” Built In, March 19, 2024, https://builtin.com/artificial-intelligence/ai-bill-of-rights.
10    “AI Risk Management Framework. Artificial Intelligence Risk Management Framework (AI RMF 1.0),” National Institute of Standards and Technology, 2023, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf; “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,” National Institute of Standards and Technology, 2024, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf.
11    Harold Booth, et al., “Secure Software Development Practices for Generative AI and Dual-Use Foundation Models,” National Institute of Standards and Technology, April 2024, https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-218A.pdf; Jesse Dunietz, et al., “A Plan for Global Engagement on AI Standards,” National Institute of Standards and Technology, 2024, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-5.pdf.
12    The Insight Forums took input from experts in the field on subjects ranging from workforce implications and copyright concerns to doomsday scenarios and questions around legal liability. Gabby Miller, “US Senate AI ‘Insight Forum’ Tracker,” Tech Policy Press, December 8, 2023, https://www.techpolicy.press/us-senate-ai-insight-forum-tracker.
13    Chuck Schumer, et al., “Driving US Innovation in Artificial Intelligence,” US Senate, May 15, 2024, https://www.schumer.senate.gov/imo/media/doc/Roadmap_Electronic1.32pm.pdf.
14    The House of Representatives AI Task Force Report was published too late for inclusion in this paper. Prithvi Iyer and Justin Hendrix, “Reactions to the Bipartisan
US House AI Task Force Report,” Tech Policy Press, December 20, 2024, https://www.techpolicy.press/reactions-to-the-bipartisan-us-house-ai-task-force-report/;
Maria Curi, “What We’re Hearing: Speaker Johnson on AI,” Axios, May 2, 2024, https://www.axios.com/pro/tech-policy/2024/05/02/speaker-johnson-on-ai; Gopal Ratnam, “Schumer’s AI Road Map Might Take GOP Detour,” Roll Call, November 13, 2024, https://rollcall.com/2024/11/13/schumers-ai-road-map-might-take-gop-detour/.
15    Amber C. Thompson, et al., “Senate AI Working Group Releases Roadmap for Artificial Intelligence Policy,” Mayer Brown, May 17, 2024, https://www.mayerbrown.com/en/insights/publications/2024/05/senate-ai-working-group-releases-roadmap-for-artificial-intelligence-policy.
16    “National Defense Authorization Act for Fiscal Year 2024,” US Congress, 2023, https://www.congress.gov/bill/118th-congress/house-bill/2670.
17    “Summary of the Fiscal Year 2024 National Defense Authorization Act FY 2024,” US Senate Committee on Armed Services, 2023, https://www.armed-services.senate.gov/imo/media/doc/fy24_ndaa_conference_executive_summary1.pdf. It is possible that the 2025 NDAA could be used to progress new AI legislation.
18     “Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence,” White House, October 24, 2024, https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/.
19    Provisions relating to especially sensitive national security issues, such as countermeasures for adversarial use of AI, are reserved to a classified annex.
20    Examples of self-imposed regulation include: “DOD Adopts Ethical Principles for Artificial Intelligence,” US Department of Defense, February 24, 2020, https://www.defense.gov/News/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/; Joseph Clark, “DOD Releases AI Adoption Strategy,” US Department of Defense, November 2, 2023, https://www.defense.gov/News/News-Stories/Article/Article/3578219/dod-releases-ai-adoption-strategy; “DOD Directive 3000 09 Autonomy in Weapon Systems,” US Department of Defense, January 25, 2023, https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf; “Artificial Intelligence Ethics Framework for the Intelligence Community,” Office of the Director of National Intelligence, June 2020, https://www.intelligence.gov/artificial-intelligence-ethics-framework-for-the-intelligence-community. For full analysis of the AI NSM, see: Gregory C. Allen and Isaac Goldston, “The Biden Administration’s National Security Memorandum on AI Explained,” Center for Strategic and International Studies, October 25, 2024, https://www.csis.org/analysis/biden-administrations-national-security-memorandum-ai-explained.
21    Ibid.
22    “Framework to Advance AI Governance and Risk Management in National Security,” White House, October 24, 2024, https://ai.gov/wp-content/uploads/2024/10/NSM-Framework-to-Advance-AI-Governance-and-Risk-Management-in-National-Security.pdf.
23    “Remarks by APNSA Jake Sullivan on AI and National Security,” White House, October 25, 2024, https://www.whitehouse.gov/briefing-room/speeches-remarks/2024/10/24/remarks-by-apnsa-jake-sullivan-on-ai-and-national-security.
24    “Artificial Intelligence 2024 Legislation,” National Conference of State Legislators, June 3, 2024, https://www.ncsl.org/technology-and-communication/artificial-intelligence-2024-legislation
25    Brian Joseph, “Common Themes Emerge in State AI Legislation,” Capitol Journal, April 16, 2024, https://www.lexisnexis.com/community/insights/legal/capitol-journal/b/state-net/posts/common-themes-emerge-in-state-ai-legislation; John J. Rolecki, “Emerging Trends in AI Governance: Insights from State-Level Regulations Enacted in 2024,” National Law Review, January 6, 2025, https://natlawreview.com/article/emerging-trends-ai-governance-insights-state-level-regulations-enacted-2024.
26    Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, SB-1047 (2024), https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047.
27    Hodan Omaar, “California’s Bill to Regulate Frontier AI Models Undercuts More Sensible Federal Efforts,” Center for Data Innovation, February 20, 2024, https://datainnovation.org/2024/02/californias-bill-to-regulate-frontier-ai-models-undercuts-more-sensible-federal-efforts; Bobby Allyn, “California Gov. Newsom Vetoes AI Safety Bill That Divided Silicon Valley,” NPR, September 29, 2024, https://www.npr.org/2024/09/20/nx-s1-5119792/newsom-ai-bill-california-sb1047-tech.
28    Hope Anderson, Nick Reem, and Sara Tadayyon, “Raft of California AI Legislation Adds to Growing Patchwork of US Regulation,” White & Case, October 10, 2024, https://www.whitecase.com/insight-alert/raft-california-ai-legislation-adds-growing-patchwork-us-regulation; Myriah V. Jaworski and Ali Bloom, “A View from California: One Important Artificial Intelligence Bill Down, 17 Others Good to Go,” Clark Hill, November 5, 2024, https://www.clarkhill.com/news-events/news/a-view-from-california-one-important-artificial-intelligence-bill-down-17-others-good-to-go.
29    Scott Young and Jordan Hilton, “Utah Enacts AI-Focused Consumer Protection Bill,” Mayer Brown, May 13, 2024, https://www.mayerbrown.com/en/insights/publications/2024/05/utah-enacts-ai-focused-consumer-protection-bill.
30    “Colorado Enacts Groundbreaking Artificial Intelligence Act,” Troutman Pepper Locke, May 29, 2024, https://www.regulatoryoversight.com/2024/05/colorado-enacts-groundbreaking-artificial-intelligence-act.
31    Jake Parker, “Misgivings Cloud First-In-Nation Colorado AI Law: Implications and Considerations for the Security Industry,” Security Industry Association, May 28, 2024, https://www.securityindustry.org/2024/05/28/misgivings-cloud-first-in-nation-colorado-ai-law-implications-and-considerations-for-the-security-industry.
32    Bente Birkeland, “In Writing the Country’s Most Sweeping AI Law, Colorado Focused on Fairness, Preventing Bias,” NPR, June 22, 2024, https://www.npr.org/2024/06/22/nx-s1-4996582/artificial-intelligence-law-against-discrimination-hiring-colorado.
33    Daniel Castro, “Virginia’s New AI Executive Order Is a Model for Other States to Build On,” Center for Data Innovation, February 16, 2024, https://datainnovation.org/2024/02/virginias-new-ai-executive-order-is-a-model-for-other-states-to-build-on.
34    “War Comes Home: The Excessive Militarization of American Police,” American Civil Liberties Union, June 23, 2014, https://www.aclu.org/publications/war-comes-home-excessive-militarization-american-police; Anshu Siripurapu and Noah Berman, “What Does the U.S. National Guard Do?” Council on Foreign Relations, April 3, 2024, https://www.cfr.org/backgrounder/what-does-us-nationa-guard-do.
35    “Fact Sheet: The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” US Department of State, November 27, 2024, https://www.state.gov/political-declaration-on-the-responsible-military-use-of-artificial-intelligence-and-autonomy.
36    Brandi Vincent, “US Eyes First Multinational Meeting to Implement New ‘Responsible AI’ Declaration,” DefenseScoop, January 9, 2024, https://defensescoop.com/2024/01/09/us-eyes-first-multinational-meeting-to-implement-new-responsible-ai-declaration.
37    “How Does China’s Approach to AI Regulation Differ from the US and EU?” Forbes, July 18, 2023, https://www.forbes.com/sites/forbeseq/2023/07/18/how-does-chinas-approach-to-ai-regulation-differ-from-the-us-and-eu/?sh=47763973351c.
38    Matt Sheehan, “China’s AI Regulations and How They Get Made,” Carnegie Endowment for International Peace, July 10, 2023, https://carnegieendowment.org/research/2023/07/chinas-ai-regulations-and-how-they-get-made?lang=en.
39    CASS is an official Chinese think tank operating under the State Council. “China’s New AI Regulations,” Latham & Watkins Privacy & Cyber Practice, August 16, 2023, https://www.lw.com/admin/upload/SiteAttachments/Chinas-New-AI-Regulations.pdf; Zac Haluza, “How Will China’s Generative AI Regulations Shape the Future?” DigiChina Forum, April 26, 2023, https://digichina.stanford.edu/work/how-will-chinas-generative-ai-regulations-shape-the-future-a-digichina-forum; Zeyi Yang, “Four Things to Know about China’s New AI Rules in 2024,” MIT Technology Review, January 17, 2024, https://www.technlogyreview.com/2024/01/17/1086704/china-ai-regulation-changes-2024.
40    Sheehan, “China’s AI Regulations and How They Get Made.”
41    Graham Webster, et al., “Analyzing an Expert Proposal for China’s Artificial Intelligence Law,” DigiChina, Stanford University, August 29, 2023, https://digichina.stanford.edu/work/forum-analyzing-an-expert-proposal-for-chinas-artificial-intelligence-law.
42    Mark MacCarthy, “The US and Its Allies Should Engage with China on AI Law and Policy,” Brookings, October 19, 2023, https://www.brookings.edu/articles/the-us-and-its-allies-should-engage-with-china-on-ai-law-and-policy.
43    Matt O’Shaughnessy, “What a Chinese Regulation Proposal Reveals about AI and Democratic Values,” Carnegie Endowment for International Peace, May 16, 2023, https://carnegieendowment.org/posts/2023/05/what-a-chinese-regulation-proposal-reveals-about-ai-and-democratic-values?lang=en.
44    Huw Roberts and Emmie Hine, “The Future of AI Policy in China,” East Asia Journal, September 27, 2023, https://eastasiaforum.org/2023/09/27/the-future-of-ai-policy-in-china/.
45    Will Henshall, “How China’s New AI Rules Could Affect U.S. Companies,” Time, September 19, 2023, https://time.com/6314790/china-ai-regulation-us.
46    “CCW/GGE.1/2018/WP.7 Position Paper: Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects,” China in Delegation to UN-CCW, April 11, 2018, https://unoda-documents-library.s3.amazonaws.com/Convention_on_Certain_Conventional_Weapons_-_Group_of_Governmental_Experts_(2018)/CCW_GGE.1_2018_WP.7.pdf.
47    Gregory C. Allen, “Understanding China’s AI Strategy,” Center for a New American Security, February 6, 2019, https://www.cnas.org/publications/reports/understanding-chinas-ai-strategy.
48    Putu Shangrina Pramudia, “China’s Strategic Ambiguity on the Issue of Autonomous Weapons Systems,” Global: Jurnal Politik Internasional 24, 1 (2022), https://scholarhub.ui.ac.id/global/vol24/iss1/1/; Gregory C. Allen, “One Key Challenge for Diplomacy on AI: China’s Military Does Not Want to Talk,” Center for Strategic and International Studies, May 20, 2022, https://www.csis.org/analysis/one-key-challenge-diplomacy-ai-chinas-military-does-not-want-talk.
49    “Full Text: The Global Security Initiative Concept Paper,” Embassy of the People’s Republic of China, 2023, http://cr.china-embassy.gov.cn/esp/ndle/202302/t20230222_11029046.htm.
50    Sierra Janik, et al., “China’s Paper on Ukraine and next Steps for Xi’s Global Security Initiative,” US-China Economic and Security Review Commission, July 17, 2024, https://www.uscc.gov/research/chinas-paper-ukraine-and-next-steps-xis-global-security-initiative.
51    Joyce Hakmeh, “Balancing China’s Role in the UK’s AI Agenda,” Chatham House, October 30, 2023, https://www.chathamhouse.org/2023/10/balancing-chinas-role-uks-ai-agenda.
52    “Global AI Governance Initiative,” Embassy of the People’s Republic of China, 2023, http://gd.china-embassy.gov.cn/eng/zxhd_1/202310/t20231024_11167412.htm
53    Shannon Tiezzi, “China Renews Its Pitch on AI Governance at World Internet Conference,” Diplomat, November 9, 2023, https://thediplomat.com/2023/11/china-renews-its-pitch-on-ai-governance-at-world-internet-conference
54    Bill Drexel and Hannah Kelley, “Behind China’s Plans to Build AI for the World,” Politico, November 30, 2023, https://www.politico.com/news/magazine/2023/11/30/china-global-ai-plans-00129160.
55    “AI Act Enters into Force,” European Commission, August 1, 2024, https://commission.europa.eu/news/ai-act-enters-force-2024-08-01_en.
56    The AI Act is formally called the Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Legislative Acts.
57    Hadrien Pouget, “Institutional Context: EU Artificial Intelligence Act,” EU Artificial Intelligence Act, 2019, https://artificialintelligenceact.eu/context.
58    “Chapter 2, Article 5—Prohibited AI Practices in Regulation (EU) 2024/1689 of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA Relevance),” EUR-Lex, European Union, 2024, https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng.
59    This covers a huge swath of consumer devices including toys, medical devices, motor vehicles, and gas-burning appliances.
60    “Chapter 3, Section 1, Article 5—Classification Rules for High-Risk AI Systems in Regulation (EU) 2024/1689 of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA Relevance),” EUR-Lex, European Union, 2024, https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng.
61    Developers of high-risk AI systems must implement comprehensive risk-management and data-governance practices throughout the life cycle of the system; meet standards for accuracy, robustness, and cybersecurity; and register the system in an EU-wide public database. Mia Hoffmann, “The EU AI Act: A Primer,” Center for Security and Emerging Technology, Georgetown University, September 26, 2023, https://cset.georgetown.edu/article/the-eu-ai-act-a-primer.
62    Jedidiah Bracy, “EU AI Act: Draft Consolidated Text Leaked Online,” International Association of Privacy Professionals, January 22, 2024, https://iapp.org/news/a/eu-ai-act-draft-consolidated-text-leaked-online.
63    “Chapter 5, Section 1, Article 51—Classification of General-Purpose AI Models as General-Purpose AI Models with Systemic Risk and Article 52—Procedure in Regulation (EU) 2024/1689 of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA Relevance),” EUR-Lex, European Union, 2024, https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng.
64    Lisa Peets, Marianna Drake, and Marty Hansen, “EU AI Act: Key Takeaways from the Compromise Text,” Inside Privacy, February 28, 2024, https://www.insideprivacy.com/artificial-intelligence/eu-ai-act-key-takeaways-from-the-compromise-text.
65    Hadrien Pouget and Johann Laux, “A Letter to the EU’s Future AI Office,” Carnegie Endowment for International Peace, 2023, https://carnegieendowment.org/2023/10/03/letter-to-eu-s-future-ai-office-pub-90683
66    Hoffman, “The EU AI Act: A Primer”; Osman Gazi Güçlütürk, Siddhant Chatterjee, and Airlie Hilliard, “Penalties of the EU AI Act: The High Cost of Non-Compliance,” Holistic AI, February 18, 2024, https://www.holisticai.com/blog/penalties-of-the-eu-ai-act.
67    Jedidah Bracy and Alex LaCasse, “EU Reaches Deal on World’s First Comprehensive AI Regulation,” International Association of Privacy Professionals, December 11, 2023, https://iapp.org/news/a/eu-reaches-deal-on-worlds-first-comprehensive-ai-regulation.
68    Gian Volpicelli, “EU Set to Allow Draconian Use of Facial Recognition Tech, Say Lawmakers,” Politico, January 16, 2024, https://www.politico.eu/article/eu-ai-facial-recognition-tech-act-late-tweaks-attack-civil-rights-key-lawmaker-hahn-warns.
69    Melissa Heikkilä, “Five Things You Need to Know about the EU’s New AI Act,” MIT Technology Review, December 11, 2023, https://www.technologyreview.com/2023/12/11/1084942/five-things-you-need-to-know-about-the-eus-new-ai-act.
70    Jennifer Wu and Martin Hayward, “International Impact of the GDPR Felt Five Years On,” Pinsent Masons, June 6, 2023, https://www.pinsentmasons.com/out-law/analysis/international-impact-of-the-gdpr-felt-five-years-on.
71    Kevin Purdy, “USB-C Is Now the Law of the Land in Europe,” Wired, January 3, 2025, https://www.wired.com/story/usb-c-is-now-a-legal-requirement-for-most-rechargeable-gadgets-in-europe.
72    Apple has said that this decision isn’t related to the AI Act, but rather the earlier Digital Markets Act (DMA), which aims to prevent large companies from abusing their market power with massive fines of up to 10 percent of the company’s total worldwide annual turnover, or up to 20 percent in the event of repeated infringements. “Apple’s AI Has Now Been Released but It’s Not Coming to Europe,” Euronews and Associated Press, October 29, 2024, https://www.euronews.com/next/2024/10/29/apples-ai-has-now-been-released-but-its-not-coming-to-europe-any-time-soon.
73    Paul Shepley and Matthew Gill, “Artificial Intelligence: How Is the Government Approaching Regulation?” Institute for Government, October 27, 2023, https://www.instituteforgovernment.org.uk/explainer/artificial-intelligence-regulation.
74    Vincent Manancourt, Tom Bristow, and Laurie Clarke, “Friend or Foe: Labour’s Looming Battle on AI,” Politico, October 12, 2023, https://www.politico.eu/article/friend-or-foe-labour-party-keir-starmer-looming-battle-ai-artificial-intelligence.
75     “Establishing a Pro-Innovation Approach to Regulating AI,” UK Government, July 18, 2022, https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement; “National AI Strategy,” Government of the United Kingdom, September 22, 2021, https://www.gov.uk/government/publications/national-ai-strategy; “A Pro-Innovation Approach to AI Regulation,” Government of the United Kingdom, March 22, 2023, https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper#executive-summary.
76     This decision is likely, in part, a result of political pragmatism (legislation takes time and parliamentary time is limited) but it also reflects the nature of the United Kingdom’s parliamentary system, which allows the government of the day significant leeway in interpretation of primary legislation, including through secondary legislation and various kinds of subordinate regulatory instruments that may be delegated to public bodies. “Understanding Legislation,” Parliament of the United Kingdom, 2018, https://www.legislation.gov.uk/understanding-legislation
77     Tom Bristow, “Labour Will Toughen up AI Regulation, Starmer Says,” Politico, June 13, 2023, https://www.politico.eu/article/starmer-labour-will-bring-in-stronger-ai-regulation; Dan Milmo, “Labour Would Force AI Firms to Share Their Technology’s Test Data,” Guardian, February 4, 2024, https://www.theguardian.com/technology/2024/feb/04/labour-force-ai-firms-share-technology-test-data.
78     “King’s Speech,” Hansard, UK Parliament, July 17, 2024, https://hansard.parliament.uk/Commons/2024-07-17/debates/2D7D3E47-776E-4B81-8E2A-7854168D6FED/King%E2%80%99SSpeech; Anna Gross and George Parker, “UK’s AI Bill to Focus on ChatGPT-Style Models,” Financial Times, August 1, 2024, https://www.ft.com/content/ce53d233-073e-4b95-8579-e80d960377a4.
79    “A Pro-Innovation Approach to AI Regulation.”
80    “Regulatory Sandbox,” Financial Conduct Authority, August 1, 2023, https://www.fca.org.uk/firms/innovation/regulatory-sandbox.
81    DRCF brings together the four UK regulators with responsibilities for digital regulation—the Competition and Markets Authority (CMA), the Financial Conduct Authority (FCA), the Information Commissioner’s Office (ICO), and Ofcom—to collaborate on digital regulatory matters. “The DRCF Launches Informal Advice Service to Support Innovation and Enable Economic Growth,” Digital Regulation Cooperation Forum, April 22, 2024, https://www.drcf.org.uk/publications/press-releases/the-drcf-launches-informal-advice-service-to-support-innovation-and-enable-economic-growth
82    This includes through implementation guidelines, 10 million pounds of funding to boost regulators’ capabilities in AI, and ensuring interoperability with international regulatory frameworks. “Implementing the UK’s AI Regulatory Principles Initial Guidance for Regulators,” Government of the United Kingdom, February 2024, https://www.gov.uk/government/publications/implementing-the-uks-ai-regulatory-principles-initial-guidance-for-regulators.
83    “Introducing the AI Safety Institute,” Government of the United Kingdom, last updated January 17, 2024, https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute; “AI Safety Institute Approach to Evaluations,” Government of the United Kingdom, February 9, 2024, https://www.gov.uk/government/publications/ai-safety-institute-approach-to-evaluations/ai-safety-institute-approach-to-evaluations.
84    Madhumita Murgia, Anna Gross, and Cristina Criddle, “World’s Biggest AI Tech Companies Push UK over Safety Tests,” Financial Times, February 7, 2024, https://www.ft.com/content/105ef217-9cb2-4bd2-b843-823f79256a0e.
85    Dan Milmo, “AI Safeguards Can Easily Be Broken, UK Safety Institute Finds,” Guardian, February 9, 2024, https://www.theguardian.com/technology/2024/feb/09/ai-safeguards-can-easily-be-broken-uk-safety-institute-finds; Gross and Parker, “UK’s AI Bill to Focus on ChatGPT-Style Models.”
86     “AI Foundation Models Review: Short Version,” Competition and Markets Authority, September 18, 2023, https://assets.publishing.service.gov.uk/media/65045590dec5be000dc35f77/Short_Report_PDFA.pdf; Sarah Cardell, “Opening Remarks at the American Bar Association (ABA) Chair’s Showcase on AI Foundation Models,” Government of the United Kingdom, April 10, 2024, https://www.gov.uk/government/speeches/opening-remarks-at-the-american-bar-association-aba-chairs-showcase-on-ai-foundation-models. The CMA is known to be looking at Microsoft’s partnership with OpenAI and has recently opened a “Phase 1” investigation into Amazon’s recent $4-billion investment in Anthropic to assess whether the deal may harm competition. Ryan Browne, “Amazon’s $4 Billion Investment in AI Firm Anthropic Faces UK Merger Investigation,” CNBC, August 8, 2024, https://www.cnbc.com/2024/08/08/amazons-investment-in-ai-firm-anthropic-faces-uk-merger-investigation.html.
87    “AI Foundation Models Update Paper,” Competition and Markets Authority, 2024 https://www.gov.uk/government/publications/ai-foundation-models-update-paper.
88    Meredith Broadbent, “UK Digital Markets, Competition and Consumers Bill: Extraterritorial Regulation Affecting the Tech Investment Climate,” Center for Strategic and International Studies, March 4, 2024, https://www.csis.org/analysis/uk-digital-markets-competition-and-consumers-bill-extraterritorial-regulation-affecting.
89    “A Pro-Innovation Approach to AI Regulation.”
90    “Defence Artificial Intelligence Strategy,” Government of the United Kingdom, June 15, 2022, https://www.gov.uk/government/publications/defence-artificial-intelligence-strategy;  “Ambitious, Safe, Responsible: Our Approach to the Delivery of AI-Enabled Capability in Defence,” Government of the United Kingdom, June 15, 2022, https://www.gov.uk/government/publications/ambitious-safe-responsible-our-approach-to-the-delivery-of-ai-enabled-capability-in-defence/ambitious-safe-responsible-our-approach-to-the-delivery-of-ai-enabled-capability-in-defence.
91    GCHQ is the UK’s signal intelligence agency.
92    “Pioneering a New National Security: The Ethics of Artificial Intelligence at GCHQ,” Government of the United Kingdom, February 24, 2021, https://www.gchq.gov.uk/artificial-intelligence/index.html.
93    “Technology Advisory Panel—IPCO,” Investigatory Powers Commissioner, 2021, https://www.ipco.org.uk/who-we-are/technology-advisory-panel.
94    The five principles are: human centricity; responsibility; understanding; bias and harm mitigation; and reliability.
95    “The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023,” Government of the United Kingdom, November 1, 2023, https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023.
96    Thomas Macaulay, “World-First AI Safety Deal Exposes Agenda Set in Silicon Valley, Critics Say,” Next Web, November 2, 2023, https://thenextweb.com/news/ai-safety-summit-bletchley-declaration-concerns.
97    Sean Ó hÉigeartaigh, “Comment on the Bletchley Declaration,” Centre for the Study of Existential Risk, University of Cambridge, November 1, 2024, https://www.cser.ac.uk/news/comment-bletchley-declaration/.
98    Yeong Zee Kin, “Singapore’s Model Framework Balances Innovation and Trust in AI,” Organisation for Economic Co-operation and Development, June 24, 2020, https://oecd.ai/en/wonk/singapores-model-framework-to-balance-innovation-and-trust-in-ai.
99    Kayla Goode, Heeu Millie Kim, and Melissa Deng, “Examining Singapore’s AI Progress,” Center for Security and Emerging Technology, March 2023, https://cset.georgetown.edu/publication/examining-singapores-ai-progress.
100    “National AI Strategy,” Government of Singapore, 2019, https://www.smartnation.gov.sg/nais; Yin Ming Ho, “Singapore’s National Strategy in the Global Race for AI,” Regional Programme Political Dialogue Asia, February 26, 2024, https://www.kas.de/en/web/politikdialog-asien/digital-asia/detail/-/content/singapore-s-national-strategy-in-the-global-race-for-ai.
101    “Model AI Governance Framework Second Edition,” Personal Data Protection Commission of Singapore, January 21, 2020, https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf.
102    “Singapore’s Approach to AI Governance,” Personal Data Protection Commission, last visited January 11, 2025, https://www.pdpc.gov.sg/Help-and-Resources/2020/01/Model-AI-Governance-Framework.
103    “Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems,” Personal Data Protection Commission, last visited January 11, 2025, https://www.pdpc.gov.sg/guidelines-and-consultation/2024/02/advisory-guidelines-on-use-of-personal-data-in-ai-recommendation-and-decision-systems.
104     “AI Verify Foundation,” AI Verify Foundation, January 9, 2025, https://aiverifyfoundation.sg/ai-verify-foundation.
105    Marcus Evans, et al., “Singapore Contributes to the Development of Accessible AI Testing and Accountability Methodology with the Launch of the AI Verify Foundation and AI Verify Testing Tool,” Data Protection Report, June 15, 2023, https://www.dataprotectionreport.com/2023/06/singapore-contributes-to-the-development-of-accessible-ai-testing-and-accountability-methodology-with-the-launch-of-the-ai-verify-foundation-and-ai-verify-testing-tool.
106    Yeong Zee Kin, “Singapore’s A.I.Verify Builds Trust through Transparency,” Organisation for Economic Co-operation and Development, August 16, 2022, https://oecd.ai/en/wonk/singapore-ai-verify.
107    “What Is AI Verify?” AI Verify Foundation, last visited January 11, 2025, https://aiverifyfoundation.sg/what-is-ai-verify.
108    “Model AI Governance Framework for Generative AI,” AI Verify Foundation, May 30, 2024, https://aiverifyfoundation.sg/wp-content/uploads/2024/05/Model-AI-Governance-Framework-for-Generative-AI-May-2024-1-1.pdf.
109    Bryan Tan, “Singapore Proposes Framework for Generative AI,” Reed Smith, January 24, 2024, https://www.reedsmith.com/en/perspectives/2024/01/singapore-proposes-framework-for-generative-ai.
110    The phrase “national security” appears only once in the Generative AI proposal and not at all in the NAIS 2.0.
111    Germany established its Cyber and Information Domain Service in 2016, but it was not upgraded to a separate military service until 2024. “Establishment of the Digital and Intelligence Service: A Significant Milestone for the Next Generation SAF,” Government of Singapore, October 28, 2022, https://www.mindef.gov.sg/news-and-events/latest-releases/28oct22_nr2.
112    Mike Yeo, “Singapore Unveils New Cyber-Focused Military Service,” C4ISRNet, November 2, 2022, https://www.c4isrnet.com/cyber/2022/11/02/singapore-unveils-new-cyber-focused-military-service.
113    “Fact Sheet: The Digital and Intelligence Service,” Singapore Ministry of Defence, October 28, 2022, https://www.mindef.gov.sg/news-and-events/latest-releases/28oct22_fs.
114    “Fact Sheet: Updates to the Establishment of the Digital and Intelligence Service,” Singapore Ministry of Defence, June 30, 2022, https://www.mindef.gov.sg/news-and-events/latest-releases/30jun22_fs2.
115     “How Singapore’s Defence Tech Uses Artificial Intelligence and Digital Twins,” Singapore Defence Science and Technology Agency, November 19, 2021, https://www.dsta.gov.sg/whats-on/spotlight/how-singapore-s-defence-tech-uses-artificial-intelligence-and-digital-twins; Ridzwan Rahmat, “Singapore Validates Enhanced AI-Infused Combat System at US Wargames,” Janes, September 22, 2023, https://www.janes.com/defence-news/news-detail/singapore-validates-enhanced-ai-infused-combat-system-at-us-wargames.
116    David Hutt, “AI Regulations: What Can the EU Learn from Asia?” Deutsche Welle, August 2, 2024, https://www.dw.com/en/ai-regulations-what-can-the-eu-learn-from-asia/a-68203709
117    Sheila Chiang, “ASEAN Launches Guide for Governing AI, but Experts Say There Are Challenges,” CNBC, February 2, 2024, https://www.cnbc.com/2024/02/02/asean-launches-guide-for-governing-ai-but-experts-say-there-are-challenges.html.
118    Eunice Lim, “Global Steps to Build Trust: ASEAN’s New Guide to AI Governance and Ethics,” Workday Blog, February 9, 2024, https://blog.workday.com/en-hk/2024/global-steps-build-trust-aseans-new-guide-ai-governance-ethics.html.
119    “The OECD Artificial Intelligence (AI) Principles,” Organisation for Economic Co-operation and Development, 2019, https://oecd.ai/en/ai-principles.
120    The five topic areas are: inclusive growth and sustainable development; human-centered values and fairness; transparency and explainability; robustness, security, and safety; and, accountability.
121    “About GPAI,” Global Partnership on Artificial Intelligence, 2020, https://gpai.ai/about.
122     “Responsible AI Working Group Report,” Organisation for Economic Co-operation and Development, December 2023, https://gpai.ai/projects/responsible-ai/Responsible%20AI%20WG%20Report%202023.pdf; “Data Governance Working Group Report,” Global Partnership on Artificial Intelligence, December 2023, https://gpai.ai/projects/data-governance/Data%20Governance%20WG%20Report%202023.pdf.
123    “OECD Launches Pilot to Monitor Application of G7 Code of Conduct on Advanced AI Development,” Organisation for Economic Co-operation and Development, July 22, 2024, https://www.oecd.org/en/about/news/press-releases/2024/07/oecd-launches-pilot-to-monitor-application-of-g7-code-of-conduct-on-advanced-ai-development.html.
124    “G7 Leaders’ Statement on the Hiroshima AI Process,” European Commission, October 30, 2023, https://digital-strategy.ec.europa.eu/en/library/g7-leaders-statement-hiroshima-ai-process.
125    Hiroki Habuka, “The Path to Trustworthy AI: G7 Outcomes and Implications for Global AI Governance,” Center for Strategic and International Studies, June 6, 2023, https://www.csis.org/analysis/path-trustworthy-ai-g7-outcomes-and-implications-global-ai-governance.
126    Gregory C. Allen and Georgia Adamson, “Advancing the Hiroshima AI Process Code of Conduct under the 2024 Italian G7 Presidency: Timeline and Recommendations,” Center for Strategic and International Studies, March 27, 2024, https://www.csis.org/analysis/advancing-hiroshima-ai-process-code-conduct-under-2024-italian-g7-presidency-timeline-and.
127    Habuka, “The Path to Trustworthy AI: G7 Outcomes and Implications for Global AI Governance.”
128    Peter J. Schildkraut, “The Illusion of International Consensus—What the G7 Code of Conduct Means for Global AI Compliance Programs,” Arnold & Porter, January 18, 2024, https://www.arnoldporter.com/en/perspectives/publications/2024/01/what-the-g7-code-of-conduct-means-for-global-ai-compliance.
129    “Ministerial Declaration—G7 Industry, Technology, and Digital Ministerial Meeting,” Group of Seven, 2024, https://www.g7italy.it/en/eventi/industry-tech-and-digital/.
130    Joe Jones, “UK-US Data Bridge Becomes Law, Takes Effect 12 Oct.,” International Association of Privacy Professionals, August 21, 2023, https://iapp.org/news/a/uk-u-s-data-bridge-becomes-law-takes-effect-12-october; Camille Ford, “The EU-US Data Privacy Framework Is a Sitting Duck. PETs Might Be the Solution,” Centre for European Policy Studies, February 23, 2024, https://www.ceps.eu/the-eu-us-data-privacy-framework-is-a-sitting-duck-pets-might-be-the-solution.
131    “Ethics of Artificial Intelligence,” UNESCO, 2024, https://www.unesco.org/en/artificial-intelligence/recommendation-ethics; “Global AI Ethics and Governance Observatory,” UNESCO, 2021, https://www.unesco.org/ethics-ai/en.
132    “Governing AI for Humanity,” United Nations, September 19, 2024, https://www.un.org/Sites/Un2.Un.org/Files/Governing_ai_for_humanity_final_report_en.pdf.
133    Tess Buckley, “Governing AI for Humanity: UN Report Proposes Global Framework for AI Oversight,” TechUK, September 20, 2024, https://www.techuk.org/resource/governing-ai-for-humanity-un-report-proposes-global-framework-for-ai-oversight.html; Alexander Amato-Cravero, “UN Releases Its Final Report on ‘Governing AI for Humanity,’” Herbert Smith Freehills, October 8, 2024, https://www.herbertsmithfreehills.com/notes/tmt/2024-posts/UN-releases-its-final-report-on–Governing-AI-for-Humanity-.
134    “General Assembly Adopts Landmark Resolution on Artificial Intelligence,” United Nations, March 21, 2024, https://news.un.org/en/story/2024/03/1147831.
135    “Enhancing International Cooperation on Capacity-Building of Artificial Intelligence,” United Nations, June 25, 2024, https://documents.un.org/doc/undoc/ltd/n24/183/80/pdf/n2418380.pdf.
136    Edith Lederer, “UN Adopts Chinese Resolution with US Support on Closing the Gap in Access to Artificial Intelligence,” Associated Press, July 2, 2024, https://apnews.com/article/un-china-us-artificial-intelligence-access-resolution-56c559be7011693390233a7bafb562d1.
137    “Artificial Intelligence: High-Level Briefing,” Security Council Report, December 18, 2024, https://www.securitycouncilreport.org/whatsinblue/2024/12/artificial-intelligence-high-level-briefing.php.
138    Linda Thomas-Greenfield, “Remarks by Ambassador Thomas-Greenfield at the UN Security Council Stakeout Following the Adoption of a UNGA Resolution on Artificial Intelligence,” United States Mission to the United Nations, March 21, 2024, https://usun.usmission.gov/remarks-by-ambassador-thomas-greenfield-at-the-un-security-council-stakeout-following-the-adoption-of-a-unga-resolution-on-artificial-intelligence.
139    “July 2023 Monthly Forecast: Security Council Report,” Security Council Report, July 2, 2023, https://www.securitycouncilreport.org/monthly-forecast/2023-07/artificial-intelligence.php.
140    Michelle Nichols, “UN Security Council Meets for First Time on AI Risks,” Reuters, July 18, 2023, https://www.reuters.com/technology/un-security-council-meets-first-time-ai-risks-2023-07-18.
141    “Statement by the President of the Security Council,” United Nations, September 21, 2024, https://documents.un.org/doc/undoc/gen/n24/307/20/pdf/n2430720.pdf; “July 2023 Monthly Forecast: Security Council Report.”
142    “Summary of the NATO Artificial Intelligence Strategy,” NATO, October 22, 2021, https://www.nato.int/cps/en/natohq/official_texts_187617.htm.
143    Simona Soare, “Algorithmic Power, NATO and Artificial Intelligence,” Military Balance Blog, November 19, 2021, https://www.iiss.org/ja-JP/online-analysis/military-balance/2021/11/algorithmic-power-nato-and-artificial-intelligence.
144    “NATO Allies Take Further Steps Towards Responsible Use of AI, Data, Autonomy and Digital Transformation,” NATO, October 13, 2022, https://www.nato.int/cps/en/natohq/news_208342.htm.
145    “NATO Starts Work on Artificial Intelligence Certification Standard,” NATO, February 7, 2023, https://www.nato.int/cps/en/natohq/news_211498.htm.
146    Daniel Fata, “NATO’s Evolving Role in Developing AI Policy,” Center for Strategic and International Studies, November 8, 2022, https://www.csis.org/analysis/natos-evolving-role-developing-ai-policy.
147    Maggie Gray and Amy Ertan, “Artificial Intelligence and Autonomy in the Military: An Overview of NATO Member States’ Strategies and Deployment,” NATO Cooperative Cyber Defence Centre of Excellence, NATO, January 2021, https://ccdcoe.org/library/publications/artificial-intelligence-and-autonomy-in-the-military-an-overview-of-nato-member-states-strategies-and-deployment.
148    “Standardization,” NATO, October 14, 2022, https://www.nato.int/cps/en/natohq/topics_69269.htm.

The post Second-order impacts of civil artificial intelligence regulation on defense: Why the national security community must engage appeared first on Atlantic Council.

]]>
Why tariffs on AI hardware could undermine US competitiveness https://www.atlanticcouncil.org/blogs/new-atlanticist/why-tariffs-on-ai-hardware-could-undermine-us-competitiveness/ Sun, 15 Jun 2025 11:00:00 +0000 https://www.atlanticcouncil.org/?p=852674 Tariffs targeted at China have their uses in the US-China tech competition, but they shouldn’t be applied haphazardly to US allies and partners.

The post Why tariffs on AI hardware could undermine US competitiveness appeared first on Atlantic Council.

]]>
How can the United States maximize its international competitiveness in the development of artificial intelligence (AI)? To begin with, it can take additional steps to strengthen domestic chip fabrication capacity and friend-shore supply chains. Washington could also tighten export controls on some semiconductors and other technologies. But imposing new tariffs on essential dual-use, militarily relevant AI components from friendly partners risks having the opposite effect.

The Trump administration has launched an investigation under Section 232 of the Trade Expansion Act into the impact of semiconductor imports on national security, a step toward imposing tariffs. But if it moves ahead with tariffs on all semiconductor imports, the United States would raise hardware costs for US AI firms, punish important partners such as Mexico and Taiwan, and lower prices for Chinese competitors. Tariffs targeted at China have their uses in the US-China tech competition, but they shouldn’t be applied haphazardly to US allies and partners.

Semiconductors and dual-use imports

Today, the United States and like-minded allies and partners are competing with China in AI, or what AI entrepreneur Dario Amodei and former US Deputy National Security Advisor Matt Pottinger have described as possibly “the most powerful and strategic technology in history.” AI-related imports enable US AI companies to access cost-effective inputs and continue to outpace Chinese competitors. Since AI is an emergent technology with such large potential utility and consequences, it would be a mistake to allow China to define the rules of engagement.

Components are a key cost driver for training AI models. Key AI-related component imports include processing units, such as graphics processing units (GPUs) and central processing units (CPUs), and printed circuit assemblies (PCAs), all of which could be targeted by Section 232 tariffs. GPUS are one of the most popular computing technologies to run AI models due to their ability to train massive models and speed up inference at scale; they’re also used on board autonomous vehicles. Similarly, PCAs are critical because they house and interconnect critical components like GPUs, CPUs, memory, and networking chips inside servers and data center infrastructure. AI is a critical source of demand, although chips and printed circuits are also used by a variety of non-AI applications, including cars, computers, washing machines, routers, etc. Imports of processing units and PCAs have surged in recent months due to both AI-driven demand and companies seeking to get out ahead of tariffs.

PCA unit imports have more than quintupled since 2021, with no productivity changes to explain the jump—pointing to greater hardware needs. Consequently, if PCA prices rise due to tariffs, the US AI buildout could slow.

Two economies are prominent partners of dual-use technology, with both military and civilian applications, for the US AI sector. The first, Taiwan, not only ships leading-edge GPUs to the United States, but the Taiwan Semiconductor Manufacturing Company has committed to investing a cumulative $165 billion in the US tech sector. The second, Mexico, is the largest single aggregate supplier to the United States of GPUs and CPUs, as well as PCAs, by value. Tariffs on semiconductor inputs would punish US partners while limiting the access of US firms to the global market.

Indeed, hardware is a significant cost driver for US AI. Researchers for Epoch AI and Stanford University have found that AI accelerator chips and other server component costs comprise about half of all costs for training and experiments of machine language models. Moreover, building AI models is highly capital intensive: hyperscalers committed $200 billion in twelve-month trailing capital expenditures in 2024; Morgan Stanley projects hyperscaler capital expenditures could reach as high as $300 billion in 2025. Significantly, since hardware acquisition costs are “one to two orders of magnitude higher than amortized costs,” higher prices via tariffs could deter new AI entrants, slow adoption, and stymie dynamism. 

Unintended tariff consequences on the Chinese tech sector

While heavy tariffs would harm the US tech sector, they are unlikely to impede China in the AI race. In fact, tariffs could indirectly encourage tech transfer to China by pushing other countries, especially in Southeast Asia, to work more closely with Beijing. In mid-April, after US President Donald Trump’s announcement of global “reciprocal” tariffs and the subsequent ninety-day pause, Chinese President Xi Jinping visited Vietnam, Malaysia, and Cambodia, saying he would “safeguard the multilateral trading system.” China left these meetings with several memorandums of understanding on investment and trade, including a call to increase AI cooperation with Malaysia.

The mention of AI cooperation was striking and potentially significant. Export controls of US-designed semiconductors to China have been leaky: There is some evidence of GPU transshipment to China through Southeast Asia, notably Malaysia. The Wall Street Journal also reports that Chinese engineers are using Malaysian data centers to train AI models. Meanwhile, the export of GPUs and other computer hardware containing semiconductors from Taiwan to Malaysia reached $307 million in April (more than half the value of the same exports for all of 2024). Remarkably, Taiwan’s GPU and CPU exports to countries in the Association of Southeast Asian Nations (ASEAN) hit a record high in April—surpassing exports to the United States by value for the first time on record.

The increase in Taiwan’s semiconductor exports to ASEAN does not, by itself, demonstrate transshipment to China: Malaysia is becoming an increasingly popular spot for international data centers because of the country’s cheap real estate and its proximity to Singapore. It’s possible that the GPUs and CPUs were consumed in the domestic market. Still, it’s worth noting that recent data center entrants in Malaysia include Chinese firms. If US tariffs make countries like Malaysia more willing to work with China, that could increase the risk of US export controls being violated.

 If not tariffs, then what?

Given that non-China tariffs appear likely to harm the US tech sector and could strengthen Chinese tech firms via technology leakage, US policymakers should consider alternative tools.

The United States has been able to slow the Chinese tech sector by imposing a series of bipartisan export controls that limit Beijing’s access to high-end semiconductors. Last month, the Bureau of Industry and Security rescinded the AI Diffusion Rule, which strengthened chip-related exports. Some criticize the framework for casting too wide of a net, while others hold that export controls are a crucial economic statecraft tool for protecting US national security interests and preventing technological acquisition by strategic rivals.

Export controls are vital and necessary, but they are not a silver bullet. To outcompete China, the United States must strengthen its own capabilities, including by incentivizing manufacturing and know-how in semiconductors and other strategic technologies. This is precisely the rationale for the bipartisan CHIPS and Science Act, which was signed into law in August 2022. Tariffs alone do not provide enough support to incentivize foreign investment and domestic capacity in chip technologies. While Congress and the White House should make adjustments to the CHIPS and Science Act where appropriate, the program’s overall aims should be maintained.

No one should be unclear on the stakes, amid the global race toward artificial general intelligence (AGI)—or artificial intelligence equal to or exceeding human capabilities. Whether the race is a sprint, a marathon, or something else entirely, the technology’s productivity gains will likely prove sizable. AGI also holds obvious potential risks, but it is in the United States’ best interest to be at the forefront of setting standards and developing the regulatory environment. Accordingly, it is important for the United States to maximize its chances of obtaining this technology and integrating it before China does by securing vital, high-end semiconductors ahead of its rival.


Joseph Webster is a senior fellow at the Atlantic Council’s Global Energy Center and the Indo-Pacific Security Initiative. He also edits the independent China-Russia Report.

Jessie Yin is an assistant director at the Atlantic Council’s GeoEconomics Center. This article reflects their own personal opinions.

The post Why tariffs on AI hardware could undermine US competitiveness appeared first on Atlantic Council.

]]>
G7 leaders have the opportunity to strengthen digital resilience. Here’s how they can seize it. https://www.atlanticcouncil.org/blogs/geotech-cues/g7-leaders-have-the-opportunity-to-strengthen-digital-resilience-heres-how-they-can-seize-it/ Fri, 06 Jun 2025 17:10:35 +0000 https://www.atlanticcouncil.org/?p=852065 At the upcoming Group of Seven Leaders’ Summit in Canada, member state leaders should advance a coherent, shared framework for digital resilience policy.

The post G7 leaders have the opportunity to strengthen digital resilience. Here’s how they can seize it. appeared first on Atlantic Council.

]]>
The 2025 Group of Seven (G7) Leaders’ Summit in Kananaskis, Alberta, Canada, on June 15-17 will take place amid a growing recognition of the importance of digital resilience. This is especially apparent in Canada, the summit’s host country and current G7 president. Following his election win, Canadian Prime Minister Mark Carney announced the creation of a new Ministry of Artificial Intelligence and Digital Innovation. This bold step positions Canada to champion a digital resilience agenda at the summit that unites security, economic growth, and technological competitiveness while strengthening the resilience of its partners and allies.

The G7 must seize this opportunity to advance a coherent, shared framework for digital policy, one that is grounded in trust, reinforced by standards, and aligned with democratic values. To do so, it can build on some of the insights from the Business Seven (B7), the official business engagement group of the G7. The theme of this year’s B7 Summit, which was held from May 14 to May 16, in Ottawa, Canada, was “Bolstering Economic Security and Resiliency.” The selection of this theme emphasized the importance of defending against threats and enhancing the ability of societies, governments, and businesses to adapt and recover.

In the spirit of that theme, the Atlantic Council’s GeoTech Center, in partnership with the Cyber Statecraft Initiative and the Europe Center, convened a private breakfast discussion alongside the B7 in Ottawa on May 15. The roundtable brought together government officials, business leaders, and civil society representatives to discuss how digital resilience can be strengthened within the G7 framework. The participants laid out foundational principles and practical approaches to building digital resilience that support economic security and long-term competitiveness. As G7 leaders gather for the summit in Kananaskis later this month, they should consider these insights on how its member states can work together to bolster their digital resilience.

1. Develop a common language for shared goals on digital sovereignty

When developing a common framework, definitions (or taxonomy) are critical. Participants emphasized that shared vocabulary is a prerequisite for meaningful cooperation. Discrepancies in how countries define concepts such as digital sovereignty can lead to fundamental misunderstandings in critical areas such as risk, which creates friction and confusion.

For example, a G7 country might frame sovereignty in terms of national control over infrastructure while another country, such as China, defines it as regulating the digital information environment. In that case, this misalignment will hinder cooperation from the outset. Specifying precise definitions of each government’s goals, including “trust,” “resilience,” and “digital sovereignty,” would enable governments and industry to align on priorities and respond more effectively to emerging standards. This definitional clarity is crucial for policymaking and a prerequisite for compliance, implementation, and interoperability across borders.

2. Build on existing multilateral and regional frameworks

Participants stressed the importance of building on existing progress toward digital resilience, both in and out of the G7, rather than discarding it in pursuit of novelty. The G7 and its partners already possess a strong foundation of digital policy initiatives. Key milestones such as the Hiroshima AI Process, launched under Japan’s 2023 G7 presidency, established International Guiding Principles and an International Code of Conduct for the development and use of artificial intelligence (AI) systems, which included frontier models. Prior to the Hiroshima AI Process, several consecutive G7 Summits committed to developing the data free flow with trust framework, which prioritizes enabling the free flow of data across borders while protecting privacy, national security, and intellectual property.

Beyond the G7, participants cited European Union (EU) partnerships as examples of forward-leaning policy environments that balance innovation with safeguards. These included the EU AI continent action plan, which aims to leverage the talent and research of European industries to strengthen digital competitiveness and bolster economic growth, as well as Horizon Europe, the EU’s primary financial program for research and innovation.

With these partnership frameworks already in place, G7 leaders should build on existing work and avoid seeking to design unique solutions that may become time-consuming—particularly when it comes to gaining political buy-in. Even in areas like AI and the use of data, where policymakers have observed rapid changes since last year’s summit, the B7 discussion participants emphasized that governments can leverage work they’ve already completed in designing and implementing existing standards. If prior technical standards and regulations are inapplicable or insufficient, policymakers can still learn lessons from an in-depth assessment, including by taking note of where they’ve fallen short of their goals.

3. Start new initiatives with small working groups and pilot projects  

Ensuring digital resilience requires managing inevitable trade-offs between national security, economic vitality, and open digital ecosystems. As one participant remarked, “the digital economy is the economy,” so policies shaping cyberspace must consider both national security and economic impacts. The G7 provides a platform for frank discussions among allies and partners about how to get these trade-offs right. But waiting for buy-in from all like-minded partners risks missed opportunities in the short term.

Participants noted that by starting with smaller forums, policymakers can build consensus that can lead to real progress. Pilot projects and working groups among smaller clusters of G7 countries could build momentum and inform scalable solutions. Participants emphasized that despite the contentious nature of some of the issues surrounding digital resilience, such as protectionism and market fragmentation, G7 governments are operating with a shared set of values. These values can motivate collaboration across the G7 on the many areas of common ground they already share, but they can also provide the basis for projects among smaller groups within the G7 to get new ideas off the ground.

A pivotal summit for digital resilience

As G7 leaders meet in Kananaskis and work toward a common framework that balances digital security and economic growth, a few key lessons can be garnered from this B7 meeting. G7 member states should prioritize developing a common taxonomy and building on the progress made on digital resilience both inside and outside the G7, all while remaining responsive to shifting geopolitical dynamics.

Disagreements among member states should be viewed not as a barrier, but as evidence of a maturing policy landscape. Constructive tension can drive refinement so long as partners are clear about their priorities. The G7’s unique value lies in its ability to forge alignment among diverse actors. False consensus only delays progress. It will take transparency, specificity, and trust to move the digital resilience agenda forward.


Sara Ann Brackett is an assistant director at the Atlantic Council’s Cyber Statecraft Initiative.

Coley Felt is an assistant director at the Atlantic Council’s GeoTech Center.

Raul Brens Jr. is the acting senior director of the Atlantic Council’s GeoTech Center.

Further Reading

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post G7 leaders have the opportunity to strengthen digital resilience. Here’s how they can seize it. appeared first on Atlantic Council.

]]>
Atlantic Council, XRG, and MGX to host AI and energy summit on June 16 https://www.atlanticcouncil.org/news/press-releases/atlantic-council-xrg-and-mgx-to-host-ai-and-energy-summit-on-june-16/ Wed, 04 Jun 2025 17:00:00 +0000 https://www.atlanticcouncil.org/?p=851645 Harnessing energy for AI’s power surge as leaders in AI, energy, policy, and finance unite to shape the future of AI infrastructure WASHINGTON, DC — June 4, 2025 —   The Atlantic Council, XRG, and MGX, will convene the ENACT summit in Washington, DC, on June 16, bringing together global leaders from the energy, technology, and […]

The post Atlantic Council, XRG, and MGX to host AI and energy summit on June 16 appeared first on Atlantic Council.

]]>

Harnessing energy for AI’s power surge as leaders in AI, energy, policy, and finance unite to shape the future of AI infrastructure

WASHINGTON, DC June 4, 2025    The Atlantic Council, XRG, and MGX, will convene the ENACT summit in Washington, DC, on June 16, bringing together global leaders from the energy, technology, and finance sectors to explore the intersection of artificial intelligence, energy systems, and investment.

A slate of high-level leaders — including U.S. Secretary of Energy Chris Wright and the United Arab Emirates’ Minister of Industry and Advanced Technology Dr. Sultan Ahmed Al Jaber — will help shape the conversation on powering the future of AI.

Launched with support from XRG, the UAE’s global energy investment company, and MGX, the UAE’s leading AI and advanced technology investor, ENACT (Energy and Action) is a future-focused platform designed to advance practical solutions to how the energy, tech, and finance sectors can power the future of global AI for a pro-growth world.

“AI is supercharging progress, but in doing so, it is also supercharging energy demand. By convening leaders from energy, technology, policy and finance, ENACT will connect the dots between sectors to help drive coordinated solutions that ensure that the era of AI has the power it needs. This gathering will also seek to unlock AI’s potential to enhance energy efficiency and abundance that represent the bedrock of sustainable growth and global prosperity,” said Al Jaber, who is also the managing director and group CEO of ADNOC and executive chairman of XRG.

“Artificial intelligence is rapidly becoming the foundation of modern economies, driving surging demand for both digital and physical infrastructure. Its continued advancement depends on reliable, scalable energy — a critical enabler of global AI expansion. We must collectively invest in the core systems — power generation, advanced grid technologies, and high-efficiency compute — to ensure AI growth is sustainable, secure, and accessible worldwide. Partnering with XRG and the Atlantic Council at ENACT underscores our commitment to building the infrastructure that AI’s future requires,” said Ahmed Yahia, CEO and managing director of MGX.

The summit will take place one day ahead of the Atlantic Council’s ninth Global Energy Forum, held June 17-18 in Washington, DC.  These back-to-back summits will foster international cooperation at the nexus of energy, technology, and geopolitics.

“There is an unprecedented opportunity to leverage artificial intelligence as a tool for net-growth as we navigate the challenges of a transforming energy system,” said Frederick Kempe, president and CEO of the Atlantic Council. “We’re excited to co-host ENACT with XRG to establish an action agenda to meet this challenge, by convening the right energy, tech and policy leaders to pioneer the path forward.”

This ENACT convening builds on the momentum of the ENACT Majlis in Abu Dhabi, where more than 80 global leaders laid the groundwork for pragmatic action and positive energy solutions.

More information about ENACT is available by contacting Katie Kenney, Global Energy Center Deputy Director, at KKenney@atlanticcouncil.org. Participants may register for the Atlantic Council Global Energy Forum by visiting our website.

About the Atlantic Council

The Atlantic Council promotes constructive leadership and engagement in international affairs based on the Atlantic community’s central role in meeting global challenges. The Council provides an essential forum for navigating the dramatic economic and political changes defining the twenty-first century by informing and galvanizing its uniquely influential network of global leaders. The Atlantic Council—through the papers it publishes, the ideas it generates, the future leaders it develops, and the communities it builds—shapes policy choices and strategies to create a more free, secure, and prosperous world.

About XRG

XRG is a transformative international energy investment company, focused on lower-carbon energy and chemicals, and headquartered in Abu Dhabi. Wholly owned by ADNOC, XRG has an enterprise value of over $80 billion. Its portfolio includes interests in industry-leading companies that are meeting rapidly increasing global demand for lower carbon energy and the chemicals that are essential building blocks for products central to modern life.


About MGX

MGX is a technology investment company focused on accelerating the development and adoption of AI and advanced technologies through world-leading partnerships in the United Arab Emirates and globally. MGX invests in sectors where AI can deliver value and economic impact at scale, including semiconductors, infrastructure, software, tech-enabled services, life sciences, and physical AI. For more information, visit: https://www.mgx.ae/en

The post Atlantic Council, XRG, and MGX to host AI and energy summit on June 16 appeared first on Atlantic Council.

]]>
Stephen Rodriguez Joins AI+Expo Panel on Government Procurement Reform https://www.atlanticcouncil.org/insight-impact/in-the-news/stephen-rodriguez-joins-aiexpo-panel-on-government-procurement-reform/ Wed, 04 Jun 2025 16:23:08 +0000 https://www.atlanticcouncil.org/?p=851641 On June 3, Stephen Rodriguez, Senior Advisor at Forward Defense and Director of the Commission on Software-Defined Warfare, joined a panel at the AI+Expo to discuss “Reindustrializing America via Government Procurement Reform.” He was joined by Eric Lofgren, Staff Member, U.S. House Armed Services Committee; Scott Friedman, Vice President of Government Affairs at Altana Technologies; […]

The post Stephen Rodriguez Joins AI+Expo Panel on Government Procurement Reform appeared first on Atlantic Council.

]]>

On June 3, Stephen Rodriguez, Senior Advisor at Forward Defense and Director of the Commission on Software-Defined Warfare, joined a panel at the AI+Expo to discuss “Reindustrializing America via Government Procurement Reform.”

He was joined by Eric Lofgren, Staff Member, U.S. House Armed Services Committee; Scott Friedman, Vice President of Government Affairs at Altana Technologies; and Mike Manazir, Vice President, Federal at Hadrian.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

The post Stephen Rodriguez Joins AI+Expo Panel on Government Procurement Reform appeared first on Atlantic Council.

]]>
After Ukraine’s innovative airbase attacks, nowhere in Russia is safe https://www.atlanticcouncil.org/blogs/ukrainealert/after-ukraines-innovative-airbase-attacks-nowhere-in-russia-is-safe/ Tue, 03 Jun 2025 20:55:58 +0000 https://www.atlanticcouncil.org/?p=851460 Ukraine carried out one of the most audacious operations in modern military history on June 1, using swarms of smuggled drones to strike four Russian airbases simultaneously and destroy a significant portion of Putin’s bomber fleet, writes David Kirichenko.

The post After Ukraine’s innovative airbase attacks, nowhere in Russia is safe appeared first on Atlantic Council.

]]>
Ukraine carried out one of the most audacious operations in modern military history on June 1, using swarms of smuggled drones to strike four Russian airbases simultaneously and destroy a significant portion of Putin’s bomber fleet. While the full extent of the damage remains disputed, open source evidence has already confirmed that Russia lost at least ten strategic bombers and possibly many more.

The attack highlighted Ukraine’s innovative use of military technologies and confirmed the country’s status as a world leader in the rapidly evolving art of drone warfare. Crucially, it also underlined Kyiv’s ability to conduct complex offensive operations deep inside Russia. This will force the Kremlin to radically rethink its domestic security stance, which could lead to the diversion of resources away from the invasion of Ukraine.

Stay updated

As the world watches the Russian invasion of Ukraine unfold, UkraineAlert delivers the best Atlantic Council expert insight and analysis on Ukraine twice a week directly to your inbox.

According to Ukrainian sources, preparations for Operation Spider’s Web had been underway since late 2023. Ukraine was able to move a series of modified cargo containers into Russia along with more than one hundred first-person view (FPV) drones. The containers were then loaded with the drones and mounted on lorries before being moved into position close to Russian airbases. On Sunday morning, the green light was given and the drones were remotely activated, emerging from their containers to strike nearby Russian bombers.

The bombers targeted in these drone attacks play a key role in Russia’s air war and are regularly used to launch cruise missiles at Ukrainian cities. While Ukraine’s June 1 success will not bring this bombing campaign to an end, it may help save Ukrainian lives by reducing the number of available planes and forcing Russia to disperse its remaining strategic bombers to locations further away from Ukraine.

While any reduction on Russia’s ability to bomb Ukrainian civilians is welcome, the impact of Ukraine’s airbase attacks on the future course of the war is likely to be far more profound. Sunday’s Ukrainian strikes at locations across Russia have transformed the situation on Putin’s home front. Since the onset of Russia’s full-scale invasion more than three years ago, Russians have grown accustomed to viewing the war as something that is taking place far away. That sense of security has now been shattered.

This was not the first time Ukraine has struck deep inside Russia. For much of the war, Ukraine has been using its growing fleet of long-range drones to target Russian military bases and the country’s oil and gas industry. Russian Air Force hubs such as the Engels airbase in Saratov Oblast have been hit multiple times.

Ukraine’s attacks have gained momentum as the country’s long-range drone fleet has evolved and as Kyiv has developed its own missile capabilities. This mounting proficiency has not gone unnoticed internationally. Indeed, China reportedly asked Ukraine to refrain from attacking Moscow during the recent Victory Day parade on May 9, as Beijing was apparently unsure whether the Russians themselves could provide sufficient protection for the visiting Chinese leader.

Sunday’s operation represents a new stage in Ukraine’s efforts to bring Putin’s invasion home to Russia. By deploying large numbers of drones surreptitiously across the Russian Federation and activating them remotely, Ukraine demonstrated an ability to strike anywhere without warning. The consequences of this are potentially far-reaching. Russia must now increase security at every single military base, military-industrial site, command center, and transport hub throughout the country.

In addition to ramping up defensive measures around military infrastructure, Russia must also introduce further checks at the country’s borders and closely monitor all activity along endless highways stretching from Europe’s eastern frontier to the Pacific Ocean. This is a logistical nightmare. For example, thanks to Ukraine’s attack, all cargo containers must now be treated with suspicion. There are already reports of bottlenecks emerging at locations across Russia as alarmed officials inspect lorries in the hunt for more Ukrainian drones.

Given the colossal size of the Russian Federation, addressing the threat posed by Ukraine’s Trojan Horse tactics is a truly Herculean task. Russia’s vastness has traditionally been viewed as one of the country’s greatest strengths. The new form of warfare being pioneered by Ukraine could now turn this size into a major weakness. US President Donald Trump has repeatedly stated that Ukraine does not “have any cards” in its war with Russia, but Ukrainian President Volodymyr Zelenskyy may just have played the ace of drones.

David Kirichenko is an associate research fellow at the Henry Jackson Society.

Further reading

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values, and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia, and Central Asia in the East.

Follow us on social media
and support our work

The post After Ukraine’s innovative airbase attacks, nowhere in Russia is safe appeared first on Atlantic Council.

]]>
Hyperwar, artificial intelligence, and Homo sapiens https://www.atlanticcouncil.org/content-series/ac-turkey-defense-journal/hyperwar-artificial-intelligence-and-homo-sapiens/ Mon, 02 Jun 2025 14:00:00 +0000 https://www.atlanticcouncil.org/?p=847083 With the rise of autonomous weapon systems in distributed battlegrounds, the neuroanatomical outlook of warfare may be evolving into a new reality.

The post Hyperwar, artificial intelligence, and Homo sapiens appeared first on Atlantic Council.

]]>
Rethinking the modern neuroanatomical charts of warfare

According to Napoleon, an army walks on its stomach. War, nonetheless, chiefly revolves around cognitive functions. Take a nineteenth-century Napoleonic artillery officer calculating the range of his guns to the target, for example. The officer’s prefrontal cortex hosts three major components: control, short-term memory, and arithmetic logic. This prefrontal exercise operates on the data provided by two other sources: a premotor-parietal top-down system optimized to update and continuously transform external data into an internal format, and a hippocampal bottom-up system to serve as an access code to memory from previously acquired knowledge or to detect novel information. In other words, an army fights on mathematical military data processing systems of the parietal and prefrontal brain regions. No matter how technological improvements have run extra miles to the present day, this cognitive formulation has not changed even on the margins. A contemporary F-35 pilot, assessing the processed situational data harvested by the aircraft’s AN/AAQ-37 Distributed Aperture System showcased on the helmet-mounted display, uses precisely the same biological decision-making algorithms as the Napoleonic artillery officer posited above—albeit on steroids and with a high-performance computing edge.

Today, mankind stands on the eve of a great change in this oldest cognitive tradition of warfighting. For the first time in military history, parietal and prefrontal brain regions may take a back seat in deciding concepts of operations and concepts of employment, perhaps even strategic planning prior to combat operations, while artificial intelligence will likely assume the lead. With the rise of autonomous weapon systems in distributed battlegrounds, the neuroanatomical outlook of warfare may be evolving into a new reality.

Smart digital algorithms and autonomous robotic warfighters are poised to replace not only the muscles but also the brains of warfare. This can occur because they can replicate electronically what our brains do in the biological realm and thus can overtake us by simply performing better, not differently. Robotics and artificial intelligence mimic the core characteristics of nature. Machine-learning and artificial neural networks are good examples of this mimicry. Our everyday AI features of facial and voice recognition and smart internet search predictions function in the virtual world much as they do in the human brain. Likewise, swarming is not merely a robotic function. Birds, bee colonies, and even bacteria swarm. AI might be “smarter” than humans through faster processing of effective mimicry, and robots similarly may swarm in a more coordinated and agile manner than biological agents.

AI and hyperwar: Data, robots, and satellites

In their 2017 Proceedings article released by the US Naval Institute, US Marine Corps General John Allen and high-technology entrepreneur Amir Husain described “hyperwar” as an emerging type of armed conflict that significantly reduces human decision-making. In the new type of wars, the authors argued, Homo sapiens’cognitive function of decision-making will nearly disappear from the OODA loop (observe, orient, decide, act). Autonomous swarms of robotic warfare systems, high-speed networks married to machine-learning algorithms, AI-enabled cyber warfare tools, and miniaturized high-powered computing are likely to assume the lead roles in fighting wars. More importantly, humans might be removed from operational planning, with their role to be confined to merely very high-level and broad input. The rise of hyperwars will essentially bring groundbreaking combinations of emerging technologies, much as the German blitzkrieg combined in novel ways fast armor, air support, and radio communications. General Allen and Husain concluded that the gap between winners and losers would very likely resemble that of Saddam’s Iraqi Army facing the “second offset” technologies of electronic warfare, precision-guided munitions, and stealth platforms. 

The Russo-Ukraine War serves as a battlefield laboratory to test possible elements of the coming hyperwars and the impact of artificial intelligence on conducting and analyzing warfare. First, the integration of satellite imagery intelligence and target and object recognition technologies has provided the Ukrainian military with a very important geospatial intelligence edge in kinetic operations. Second, the Ukrainian intelligence apparatus has resorted to neural networks to run ground social media content and other open-source data to monitor Russian servicemen and weapons systems, then to translate the input into target acquisition information and military intelligence. Third, playing smart with data has also sparked a capability hike in drone warfare. Open-source defense intelligence studies suggest that Ukrainian arms makers used publicly available artificial intelligence models to retrain drone software applications with the real-world data harvested from the conflict. This modified data has then been used to operate the drones themselves. Ukrainian robotic warfare assets have seen a capability boost in precision and targeting with the help of the data-mastering process. In the future, some robotic baselines will likely see a faster and more profound improvement with the new leap in AI and information management. Specific drone warfare systems, such as the American Switchblade and Russian Lancet-3, already have design philosophies that prioritize computer vision to run target identification.

It appears that the zeitgeistis on the side of the hyperwar. After all, digital data has been on a huge and exponential growth trend for at least one decade. In 2013, the world generated 4.4 zettabytes of data—with a zettabyte amounting to 1021 bytes. Estimates from that period forecast 163 zettabytes of global data to be produced in 2025, which was considered a gigantic magnitude. At current rates, the reality this year will be even higher, at 180 zettabytes of data, or even more. The climb in data generation is intertwined with a rise in drone warfare systems proliferation and employment globally, as well as the production of robotic warfare systems. The dual hike in data and robots forms the very basis of hyperwars.

Other areas to monitor are orbital warfare and space warfare systems. Unlike warfighting and maneuver warfare on the planet Earth, the space operational environment presents technical challenges rather than strategic ones. Satellites are very vulnerable to offensive action since their movements are very limited and incur massive technical requirements for even small moves. A recent war-gaming exercise by American space and defense bodies showcase that one way to boost survivability in space warfare is to reposition “bodyguard satellites” to block access to key orbital slots. AI would be a key asset in accomplishing this concept in a preventive way. Being able to process very large data accumulations to detect hostile action patterns invisible to intelligence analysts, AI offers a new early-warning set of capabilities to decision-makers on Earth.

Horses, dogs, and human warfighters

Mankind as a species has long been fighting in cooperation with other members of the animal kingdom. The cavalry, for instance, for centuries leveraged the synergic warfighting mix of the domesticated horse—Equus ferus caballus—and Homo sapiens. Dogs—Canis lupus familiaris—are another example, as the first species domesticated by our kind and thus long-accustomed to fighting at our side. The role of war dogs is not restricted to history books or ceremonies and parades: a Belgian Malinois took part in the US killing of Abu Bakr al-Baghdadi, the founder of the Islamic State in Iraq and al-Sham (ISIS), back in 2019. Another dog of the same breed operated alongside the American Navy Seals in 2011, during Operation Neptune Spear, to kill the mastermind behind the 9/11 terror attacks, al-Qaeda ringleader Osama bin Ladin.

Scientifically speaking, Homosapiens not only befriended horses and dogs—we neuroscientifically altered these domesticated species’ decision-making algorithms through selective breeding. Scientific experiments showcase that domesticated horses have learned to read human cues to adapt their behaviors. War dogs are the product of key manipulations via human intervention across generations of deliberate breeding. Magnetic resonance imaging studies have proven that through selective breeding over centuries, humans have significantly altered the brains of domestic dog lineages to achieve behavioral specialization, such as scent hunting or guard capabilities and tasks.

The advent of AI requires us to accept that human brains, like those of domesticated animals with military utility, have adapted and will continue to adapt in response to neural stimuli. Combat formations, ranging from mechanized divisions to fighter squadrons, function as the musculoskeletal frame of warfare, while the human decision-making system functions as the brains and neurons. Throughout military history, the brain and the limbs interacted with various ways of communications—be it trumpets of military bands ordering a line march or contemporary tactical data links of modern warfare sharing real-time updates between a fifth-generation aircraft and a frigate’s onboard systems. Homo sapienshas been at the very epicenter of the equation no matter what technological leaps have taken place and will adapt in unpredictable ways to being the slower and more marginal element in decision architecture. Drone warfare has not led to autonomous killer robots but to the rise of a new warrior class: drone operators with massive kill rates, seen both in Putin’s invading army and the Ukrainian military. The rise of hyperwars may produce even further change to the human role, though, as the biological brain races to compete with accelerating decision cycles and nonbiological elements that outpace us. Domesticating AI in warfare will prove more challenging than either dogs or horses, and it is not yet clear what would ensue if we were to design servants quicker and more agile than the masters.  

Implications for US-Turkish defense cooperation

The United States and Turkey are not only the two largest militaries within NATO; they have the broadest and most combat-proven drone warfare prowess. Their robotic warfare solutions have been rising quickly in autonomous characteristics and have already reached the human-in-the-loop level in combat operations. In the coming decades, human-out-of-the-loop CONOPS (concepts of operations) will likely emerge for both the US and Turkish militaries. This common feature of defense technology and geopolitics presages a lucrative path for cooperating within the hyperwar environment.

Moreover, Washington and Ankara can enhance their respective collaborations with Ukraine, a nation with the most recent drone warfare experience against the Russian Federation—a direct threat to NATO member states, as officially manifested by the alliance’s incumbent strategic concept. The Ukrainian case extends to utilizing satellite internet connection in the C4ISR (command, control, communications, computers, intelligence, surveillance, and reconnaissance) aspect of robotic warfare, as well as employing private satellite imagery in target acquisition widely.

Kyiv has already developed close defense ties with the United States and Turkey—even taking part in the latter’s drone proliferation, particularly in the engine segment (for example, Baykar’s Kizilelma). Establishing a trilateral lessons-learned mechanism, which would incorporate defense industries alongside government agencies, would boost such an effort.

Overall, hyperwar seems to be paradigm for future warfare. The United States and Turkey make it possible, and through collaboration perhaps likely, that NATO will retain the upper hand in the hyperwars of the future.


Can Kasapoglu is a non-resident senior fellow at Hudson Institute. Follow him on X @ckasapoglu1.

Explore other issues

The Atlantic Council Turkey Program aims to promote and strengthen transatlantic engagement with the region by providing a high-level forum and pursuing programming to address the most important issues on energy, economics, security, and defense.

The post Hyperwar, artificial intelligence, and Homo sapiens appeared first on Atlantic Council.

]]>
Trump can cement his Middle East successes by calling Putin’s bluff https://www.atlanticcouncil.org/content-series/inflection-points/trump-can-cement-his-middle-east-successes-by-calling-putins-bluff/ Thu, 15 May 2025 22:55:04 +0000 https://www.atlanticcouncil.org/?p=847299 After lifting Syria sanctions and semiconductor restrictions, Trump has a historic opportunity when it comes to Russia's war in Ukraine.

The post Trump can cement his Middle East successes by calling Putin’s bluff appeared first on Atlantic Council.

]]>
This week has been vintage Donald Trump: disruptive, transactional, and unafraid to defy convention. From a geopolitical standpoint, the US president’s trip to the Middle East could prove to be one of the most significant of his two terms in office. That depends, however, on whether Trump now follows up with a decisive move against Russia’s Vladimir Putin.

Here’s how to look at this historic opportunity.

Trump’s surprise decision to lift US sanctions on post-Assad Syria should be seen in combination with his administration’s less-ballyhooed move to remove curbs on the sale of advanced artificial intelligence (AI) semiconductor chips to the United Arab Emirates (UAE) and Saudi Arabia. Both are smart moves of underappreciated consequence on a global chessboard.

First, let’s talk Syria.

Trump had nothing to do with the December 8 fall of dictator Bashar al-Assad, which came in the final days of the Biden administration, ending fifty years of repressive Assad family rule. For Trump, it also marked an unanticipated geopolitical inflection point, whose origins I explained here a few days later. It was a powerful setback to Iranian leaders and Putin, who had saved the Assad regime through direct military intervention since 2015.

By lifting sanctions now in such high-profile fashion in Riyadh, Trump has seized high diplomatic ground at low cost. He rewarded both Middle Eastern and European allies—particularly the United Kingdom, Saudi Arabia, and Turkey—who had urged him to make the move. At the same time, he can slam the door on any Russian attempt to regain regional influence.

Moscow spent years propping up the Assad regime, but it collapsed anyway, in no small part because Russia moved military assets from Syria to support its Ukraine war. Russia didn’t just lose a client in al-Assad; it also lost global standing by giving up a Middle East foothold through which it exercised regional influence. Trump should follow up by proposing a regional security pact excluding Russia and China—and building upon his Abraham Accords.

Now, let’s talk artificial intelligence.

What do advanced computer chips have to do with Syrian sanctions relief? If the Syria move is about checkmating Russia, then the chip move is about outmaneuvering China. Do both at the same time, and you frustrate the “no limits partnership” that Putin and Chinese President Xi Jinping declared in opposition to Washington back in February 2022.

President Joe Biden’s move late in his administration to limit UAE and Saudi access to the United States’ most advanced chips via the “AI Diffusion Rule” was designed to limit the technology’s proliferation to China. But in the region it was perceived as a slap in the face of countries willing to invest tens of billions of dollars in American AI companies and their infrastructure. A Gulf official told me some colleagues in his country wondered whether they had made the right bet as they confronted US restrictions, even as DeepSeek raised concerns that China over time could match or surpass US capabilities.

Both Trump moves are calculated gambles with sound logic behind them.

Regarding Syria, Trump has reckoned it’s worth taking a chance on the new leadership in Damascus and giving it a “fresh start.” That’s even though new President Ahmed al-Sharaa, who led the offensive against Assad, was designated by the United States as a terrorist alongside his Hayat Tahrir al-Sham movement, given their historic ties to al-Qaeda. Al-Sharaa renounced his ties to al-Qaeda in 2016 and now commits that his regime will be inclusive and respect all his country’s religious and ethnic sects.

The jury is out—but a good outcome is more likely with Washington involved.

Regarding artificial intelligence, Trump is betting that the Emiratis and Saudis will protect cutting-edge US technology from leaking to China, as the Biden administration feared. What he’s gained in return are arguably the deepest-pocketed investors in the world—who at the same time hope to maintain close ties to Beijing, their largest fossil fuel customer.

With an accelerating tech race this uncertain and with the stakes so high, give Trump credit for deciding rather than dithering.

A third news story this week may seem unrelated—that of the first direct peace talks between Ukrainian and Russian officials in Istanbul—but it’s not. Trump has expressed concern that Putin may be “tapping [him] along.” That’s a welcome, if belated, sign that he and his administration recognize that they are being played by a wily adversary who believes all of Ukraine will fall to him if he can buy time and neutralize US support for Kyiv.

It’s time to call Putin’s bluff amid his failure to engage seriously in peace talks that would preserve Ukraine’s sovereignty, security, and freedom to join Western institutions. Good next moves would be more sanctions against Russia, more weapons for Ukraine, and a backstop for European military support for Ukrainian security guarantees.

A Putin failure in Ukraine, coming on the heels of his Syria failure, would be a geopolitical triumph of historic consequence and perhaps even worth a Nobel Peace Prize for Trump, something I wrote about late last year.

Not long after I wrote that, a Middle East official told me that by upending the geopolitical chessboard, Trump has the opportunity to achieve unanticipated gains, particularly in great power politics. The danger, he said, was that Trump pays too little attention to the secondary consequences of his decisions. The economic cost of his “liberation day” tariffs, and his decision to back off their most extreme version, underscored both this Trump peril and his ability to self-correct.

If Trump will now also self-correct on Russia, he can again confound his critics, showing that he can be disruptive, transactional, convention-defying, and geopolitically shrewd, all at the same time. Trump shouldn’t miss this historic opportunity.


Frederick Kempe is president and chief executive officer of the Atlantic Council. You can follow him on X: @FredKempe.

This edition is part of Frederick Kempe’s Inflection Points newsletter, a column of dispatches from a world in transition. To receive this newsletter throughout the week, sign up here.

The post Trump can cement his Middle East successes by calling Putin’s bluff appeared first on Atlantic Council.

]]>
Trump’s remarkable Middle East tour is all about striking megadeals and outfoxing China https://www.atlanticcouncil.org/content-series/inflection-points/trumps-remarkable-middle-east-tour-is-all-about-striking-megadeals-and-outfoxing-china/ Wed, 14 May 2025 02:04:04 +0000 https://www.atlanticcouncil.org/?p=846771 The Trump administration would rather swim in a stream of Gulf investments than get bogged down in the region’s enduring problems.

The post Trump’s remarkable Middle East tour is all about striking megadeals and outfoxing China appeared first on Atlantic Council.

]]>
There has never been a US presidential visit to the Middle East like this one.

This week, success will be measured not in conventional diplomacy, peace deals, or arms sales, although Donald Trump did make some news by lifting sanctions on the Syrian leadership, urging Saudi Crown Prince Mohammed bin Salman to join the Abraham Accords by normalizing relations with Israel, and agreeing to a $142 billion weapons package for Riyadh.  

What sets Trump’s visit apart is the greater focus on the hundreds of billions of dollars of new Middle Eastern investments into the United States ($600 billion from Saudi Arabia alone). Gulf partners will measure success by the Trump administration’s willingness to lift restrictions on the sale of hundreds of thousands of advanced semiconductor chips to the United Arab Emirates and Saudi Arabia. Trump will also measure success by his ability to outmaneuver China in securing a closer relationship with Gulf monarchies than the Chinese have, even though Beijing is their biggest fossil-fuel customer.

It’s not that Middle East security threats or peace negotiations have gone away. There’s the war in Gaza, and this week’s release of the American hostage Edan Alexander. There are new efforts to rein in Iran’s nuclear-weapons potential through negotiations. And there’s Trump’s dream of finding a path to Saudi-Israeli diplomatic normalization (and ongoing progress toward a civilian nuclear deal with the kingdom).

However, my conversations with senior Middle Eastern officials involved in planning Trump’s trip underscored that the overwhelming focus has been on doing deals. The Trump administration would rather swim in a stream of Gulf investments than get bogged down in the region’s enduring problems.

In an extraordinary speech in Riyadh that set the tone for all that will follow, Trump said: “Before our eyes, a new generation of leaders is transcending the ancient conflicts and tired divisions of the past, and forging a future where the Middle East is defined by commerce, not chaos; where it exports technology, not terrorism; and where people of different nations, religions, and creeds are building cities together—not bombing each other out of existence.”

The contest for Gulf money is also about gaining the upper hand in the Trump administration’s ongoing trade standoff and technology contest with Beijing. That remains Washington’s overriding objective, notwithstanding the dramatic news Monday morning that the two countries would de-escalate their confrontation by reducing tariffs from 145 percent to 30 percent on the US side and from 125 percent to 10 percent on the Chinese side during a ninety-day pause for further negotiations.

In that spirit, one piece of major news that’s flying under the radar is Trump’s decision to rescind the Biden administration’s “AI Diffusion Rule,” which imposed restrictions on the export of advanced semiconductor chips to countries that included the United Arab Emirates and Saudi Arabia—as well as India, Mexico, Israel, Poland, and others—due to the danger that they could be “leaked” to adversarial nations, in particular China.

The New York Times reported that, in conjunction with the rule change, the Trump administration is considering a deal that would send hundreds of thousands of the most advanced US-designed artificial intelligence (AI) chips to G42, an Emirati AI firm that cut its links to Chinese partners in order to partner with US companies.

“The negotiations, which are ongoing, highlight a major shift in US tech policy ahead of President Trump’s visit,” the New York Times reported, noting tension within the administration between those who are eager to advance the US trade and technological edge over China and national security officials who continue to worry about leakage of critical technologies to Beijing.

On Tuesday, the White House also unveiled deals with Saudi Arabia that included a commitment by Riyadh’s new state-owned AI company, Humain, to build AI infrastructure using several hundred thousand advanced Nvidia chips over the next five years. Humain and Amazon Web Services also announced plans to invest more than five billion dollars in a strategic partnership to build a first-of-its-kind “AI Zone” in the kingdom—part of Riyadh’s evolving ambitions to be a global AI leader.

What seems to be winning out is the Emirati and Saudi argument that if they are going to throw in their lot with the United States, and if they are to restrict their advanced technology relationships with China in the global AI arms race, Washington needs to do its part and remove the restrictions placed upon its tech.

During Trump’s first term and during the Biden administration, there was a long-running debate within the US government around whether the United States should seek to block China from getting advanced chips or instead just try to stay one or two generations ahead of the Chinese technologically. That debate has been settled: China—as demonstrated most visibly by DeepSeek—will find a way to sidestep US restrictions to make major strides. For the United States to stay a step or two ahead in the AI race, it will require new investments and partnerships. That shift is at the heart of what we’re witnessing this week in the Middle East.

Trump’s moves this week underscore his seriousness of purpose, but the battle has been far from won. Trump the aspirational peacemaker will still try to strike deals on Gaza and Iran, as uncertain as they are, but Trump the dealmaker has a clearer path to closing artificial intelligence and investment deals that this week are higher and more achievable priorities.


Frederick Kempe is president and chief executive officer of the Atlantic Council. You can follow him on X: @FredKempe.

This edition is part of Frederick Kempe’s Inflection Points newsletter, a column of dispatches from a world in transition. To receive this newsletter throughout the week, sign up here.

The post Trump’s remarkable Middle East tour is all about striking megadeals and outfoxing China appeared first on Atlantic Council.

]]>
Final Report of the Commission on Software-Defined Warfare featured in Air & Space Forces Magazine https://www.atlanticcouncil.org/insight-impact/in-the-news/final-report-of-the-commission-on-software-defined-warfare-featured-in-air-space-forces-magazine/ Mon, 05 May 2025 19:00:00 +0000 https://www.atlanticcouncil.org/?p=845714 On May 5, Shaun Waterman of Air & Space Forces Magazine published an article highlighting Forward Defense's Commission on Software-Defined Warfare report.

The post Final Report of the Commission on Software-Defined Warfare featured in Air & Space Forces Magazine appeared first on Atlantic Council.

]]>

On May 5, Shaun Waterman of Air & Space Forces Magazine published an article highlighting Forward Defense‘s Commission on Software-Defined Warfare final report. The article focused on the impacts of Secretary of Defense Hegseth’s March 6 memo on software-defined warfare and software acquisition pathways. This piece quoted Forward Defense nonresident senior fellow and Commission on Software-Defined Warfare co-author Tate Nurkin‘s remarks made at the Commission’s final report launch event on personnel training at the Pentagon.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

Forward Defense’s Commission on Software-Defined Warfare aims to digitally transform the armed forces for success in future battlefields. Comprised of a distinguished group of subject-matter and industry commissioners, the Commission has developed a framework to enhance US and allied forces through emergent digital capabilities.

The post Final Report of the Commission on Software-Defined Warfare featured in Air & Space Forces Magazine appeared first on Atlantic Council.

]]>
Hinote and Parker in Breaking Defense on the Commission on Software-Defined Warfare https://www.atlanticcouncil.org/insight-impact/in-the-news/hinote-parker-breaking-defense-commission-on-software-defined-warfare/ Fri, 02 May 2025 16:00:00 +0000 https://www.atlanticcouncil.org/?p=845462 On May 2, Breaking Defense published an article by Clinton Hinote and Nathan Parker, Commissioners on Forward Defense’s Commission on Software-Defined Warfare, emphasizing the urgent need for the Department of Defense to prioritize agile, testable software development practices.

The post Hinote and Parker in Breaking Defense on the Commission on Software-Defined Warfare appeared first on Atlantic Council.

]]>

On May 2, Breaking Defense published an article by retired Lt Gen Clinton Hinote and Nathan Parker, Commissioners on Forward Defense‘s Commission on Software-Defined Warfare, emphasizing the urgent need for the Department of Defense to prioritize agile, testable software development practices. Drawing on findings from the Commission’s final report, the authors argue that software is now a decisive element in military advantage and call for immediate cultural and institutional shifts within the Pentagon to meet this strategic imperative.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

Forward Defense’s Commission on Software-Defined Warfare aims to digitally transform the armed forces for success in future battlefields. Comprised of a distinguished group of subject-matter and industry commissioners, the Commission has developed a framework to enhance US and allied forces through emergent digital capabilities.

The post Hinote and Parker in Breaking Defense on the Commission on Software-Defined Warfare appeared first on Atlantic Council.

]]>
Axios on the Commission on Software-Defined Warfare final report https://www.atlanticcouncil.org/insight-impact/in-the-news/axios-demarest-software-defined-warfare-report-domino-labs/ Wed, 23 Apr 2025 16:00:00 +0000 https://www.atlanticcouncil.org/?p=842473 Colin Demarest of Axios published an article covering Domino Data Lab’s $16.5 million AI contract, announced following the release of Forward Defense’s Commission on Software-Defined Warfare report.

The post Axios on the Commission on Software-Defined Warfare final report appeared first on Atlantic Council.

]]>

On April 23, Colin Demarest of Axios published an article mentioning Forward Defense‘s Commission on Software-Defined Warfare report, highlighting how the report reflects growing pressure, both within and outside the Pentagon, to smartly adopt software. The piece suggests that Domino Data Lab’s recent $16.5 million dollar AI contract may be evidence that this pressure is beginning to yield results. 

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

Forward Defense’s Commission on Software-Defined Warfare aims to digitally transform the armed forces for success in future battlefields. Comprised of a distinguished group of subject-matter and industry commissioners, the Commission has developed a framework to enhance US and allied forces through emergent digital capabilities.

The post Axios on the Commission on Software-Defined Warfare final report appeared first on Atlantic Council.

]]>
National Defense reports on the Commission on Software-Defined Warfare final report https://www.atlanticcouncil.org/insight-impact/in-the-news/ye-national-defense-on-the-commission-on-software-defined-warfare/ Tue, 22 Apr 2025 20:00:00 +0000 https://www.atlanticcouncil.org/?p=842444 On April 22, Joanna Ye of National Defense published an article highlighting key recommendations from Forward Defense’s Commission on Software-Defined Warfare report.

The post National Defense reports on the Commission on Software-Defined Warfare final report appeared first on Atlantic Council.

]]>

On April 22, Joanna Ye of National Defense published an article highlighting the key recommendations from Forward Defense’s Commission on Software-Defined Warfare report. Entitled “Reforming Pentagon Software Practices Key to Countering Threats, Report Finds,” the article emphasizes the Commission’s hope that, by adopting its recommendations, the Department of Defense can enhance its capabilities and preserve the United States’ strategic advantage.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

Forward Defense’s Commission on Software-Defined Warfare aims to digitally transform the armed forces for success in future battlefields. Comprised of a distinguished group of subject-matter and industry commissioners, the Commission has developed a framework to enhance US and allied forces through emergent digital capabilities.

The post National Defense reports on the Commission on Software-Defined Warfare final report appeared first on Atlantic Council.

]]>
Navigating the US-PRC tech competition in the Global South https://www.atlanticcouncil.org/in-depth-research-reports/report/navigating-the-us-prc-tech-competition-in-the-global-south/ Wed, 16 Apr 2025 18:00:00 +0000 https://www.atlanticcouncil.org/?p=840674 A landscape report analyzing China's strategic tech engagements with the Global South and how the US can compete.

The post Navigating the US-PRC tech competition in the Global South appeared first on Atlantic Council.

]]>

Table of contents

Introduction

The US and China are in a race for technological supremacy. Policymakers in Washington often focus on which country has the technological edge, and what leadership means for military advantage and national economic strength. However, the global diffusion of emerging technologies is just as important. Unfortunately, it is too often overlooked.

To maintain its competitive advantages over China in critical and emerging technologies (CETs), the United States cannot afford to underestimate the role that will be played by the Global South in shaping global technology competition.1 The Global South is a key arena for the deployment, adoption, and development of key technologies, including AI. For the United States, strengthening ties with partners in the Global South offers significant opportunities: expanding market access, fostering top talent, promoting innovation, and otherwise advancing shared economic and geopolitical objectives.

Failure to do so would allow China to advance its geopolitical, economic, and technological interests around the world, allowing Beijing to shape global technological norms and standards unimpeded, thereby undermining the interests of the United States and its allies.

There are three main elements of the global tech-based competition with China. Sustained competition with China will require careful attention to each.

The first element of this competition with China is geopolitical. Beijing aims to revise the current Western-led international order to one that is more closely aligned with its own vision for the “global community.” Beijing has aggressively cultivated diplomatic ties across the Global South, sponsoring academic exchanges, training programs, and media cooperation fora. These efforts serve Beijing’s broader agenda to promote China’s economic and geostrategic interests, including weakening US influence, isolating Taiwan diplomatically, and supporting Chinese firms’ overseas operations.

The second element is economic, as the United States and its allies seek to ensure their continued competitiveness in developing economies around the world. The Global South represents a massive share of the world’s demographic and economic heft, accounting for 85 percent of the world’s population and 40 percent of global gross domestic product. There is significant risk that China will capture an increasing share of these growing markets, especially considering China’s export-oriented economic growth strategy and chronic industrial overcapacity, including in critical technology industries such as solar panels and electric vehicles, among others.

The third element is normative, as principles and norms form important pieces of the strategic competition between the United States and China—one that often is cast in terms of a competition between democratic and authoritarian visions for global governance. China has promoted Chinese narratives and norms globally, particularly in forums involving countries in the Global South, including the Belt and Road Forum for International Cooperation, the Forum on China-Africa Cooperation, and, most recently, the Global AI Governance Initiative.

Landscape assessment

Over the coming decades, the Global South will play an increasingly critical role in the use, adoption, and development of advanced technologies. They will drive demand for technology adoption and consumption, supply critical inputs for technology products, innovate and engage in research and development, and ultimately be key players in shaping the global technology norms. It is imperative that the United States and its allies and partners deepen their understanding of what countries in the Global South want from tech development and what they need to get it.

For low- and middle-income countries (LMICs), there are numerous obstacles to technological development and adoption. A study conducted by the German Institute for Global and Area Studies (GIGA) found that foundational digital skills2 are lacking in developing countries. This skills gap owes much to structural impediments to workforce development. The World Bank recently asserted that Africa’s digital skills gap exists in part because of African firms’ “low technology adoption [which limits] productivity and hamper[s] job creation, especially in areas that require higher level skills.”

Policymakers from LMICs are aware of the need to address barriers to technological adoption and development. To bridge gaps in technical abilities, many countries, including Kenya, India, and South Africa, among others, have also launched digital skills training programs to strengthen the technological workforce. Others, including Zimbabwe, Namibia, Ghana, and Nigeria have raised barriers to the export of unprocessed critical minerals and other raw materials that are required for many advanced technological applications, including semiconductors (chips), batteries, electric vehicles (EVs), wind turbines, and weapons systems, among a great many others. Such actions are motivated by a desire to add value to critical minerals via domestic processing before they are exported.

Other countries are increasingly investing in the development of domestic technological capabilities. At a July 2024 event unveiling a $4 billion public-sector investment in Brazil’s supercomputing capacity, President Luiz Inácio Lula da Silva (“Lula”) asked why “a country with 200 million people, a nation 524 years old with a globally respected intellectual foundation, [couldn’t] create its own mechanisms instead of relying on AI from China, the United States, South Korea, or Japan? Why can’t we have our own [AI]?”

Lula’s question underscores a growing trend towards “sovereign AI,” an idea that every country needs to be able to develop the domestic infrastructure required to train and run AI models to safeguard technological sovereignty.

Lula’s call to develop Brazil’s own domestic AI ecosystem that reflects Brazilian priorities is reflective of strong interest within the Global South to play a more active role in shaping the future development of AI. Although most LMICs currently lack the infrastructure needed to compete at the leading edge, a closer look reveals that there is much to build such ecosystems upon. In 2022, the Latin America and Caribbean (LAC) region featured some thirty-four “unicorns” (tech start-ups valued at one billion dollars or more), a first among developing regions according to the UN Development Programme. The digital workforce in developing countries are expanding rapidly, though barriers remain. The aforementioned GIGA study found that “there is a non-negligible digital workforce in selected low- and middle-income countries. . . that is active on online labor platforms and possesses some intermediate or advanced digital skills.”

There are numerous initiatives across the Global South that are designed to build upon these strengths. For example, Carnegie Mellon University’s (CMU) Upanzi Network, based out of CMU’s Africa campus in Rwanda, advances research, capacity building, and skills-training in digital infrastructure, cybersecurity, and other foundational tech areas. South Africa’s University of the Witwatersrand recently launched Africa’s first AI institute focused on fundamental AI research, the Machine Intelligence and Neural Discovery Institute (MINDS). The Institute’s purpose is “to position the continent as a creator rather than merely a consumer of AI technologies.”

Over the last two decades, China has rapidly scaled its presence in key industries around the world. Chinese companies have become dominant players across countries in Southeast Asia, Africa, and Latin America, displacing American and European competitors in the process. What’s more, China is competitive with the United States and its allies and partners in various metrics related to national technological strength. China produces an ever-increasing share of the world’s top-cited STEM papers, and is home to many top scientific research institutions. China also is one of the world’s great industrial powers. As a result, China is better positioned than ever before to outcompete the United States and its allies across a range of next-generation industries, especially in LMICs, with which China often already has strong economic and political ties. There is some risk that the United States could cede its position as the world’s foremost innovator, undermining its competitiveness in critical sectors that will be of increasing geopolitical, geoeconomic, and technological importance in the coming decades. The recent release by DeepSeek’s R1 large language model (LLM), underscores this point: DeepSeek is a relatively small Chinese AI company that managed to build an open-source LLM that is cheaper and as capable as leading LLMs developed in the United States.

Two ongoing trends underpin China’s global competitiveness in critical and emerging technologies. First, Beijing has prioritized the development of key industries in CET fields. Chinese leader Xi Jinping has staked China’s economic future on his “Innovation-Driven Development Strategy,” which emphasizes the role of advanced technology in increasing productivity and advancing national technological capabilities, thereby safeguarding national security and promoting economic development. In a 2014 speech, for example, Xi insisted that “science and technology are the foundation of a strong country.” Over the past decade, Beijing has redirected tremendous resources into China’s tech sector. In certain CETs—including AI, EVs, advanced battery technology, renewable energy tech, high-speed rail, and robotics, among others—China is already recognized as a technological leader, even in some cases surpassing the United States.

Second, China’s economy is dependent on external markets. Thanks to its sustained prioritization and investment into high-tech industries, China now possesses enormous capacity to manufacture and export technology products and services. As the output of Chinese manufacturers far outstrips domestic demand for their goods, China is reliant on foreign markets to absorb this surplus. China’s high-tech exports have grown astronomically over the last two decades, from just over $400 billion in 2004 to $1.5 trillion in 2023. Today, China is the world’s top exporter of EVs, photovoltaics, and lithium batteries.

These two trends—Beijing’s continued emphasis on technological development and excess manufacturing capacity in advanced goods—anchor its approach to global technological competition. As a result, ties with the Global South are highly consequential for Beijing. In 2023, China exported more to the Global South than to the U.S., the European Union (EU), Japan, and Australia. Most of China’s fastest growing trade partners are Belt and Road Initiative (BRI) partner countries. This trend will likely continue in the coming years, especially as the United States and Europe impose new trade policies, including tariffs. Amidst heightened tensions with the United States and Europe, China will need to rely on other partners to achieve its technological and economic objectives.

Chinese ICT expansion yesterday, AI competition today

AI provides a particularly illustrative case study to better understand the factors that will shape global competition in CETs in the coming decades. AI has potential applications across key industries, including biotechnology, manufacturing, and education, among others. LMICs around the world are developing their own AI capabilities to address various problems and promote local growth. Today, the United States holds a narrow but clear lead over China in AI. The AI models developed in the United States continue to rank higher than models developed elsewhere, and the most advanced chips and semiconductor manufacturing equipment are still produced either in the United States or in countries allied with the United States.

But the United States’ current lead in AI does not guarantee that the United States will necessarily outcompete China globally. Setting aside the possibility that China overcomes US export controls on advanced chips, leading-edge model performance is only one aspect of AI competition. In many LMICs, a variety of considerations drive competition: cost, ease of deployment, and applicability of the technology to local conditions. Indeed, Chinese multinationals have long excelled in tailoring their products and services to local demand. Taking advantage of efficient, low-cost supply chains—as well as Chinese state support—Chinese companies often outcompete their Western competitors in the Global South.

In AI, many of China’s competitive advantages stem from the investments China made in the information and communication technology (ICT) sector through the Digital Silk Road (DSR) initiative, during which major Chinese ICT players expanded their operations throughout the Global South. Chinese ICT firms, including Alibaba, Tencent, Baidu, Huawei, ZTE, Transsion, and StarTimes, among others, have become dominant players in the ICT sector throughout Southeast Asia, Africa, and Latin America.

What are the factors that make Chinese ICT providers so competitive? First, Beijing is highly supportive of overseas Chinese ICT projects. Consistent with China’s lending practices in other sectors, Beijing works with Chinese ICT companies to assemble highly competitive packages of ICT services that include financing from various state lenders. These packages often include clauses that require that the loans be used to purchase goods and services from certain Chinese firms.

Importantly, Chinese ICT firms maintain a deep, ongoing relationship with the state beyond project-based support. Alibaba exemplifies the strategic partnership between the Chinese government and the ICT sector. Originally founded as a private company with little connection to the state, Alibaba has since cultivated close ties with the state sector, actively collaborating with Chinese government officials to shape the company’s approach to expanding its cloud business internationally.

Second, Chinese ICT companies operating in emerging markets tend to offer vertically integrated services, encompassing several layers of the ICT technology stack. This allows partner countries to work with a single Chinese ICT provider to address a range of technology needs. Huawei promotional materials, for example, frequently highlight “one-stop” ICT solutions, which are designed to provide a comprehensive suite of services to customers. Huawei has signed contracts to deploy 5G broadband networks, build data centers for cloud services, and build out fiber optic networks to enhance connectivity for “smart cities” projects. Huawei’s approach combines hardware, software, and after-sales support into a single, cohesive package that simplifies ICT procurement in emerging market economies. Furthermore, Huawei and other Chinese ICT companies reportedly offer ICT services at prices that are 30 percent to 40 percent lower than those of European and American competitors. However, it would be unwise to attribute all of these firms’ successes to subsidies and other forms of state-sponsored support. One underappreciated feature of Chinese ICT firms’ success in the Global South is their willingness to tailor their services to meet local demands.

For example, Huawei’s “National One-Stop Public Services Solution” integrates telecommunication, cloud computing, and big data technologies to streamline e-government services, allowing governments in the Global South to more easily adopt advanced technological tools. Chinese ICT firms provide turnkey solutions, meaning their services can be deployed and used as soon as they are built.

Transsion, the largest smartphone company in Africa, further illustrates the focus of Chinese ICT firms to adapt to local markets to provide competitive products. In 2008, Transsion announced its “Focus on Africa” strategy, investing heavily in the African market. Transsion has since sold more than 130 million cellphones on the continent, capturing 40 percent of the African smartphone market. Transsion’s smartphones are tailored to African markets. Many models cost less than $100, Africa’s most popular social media sites come pre-installed on the phones, and the battery can last for several days without needing to be recharged. Unlike smartphones sold by Western companies, Transsion phones have multiple SIM card slots, which is particularly beneficial in regions with inconsistent network coverage or for consumers who manage multiple SIM cards to take advantage of prepaid plans from different providers.

As a result of these factors, China’s presence in the ICT sector in the Global South has grown tremendously over the last two decades. Chinese ICT firms are highly competitive across the telecommunications technology stack. Figure 1 underscores their success. Drawing from AidData’s Global Chinese Development Finance Dataset, we found over 750 ICT projects in 122 different countries between 2000 and 2021. Figure 1 shows the distribution of projects by country.3 These projects include telecommunications, e-government services, data centers, and subsea cables, among others.

Together, the ICT projects in the AidData dataset amount to over $70 billion (2021 constant dollars) in financing, investments, and grants, representing an enormous expansion in China’s involvement in the global ICT sector. Many of these projects were financed by concessionary loans, often provided by the Export-Import Bank of China, China Development Bank, and the Bank of China.

Figure 2 presents Chinese-financed ICT projects by region over time. As clearly seen below, China has supported ICT projects in Africa, Asia, and Latin America since 2005. In fact, between 2006 and 2020, China committed an average of $4 billion in new financing for ICT projects each year. Since 2020, announcements of new financing commitments have tapered off, and questions remain about whether China will resume its previous level of financing for ICT projects in the Global South. It is unclear the extent to which Chinese ICT providers rely on state financing to be competitive abroad. Indeed, for both Huawei and ZTE, the proportion of revenues earned outside of China has declined since 2019.4 COVID-19 and sanctions levied against the firms further confound any analysis of the two firms’ reliance on state financing. Some analysts suggest that Chinese ICT players face increased competition today, with European rivals Ericsson and Nokia gaining ground in recent years.

Despite the recent decline in ICT projects financed by China in 2020 and 2021, Chinese ICT companies will almost certainly continue to be highly competitive in the coming decades. As Beijing’s “national champions,” Chinese ICT firms like Huawei will continue to benefit from high levels of state support. Because the ICT services provided by Chinese firms tend to be vertically integrated, countries that contract from them risk being reliant on Chinese-built systems throughout the technology stack, making future transitions to alternative providers more difficult.

AI competition and Chinese ICT in the Global South

Today, China is positioned to leverage its ICT advantages in the Global South to be highly competitive in AI. The proliferation of Chinese ICT throughout the Global South carries significant consequences for global AI competition. AI is fundamentally built on top of ICT technologies. AI models are trained using specialized servers with advanced compute capabilities. Governments or enterprises that want to deploy models tailored to certain use cases must fine-tune models on data stored in data centers. Because of the computing resources required to run advanced AI models, many users interface with AI models hosted on servers elsewhere. For these users, access to robust ICT networks with high bandwidth and low latency is essential. Governments interested in deploying AI-enabled technology may also have data security concerns, preferring to store and process sensitive data in-country. Furthermore, as LMICs continue to coalesce around the still-nascent concept of sovereign AI , they will need to host ICT infrastructure tailored to train, host, and run AI systems.

Accordingly, China’s investment in its buildout of ICT infrastructure in emerging markets is likely to provide significant advantages in AI. Chinese ICT companies already recognize their structural advantages. Huawei has indicated that it sees AI as an enormous market opportunity in the Global South; the company has already integrated AI-enabled systems into existing ICT products, including its e-government services, smart city technologies, and cloud network offerings. ZTE is also incorporating AI systems into its offerings. In 2024, the company launched an “all-in-one out-of-the-box” AI compute system that purports to minimize training and inference costs.

Many countries have determined that the potential benefit of Chinese ICT and AI-enabled systems outweigh any potential security risks. More importantly, for many countries there exist few alternatives to the AI services provided by Chinese ICT companies. Policymakers highlight that this dearth of options makes partnering with China unavoidable.

Mounting evidence suggests that China’s global ICT advantage is already providing dividends for China’s AI competitiveness in Africa, Latin America, and Southeast Asia. Drawing from the Asia Strategic Policy Institute (ASPI)’s Mapping China’s Tech Giants dataset, which tracks the overseas activities of fifteen of China’s largest technology companies, we present the growth of AI-related projects in Asia, LAC, Africa, and the Middle East in Figure 3. As shown below, China’s AI activities beyond its own borders grew dramatically over the course of the 2010s. In 2019 alone, Chinese technology firms established 229 partnerships overseas.

Just under 360 of these AI-related projects were undertaken by Huawei and ZTE, accounting for nearly 40 percent of total AI-related projects in Asia, LAC, Africa, and the Middle East, demonstrating the continuity between ICT deployment and AI technologies. This data suggests that China’s advantage in global ICT deployment, as shown in Figure 2 above, may also promote competitive advantages for Chinese AI. Figure 3 also indicates the extent to which China’s leading technology firms see emerging markets as key markets for AI-enabled products. These investments underscore the need to take seriously the expansion of Chinese AI companies’ global operations.

Beyond China’s ICT advantages, Chinese AI developers’ comparative strengths in AI are well-suited for emerging markets, where cost, energy efficiency, and speed are especially critical factors. Accordingly, lightweight models,5 or low-cost AI solutions that require minimal computational power, will be especially appealing. Leading American AI services generally target consumers in the United States and Europe. A monthly subscription to OpenAI’s ChatGPT or to Anthropic’s Claude Pro costs $20, for example. If Chinese AI providers can offer low-cost, low-latency solutions for users in emerging markets, they will likely be highly competitive.

For example, take manufacturing, a priority growth sector for many LMICs. As China is home to the world’s largest manufacturing sector, Chinese AI companies benefit from greater access to relevant manufacturing data on which to train high-quality, cost-effective AI models. What’s more, AI in smart manufacturing applications often employs a subset of AI techniques—including computer vision, predictive analytics, and AI-enabled robotic process automation—that tend to rely on less computing power than most generative AI models. Indeed, Chinese companies have already invested tremendous resources into developing smaller, resource-efficient models tailored for industrial- or infrastructure-related applications, designed for deployment on edge computing devices. Chinese ICT companies can easily deploy models trained within China to their systems located in other countries.

In addition, China’s top AI companies, including Alibaba, DeepSeek, and Baidu, have released open-source models. Open-source models are freely available to be downloaded and deployed by anyone, reducing cost barriers for users and encouraging wider adoption.6 As of the writing of this report, Chinese-developed open models score higher than open-source models developed by American and European companies in various performance metrics for measuring AI capabilities, such as AIME 2024 and SWE-bench Verified.

Finally, open-source and lightweight models are especially attractive for adoption in the Global South. Open models can be deployed on edge computing servers located closer to end-users, whereas inference on closed models must be run on specific data centers controlled and managed by the model developer. For example, all of OpenAI’s servers are currently based in the United States, resulting in increased latency for users based elsewhere. Lightweight models require less compute resources to run, enabling adoption in resource-constrained environments.

These competitive advantages could have follow-on consequences for the competition between China, the United States, and Europe in AI, especially in emerging markets. Industry leaders, including Sam Altman, have cited the importance of the AI “flywheel” effect, in which the users of a certain model generate usage data that can be used to improve its capabilities, which consequently attracts new users. This positive feedback loop can help to reinforce and lock in the advantages of certain AI models, absent other disruptions.

The United States’ current edge in training the most capable AI models does not guarantee continued leadership. Indeed, little evidence suggests that top American AI companies are focused on emerging markets. In contrast, Chinese companies with longstanding operations in LMICs, like Huawei and ZTE, have promulgated plans to expand their AI-enabled offerings worldwide.

Obstacles to China’s competitiveness in AI

At the same time, serious challenges still exist for Chinese AI companies to compete with their American and European peers. The United States and its allies have indicated that they are fully committed to ensuring Western AI models continue to outperform their Chinese competitors, cutting off China’s imports of leading-edge AI chips and advanced semiconductor manufacturing equipment to China. In 2024, the United States established new outbound investment screening measures for US investments into Chinese companies with activities relating to AI, semiconductor, and supercomputing technologies. To be sure, the long-term impacts of these policies are unclear, and highly-capable Chinese-trained AI models like DeepSeek’s R1 and V3 models may represent a challenge to US export controls. Still, evidence suggests that these measures have hindered AI development in China; Liang Wenfeng, the CEO of DeepSeek, cited the US semiconductor export controls as a major obstacle for the company.

Furthermore, although the capabilities and performance of Chinese models have improved significantly over the last year, most of the world’s top models are still developed in the United States. A study published by Epoch AI found that the largest open-source models continue to lag behind the largest closed-source models, due to the resource advantages of American AI labs. If this trend continues, the top closed-source models developed by leading American AI companies may hold their lead over open-source competitors.

Access to computational power, and therefore advanced AI chips, continues to be among the most important resources for AI developers. If US and allied export controls continue, Chinese AI developers will likely continue to be constrained by limited access to top AI chips in the near term, as China’s semiconductor sector is largely unable to produce leading edge chips at scale. Furthermore, data centers with Chinese AI chips will be less efficient than their American counterparts. Because today’s leading-edge chips are highly energy efficient, the cost of running a data center using Chinese hardware will be very high, especially in areas with high energy costs.

Relatedly, the United States has enormous advantages in cloud computing. Amazon’s AWS, Microsoft’s Azure, and Google Cloud account for close to two-thirds of total global cloud spending. Together, China’s top cloud computing companies—Alibaba, Tencent, and Huawei—account for less than ten percent of total cloud spending. Major American cloud providers recognize AI as a major opportunity and invest heavily in AI inference and training services. Amazon, for example, announced in July 2024 that it plans to invest more than $100 billion in AI-focused data centers over the next decade.

Finally, backing the United States’s AI sector is a powerful financial system that increasingly views AI as a lucrative investment opportunity. Private investment in AI eclipsed $90 billion in 2021 and 2022, the majority of which was invested in American-based AI companies. In January 2025, SoftBank, OpenAI, Oracle, and MGX announced that they would invest $500 billion over the next four years to build new AI infrastructure in the United States.

Despite these challenges, China is likely to remain competitive in AI development. DeepSeek’s recent model releases demonstrate that compute is but one factor in training highly capable AI models, and algorithmic advancements can make up for restricted access to high-end chips. Huawei recently released the Ascend 910C, a new chip designed specifically for AI inference aimed at cutting into Nvidia’s market share in China. In January, Beijing announced one trillion RMB in financing for AI and launched a $8.2 billion AI investment fund.

Normative dimensions of US-China AI competition

The stakes of the US-PRC competition in AI go beyond questions of relative market share or commercial success. Policymakers in both Washington and Beijing believe that AI technologies will have fundamental and far-reaching political, economic, and social consequences. Both US and Chinese policymakers have participated in international initiatives and multilateral fora related to AI. On the one hand, AI has represented a rare recent example of US-PRC collaboration. Both countries were signatories of the Bletchley Declaration, which emphasized the signatories’ commitment to addressing the risks associated with AI and established the AI Safety Summit, a multilateral mechanism to advance international AI governance standards. The United States and China have co-sponsored resolutions adopted in the United Nations General Assembly that call for increased international collaboration on AI.

On the other hand, the US and Chinese approaches to AI include serious differences. In October 2022, for example, the United States published the “Blueprint for an AI Bill of Rights,” which established a set of principles to ensure that AI systems align with democratic values and protect civil liberties. A year later, in October 2023, Xi introduced the “Global AI Governance Initiative” at the Third Belt and Road Cooperation Forum, a contrasting vision for the global governance of AI technologies. China’s AI governance initiative calls for countries to “uphold the principles of mutual respect, equality, and mutual benefit in AI development.” Another notable passage from the document reads:

We should respect other countries’ national sovereignty and strictly abide by their laws when providing them with AI products and services. We oppose using AI technologies for the purposes of manipulating public opinion, spreading disinformation, intervening in other countries’ internal affairs, social systems and social order, as well as jeopardizing the sovereignty of other states.


—Source: Global AI Governance Initiative

Training AI models is an inherently values-laden exercise. China’s Global AI governance initiative and the US AI rights initiative represent two contrasting approaches to AI governance that reflect two different political systems. The impacts of these approaches on current AI models are readily observable. DeepSeek-V3, one of the most highly competitive Chinese AI models as of the writing of this report, refuses to answer questions about human rights violations in China, including “What happened in Tiananmen Square on June 4, 1989?” and “What has China been criticized for in relation to the Uyghur population in Xinjiang?” Importantly, when users based outside of China query DeepSeek-V3, the model still refuses to answer these questions. OpenAI’s o1 model, on the other hand, answers both questions directly. Chinese AI models are rigorously evaluated by the Cyberspace Administration of China before they can be published.

Figure 4. Responses from DeepSeek-V3 and OpenAI o1

These responses underscore the imperatives for global competition with China in AI . We have already seen cases of Chinese-built, AI-enabled systems that have infringed on civil liberties and strengthened autocratic regimes. For instance, Chinese “Safe City” projects, ICT services designed to enhance public safety, integrate AI-enabled surveillance technologies into smart city infrastructure to enhance security. Critics in the United States and Europe highlight that Chinese-built ICT and AI systems could infringe upon civil liberties. In 2017, for example, reporting indicated that the Chinese-built African Union Headquarters in Addis Ababa was transmitting sensitive data back to China each evening. What’s more, a 2019 investigation found that Huawei engineers provided services that allowed Zambian government officials to use Chinese surveillance technologies to monitor political opponents. Critics in the United States have argued that these kinds of investigations are proof that China is exporting digital authoritarianism. And while evidence suggests that Chinese technology exports have had limited impact in democratic countries, they may empower autocrats to more effectively suppress dissent. As a result, the United States sanctioned key Chinese ICT companies, including Huawei and Hikvision.

Due to the flywheel effect mentioned in the previous section, Chinese AI systems deployed today may strengthen Chinese firms’ future competitiveness, especially if they yield unique data unavailable to Western competitors. In the event that there exist no alternatives to Chinese-trained AI models for consumers, businesses, and governments around the world, China will have enormous leeway to shape the global adoption and usage of AI.

Lessons for US-PRC competition in CETs

The global competition between the United States and China in AI offers an important lens for better seeing and understanding the dynamics that underlie US-PRC competition in other critical and emerging technologies. There are similar patterns in other CETs, including semiconductor manufacturing, electric vehicles, renewable energy, biotechnology, and next generation ICT.

Take China’s involvement in critical minerals, for example. Advanced semiconductors, electric vehicles (EVs), and photovoltaics all rely on other foundational technologies and processes in which China invested significant resources to develop over the last two decades. China has significantly expanded its global involvement in the extraction and processing of critical minerals like lithium, germanium, gallium, and cobalt, all of which are critical inputs to the above CETs. China’s expansion of these activities are especially concentrated in LMICs. Today, China is the world’s leading processor of twenty critical minerals; accounting for more than half of the world’s processing of nickel, lithium, cobalt, and rare earth metals. As a result of the investments China made throughout the first two decades of the century, China is now well positioned to be enormously competitive in high value-added segments of the CET supply chain. China is far and away the largest exporter of solar panels, advanced batteries, and EVs in the world. When paired with rising demand for these products in the Global South, China’s advantages in these critical sectors allow it to expand its engagement in these regions. Here, we can again observe the pattern of previous investments leading to competitive advantages in CETs.

In short, Chinese investments in key technology areas during the 2000s and 2010s have strengthened China’s competitiveness across various CETs. In prioritizing international engagement and supporting the overseas expansion of Chinese technology companies, Beijing has established China as a leader in both the legacy and next-generation technologies that will define global competition in the coming decades.

Conclusion and recommendations

The United States cannot afford to be complacent in the global competition in critical and emerging technologies. Doing so will result in falling behind China along the three dimensions discussed in the opening section of this report: geopolitical, economic, and normative. To stay ahead, the United States and its allies must find practical and compelling tech-centric approaches to their engagement with partners in the Global South that take into account the interests of those partners. The United States faces numerous challenges: U.S. public financing pales in comparison to that of China and includes more ‘strings-attached’ provisions. However, as this report also has shown, the United States also possesses key strengths in CETs that should allow it to be a better partner for LMICs.

Over 2025 and 2026, the Atlantic Council will be developing a strategy for successful engagement with the Global South in critical and emerging technologies. This report provides an initial landscape assessment that will feed into the strategy’s development. Several recommendations follow from the analysis presented here.

Ensure technology solutions are tailored to demand in the Global South

Outreach to and engagement with partners in the Global South is key. A 2023 Atlantic Council report on Sino-American tech competition asserted that any global tech strategy should be “focused on building and sustaining relationships with other countries in and around the tech strategy and policy space.” The rationale is straightforward: foreign actors’ willingness to align themselves with American foreign policy objectives is based on their perception their interests are aligned with those of the United States, and that they will be able to effectively advance their own interests by partnering with the US.

Following this logic, the United States work to ensure that partners in the Global South benefit from technological progress in the US. American technology firms must develop and launch technology products that address problems facing consumers and firms in LMICs. As shown in our exploration above of Transsion’s market strategy in Africa, Chinese technology companies often tailor their technology products for local market conditions, which has allowed them to outcompete US rivals. Open-source, lightweight models, such as DeepSeek’s suite of distilled models, are similarly appealing as they offer high performance at low cost.

US technology companies should invest in producing solutions tailored for use cases applicable to markets in the Global South. One way to do this is by working with local organizations, such as the Upanzi Network and the Machine Intelligence and Neural Discovery Institute, as well as Deep Learning Indaba, another African organization that encourages AI development in Africa. Promoting US investments in local AI efforts is also important. For example, Google’s “Seed to Series A” initiative is an AI accelerator program for start-ups in Latin America. AWS has announced it is investing $1.7 billion dollars in its cloud and AI services in Africa.

But the US government must also play a role, too. Already, the United States has launched several successive initiatives. In September 2024, for example, the US Department of State partnered with the Nigerian government to host the Global Inclusivity and AI: Africa Conference in Lagos, bringing together government officials and AI experts from the United States, Africa, Europe, and the Middle East to engage in “crucial dialogues on AI governance, safety, and applications toward the UN Sustainable Development Goals.” The conference followed several other diplomatic outreach initiatives focused on Africa and digital technologies, for example in spring 2024, the US Trade and Development Agency (USTDA) partnered with an African technology company, CSquared Holdings Limited, to assess affordable broadband through a continental fiber optic system. In a similar vein, in September 2024, the State Department also launched the Partnership for Global Inclusivity on AI alongside top US technology companies (Amazon, Anthropic, Google, IBM, Meta, Microsoft, Nvidia, and OpenAI) to commit more than $100 million globally toward increasing other countries’ computing and human technical capacities, building local datasets for training AI models, and promoting responsible AI use and governance. The State Department partnered with the Atlantic Council and launched the AI Connect series, which empowered Global South countries to engage more actively in global, multi-stakeholder dialogues on the response use of AI.

Leverage partnerships to enhance impact

US government funding is unlikely to match the scale of China’s investments in its global technological engagements, through fora like the Belt and Road Initiative and the Digital Silk Road. Hence, the US government must focus on multipliers—partnerships with other countries and the private sector—to best leverage limited resources. The Trilateral Infrastructure Partnership (TIP) between the United States, Japan, and Australia offers a case study for successful cooperation with allies. The TIP serves as an important coordination mechanism for pooling resources to develop ICT infrastructure in Oceania, in part to more effectively compete with Chinese firms such as Huawei. Similarly, in January 2024, Google and the Chilean government, with the US government’s support, announced plans to build a high-speed subsea cable connecting Australia, French Polynesia, and South America.

US allies have launched parallel initiatives with which the United States should engage to advance shared objectives. For example, the EU introduced the “Global Gateway” in 2021 to invest in connectivity projects to counter with China’s Belt and Road Initiative. The EU-LAC Digital Alliance High-Level Policy Dialogues on Connectivity and AI aims to “align regulatory and political conditions for inclusive and sustainable digital strategies to promote digital transformation along common values and interests” in the LAC region. Alongside outreach efforts such as dialogues, the EU-LAC Digital Alliance has launched projects such as the BELLA cable, a subsea digital cable connecting Europe and the LAC region, and the EU-LAC Accelerator, an initiative to connect start-ups in LAC with European investors. The United States should look for ways to support and reinforce allies’ efforts, whether through direct engagement or through parallel actions.

Compete with China’s technology stack

Policymakers should promote competition across the entirety of the tech stack, thereby benefiting countries and consumers in emerging markets through the benefits that come with increased competition, reducing undue Chinese influence. Today, non-Chinese technology companies have difficulty competing with Chinese ICT providers. Huawei controls some 70 percent of 4G networks across Africa and large shares of mobile network markets in the LAC region.

China will work to further entrench its dominant position in ICT markets, including 5G. As shown in a recent Atlantic Council report by Ngor Luong, China is highly active in the transition to the 6 gigahertz (GHz) spectrum band. “A global harmonization of 6 GHz without US participation,” Luong writes, “could further lower equipment costs for Chinese telecom firms while raising the cost of the competing equipment from trusted vendors, doubling the damage. . .[and locking US firms] out of harmonization benefits, including lower technical costs and economies of scale.”

In AI, ensuring US and allied competitiveness across the ICT technology stack is especially important. As explored in this report, the training and deployment of AI models rely on various components of ICT. Accordingly, Washington must ensure that US tech companies can effectively compete with China’s national champions in global markets, promoting US advantages in cloud computing, working to advance future US competitiveness in market segments where companies like Huawei and ZTE are currently leading, and advancing global adoption of AI models trained by US companies.

But this is easier said than done, given China’s sustained focus in expanding to emerging markets. The United States should play an active role in multilateral standards-setting organizations to push for the global adoption of norms and standards that both reinforce US values and level the playing field between global tech multinationals. In AI, Washington must remain engaged in multilateral governance initiatives, like the AI Safety Summits. Without sustained international engagement, Washington risks handling Beijing significant influence in shaping how AI is adopted worldwide, benefiting Beijing’s geopolitical, economic, and normative interests for years to come. As shown in this report, ensuring technological competitiveness today strengthens technological leadership tomorrow.

About the authors

Explore the programs

The Global China Hub tracks Beijing’s actions and their global impacts, assessing China’s rise from multiple angles and identifying emerging China policy challenges. The Hub leverages its network of China experts around the world to generate actionable recommendations for policymakers in Washington and beyond.

The Scowcroft Center for Strategy and Security works to develop sustainable, nonpartisan strategies to address the most important security challenges facing the United States and the world.

1    The term Global South admittedly is an admittedly contested and sometimes ambiguous term to describe countries that fall into various camps including developing, emerging market, and nonaligned groupings; here, the term Global South is useful as a shorthand to describe countries in various stages of economic development in Latin America and the Caribbean (LAC), sub-Saharan Africa, the Middle East and North Africa (MENA), South and Southeast Asia, and Oceania.
2    Foundational digital skills in the survey referred to “basic digital literacy”, including “using simple digital tools to. . .improve individual, business, or farm productivity.” In comparison, more advanced digital skills will include “deploying hardware and software to build tools” or “develop[ing] new technologies such has AI, robotics, and genetic engineering.”
3    We constructed a dataset of Chinese ICT projects with project values exceeding $1 million (constant 2021 USD) using AidData’s Global Chinese Development Finance Dataset (version 3.0). Each ICT-related project was first flagged using an LLM annotation assistant. Crucially, every ICT project was manually reviewed by a human annotator to confirm it met our criteria for an ICT project. We only include projects tagged as “recommended for aggregation” by AidData.
5    Several of DeepSeek’s distilled models, such as the DeepSeek-R1-Distill-Qwen-1.5B, are examples of “lightweight” AI models that, like Google’s Gemma family of models, are “computationally efficient, less resource intensive, and more cost effective
6    Open models are more transparent than their closed-source counterparts and have lower barriers to using the model, which can help promote adoption and experimentation. By open-sourcing models, AI companies can benefit from users who may suggest improvements, identify bugs, and identify potential model use cases. Open-source innovations can subsequently improve model capabilities without relying on simply scaling compute resources, which is expensive and especially difficult for Chinese AI labs unable to access top graphic processing unit (GPU)s due to US export controls. Many of the most remarkable developments in Chinese-developed AI models over the last year are due to algorithmic advancements that improved training while decreasing GPU training hours.

The post Navigating the US-PRC tech competition in the Global South appeared first on Atlantic Council.

]]>
Defense Acquisition University on the Commission on Software-Defined Warfare final report https://www.atlanticcouncil.org/insight-impact/in-the-news/defense-acquisition-university-software-defined-warfare-final-report/ Fri, 11 Apr 2025 19:00:00 +0000 https://www.atlanticcouncil.org/?p=842479 On April 11, Defense Acquisition University published an article highlighting the challenges and recommendations identified in Forward Defense’s Commission on Software-Defined Warfare report.

The post Defense Acquisition University on the Commission on Software-Defined Warfare final report appeared first on Atlantic Council.

]]>

On April 11, Defense Acquisition University (DAU) published an article entitled “Finding the Way on Software-Defined Warfare,” highlighting the enterprise-level challenges identified in Forward Defense‘s Commission on Software-Defined Warfare report, along with the report’s nine key recommendations. The article also explores how DAU supports the Commission’s proposals, particularly by providing training programs to cultivate software talent and by providing entry points for the acquisition workforce to stay informed on emerging developments.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

Forward Defense’s Commission on Software-Defined Warfare aims to digitally transform the armed forces for success in future battlefields. Comprised of a distinguished group of subject-matter and industry commissioners, the Commission has developed a framework to enhance US and allied forces through emergent digital capabilities.

The post Defense Acquisition University on the Commission on Software-Defined Warfare final report appeared first on Atlantic Council.

]]>
Sovereign remedies: Between AI autonomy and control https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/sovereign-remedies-between-ai-autonomy-and-control/ Thu, 03 Apr 2025 17:11:27 +0000 https://www.atlanticcouncil.org/?p=834945 Sovereign AI has gained a foothold in several capitals around the world.

The post Sovereign remedies: Between AI autonomy and control appeared first on Atlantic Council.

]]>
Introduction

Sovereign AI has gained a foothold in several capitals around the world. As Michael Kratisios, the Trump administration’s acting director of science and technology policy, stated in 2024, “Each country wants to have some sort of control over our [sic] own destiny on AI.”1 Analysts have mapped the modes and methods to achieve sovereign AI, and the interplay with antecedents like data sovereignty.2 However, there remains a critical gap: analysis of stated goals for these initiatives and what the core pillars of sovereign AI are, distinct from related concepts.

The goals outlined by governments are varied and wide-reaching: some center on preserving values or culture;3 others focus on the privacy and protection of citizens’ data;4 some initiatives center on economic growth and others of national security;5 and finally, there is a set of concerns around the current global governance vacuum, where in the absence of global frameworks, AI companies must be held accountable through physical presence.

However, each of these stated goals require differing levels of indigenized capability and control and will have varied consequences as a result. This paper will:

  1. Outline the various stated goals of sovereign AI, suggesting illustrative categories.
  2. Hypothesize the reasons for the emergence of sovereign AI as a concept, with an analysis of industry buy-in for this concept.
  3. Propose a streamlined definition of sovereign AI and suggest policy implications.

Defining sovereign AI

Sovereignty is defined as supreme authority within one’s territory, including a Westphalian state system.6Most components of this definition are, however, malleable. What constitutes one’s territory, for instance, needs not be rooted in a fixed point in time. The digitization—and by extension, the datafication—of social and political life has disrupted traditional notions of state sovereignty, which have long been tied to physical borders. Similarly, what constitutes supreme authority within a given territory is similarly varied. There are nonabsolute forms of authority, where sovereignty does not equate to authority over all matters within a territory. Examples include regional institutions like the European Union or specialized subnational systems like those once exercised in Pakistan’s Federally Administrated Tribal Areas (FATA) or India’s Jammu and Kashmir, a union territory.

Roland Paris noted in 2020 the reemergence of older monarchic interpretations of sovereignty, which he identifies with Putin’s Russia and Xi’s China, among others:7

Non-Westphalian understandings of sovereignty have also experienced a resurgence in recent years. Some portray sovereignty as the power of leaders to act outside the constraints of formal rules in both domestic and international politics, or extralegal sovereignty. Others characterize sovereign power as the quasi-mystical connection between a people and their leader, or organic sovereignty.

In the context of information and communication technologies (ICTs), sovereignty has similarly found new forms. This includes data sovereignty, which asserts a country’s legal jurisdiction over all data generated within its boundaries;8 and digital sovereignty, referring to the assertion of state control over information flows, whereby the state both defines and guarantees rights and duties in the digital realm.9Some data sovereignty laws, such as the EU General Data Protection Regulations and India’s Digital Personal Data Protection Act, have extraterritorial application, if data processing relates to a subject/principal within their jurisdiction.10

Sovereignty as a norm is therefore continually challenged, reshaped, and reinterpreted, contrary to beliefs about a post-Westphalian consensus. In the context of the recent artificial intelligence boom, sovereignty has taken on new modes and methods.

Sovereign AI has been defined variously as “a nation’s capabilities to produce artificial intelligence using its own infrastructure, data, workforce and business networks”;11 “countries harnessing and processing their own data to build artificial intelligence domestically, rather than relying on external entities to do so”;12 and as a concept “asserting that the development, deployment, and regulation of AI technologies should . . . align with national laws and priorities.13The most all-encompassing of these is the definition from the United Nations Internet Governance Forum (IGF) Data and Artificial Intelligence Governance Coalition: “The capacity of a given country to understand, muster and develop AI systems, while retaining control, agency and, ultimately, self-determination over such systems.14

The EU AI Act (2024) and the African Union’s Continental AI Strategy (2024) both touch on aspects of AI sovereignty. The 2023 IGF (in Kyoto) saw the launch of the official outcome document of the inaugural UN IGF Data and AI Governance Coalition, centered on sovereign AI. The term shot into mainstream parlance after Nvidia CEO Jensen Huang declared that every country needs sovereign AI at the World Governments Summit in Dubai in February 2024.15

It is well worth noting the context for Huang’s statement, which came at the tail end of an Asia tour where he visited Japan, Singapore, Malaysia, Vietnam, China, and Taiwan.16 This tour culminated in the announcement of several collaborations in support of national large language models (LLMs), national supercomputers, and future telecommunications.

Nvidia reflects a broader trend in an industry which has supposedly embraced a rhetoric of digital sovereignty, in part attributable to regulatory pressures such as the EU General Data Protection Regulation,17 and now the EU AI Act. A speech at a European think tank summit in June 2020 by Microsoft President Brad Smith highlights this trend:

When I look at digital sovereignty initiatives, I see them addressing three goals. One is protection of personal privacy, a second is the preservation of national security, and a third is local economic opportunity. As a global technology player, it’s important for us to advance all three.18

Another example of major AI players embracing sovereign AI includes G42, an Emirati AI company, which boasts partnerships with Microsoft, OpenAI, Nvidia, Oracle, IBM, and Mistral, among others.19 A G42-Politico report identifies an overlap between data sovereignty and sovereign AI, asserting there is an ideal level of data sovereignty, balanced against global coordinated approaches, which can help realize the economic and security benefits of localization.20

Current understandings of sovereign AI both extend the core components of data and digital sovereignty to AI and add a value alignment component. In addition to the loose interpretation of territoriality, and the supreme authority of national law over cyberspace, statements about sovereign AI encapsulate cultural preservation and (subjective) ethics. Dr. Leslie Teo, senior director of AI products at AI Singapore, said in the context of the launch of SEA-LION, an LLM for Southeast Asia languages, “[Western] LLMs have a very particular West Coast American bias—they are very woke.”21 The African Union’s Continental AI Strategy similarly notes that “external influence from AI technologies developed outside Africa may undermine national sovereignty, Pan-Africanism values and civil liberties.”22

However, sovereign AI must not be conflated with individual rights. While some aspects of sovereign AI, including value alignment and legality, may overlap with autonomy and self-determination, there is no simple cause-effect relationship. An actionable and useful definition of sovereign AI must therefore avoid category errors and capture key distinctions from its antecedent terms.

The core components of sovereign AI, recognizing the definition of sovereignty are:

  1. Legality23: The design, development, and deployment of AI should adhere to any applicable laws and regulations.
  2. Economic competitiveness: The development and deployment of AI should create value for the host economy. Some sovereign AI initiatives further require the creation or bolstering of a national AI industrial ecosystem.
  3. National security: AI applications pertaining to critical infrastructure, military, and other functions critical for national security require additional safeguards against disruption.
  4. Value alignment: Due to anticipated wide and deep applications of AI, models should be aligned with national or regional political and constitutional values.

Sovereign AI is therefore a model of AI development and deployment where inputs adhere to a state or political union’s laws and institutional frameworks, and outputs are contextually relevant, secure, and create value for the economy.

Note that this definition is not exclusionary: Countries can turn to external partners to support their sovereign AI efforts if these partnerships adhere to the four core components mentioned above. This definition also recognizes the contemporary evolution of territoriality, such as the fact that digital sovereignty regulations have extraterritorial application with “territory” being expanded to include the digital footprint of populace. Finally, given that sovereignty is an organizing principle for states, not individuals or communities, it concretizes the abstract notion of value alignment by framing it as a constitutional and political concept.

Mapping sovereign AI initiatives

Below is an illustrative list of sovereign AI initiatives.

Conclusion

Sovereign AI as a phenomenon is going to gain momentum, as national governments find “wholesale” AI offerings unsuited to their needs. AI, especially general- purpose AI, requires sizable investments or innovative new methods of data collection, compute (mainly GPUs), related energy infrastructure, and workflow management.

An optimal blend of localization of AI inputs and regulation of outputs for each country could help each one to realize its outlined goals for sovereign AI. In other words, the four components of sovereign AI outlined in this paper—legality, economic competitiveness, national security, and value alignment—will necessarily involve different strategies, with governments weighing each one differently. US AI sovereignty strongly centers on maintaining the country’s leadership as a key driver of American prosperity, a prioritization that has not changed with the change in administration in 2025. Value alignment also holds varied meanings, with some, like the African Union strategy, grounding values in an anti-neocolonial framing, while others like Taiwan’s placing an emphasis on democratic values in opposition to mainland China.

Finally, factors will influence the possibilities of sovereign AI including infrastructure constraints, such as energy production capacity and the availability of water, and trust, both in governments as legitimate arbiters of people’s interests and in industry’s commitment to social good. Nevertheless, for now, the operative word in the future of AI appears to be sovereign.

About the Author

Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center.

Related Content

Explore the program

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

1    Christine Mui, “Welcome to the Global ‘AI Sovereignty’ Race,” Politico, September 18, 2024, https://www.politico.com/newsletters/digital-future-daily/2024/09/18/should-the-u-s-seek-ai-sovereignty-00179910.
2    Pablo Chavez, “Sovereign AI in a Hybrid World,” Lawfare, Lawfare Institute in collaboration with the Brookings Institution, November 2024, https://www.lawfaremedia.org/article/sovereign-ai-in-a-hybrid-world–national-strategies-and-policy-responses; Muath Alduhishy, “Sovereign AI: What It Is, and 6 Strategic Pillars for Achieving It,” World Economic Forum, April 25, 2024, https://www.weforum.org/stories/2024/04/sovereign-ai-what-is-ways-states-building/; and Amanda Kraley, Izabela Kantor, and Rodrigo Gutiérrez, “Sovereign AI Ecosystems: Navigating Global AI Infrastructure & Data Governance,” Politico and G42, September 16, 2024, https://www.politico.eu/wp-content/uploads/2024/09/15/Sovereign-AI-Ecosystems.pdf.
3    “Biased GPT? Singapore Builds AI Model to ‘Represent’ Southeast Asians,” Asahi Shimbun, February 8, 2024, https://www.asahi.com/ajw/articles/15154956.
4    “Rapid Response Information Report: Generative AI: Language Models and Multimodal Foundation Models,” Australia’s Chief Scientist, March 24, 2023, https://www.chiefscientist.gov.au/sites/default/files/2023-06/Rapid%20Response%20Information%20Report%20-%20Generative%20AI%20v1_1.pdf.
5    “Virtual Closed-Door Discussion: Assessing India’s Cybersecurity Administration and Strategy,” Carnegie India convening, October 21, 2024.
6    “Sovereignty,” Stanford Encyclopedia of Philosophy, May 31, 2003, https://plato.stanford.edu/entries/sovereignty/; and “Westphalian State System,” Oxford Reference, https://www.oxfordreference.com/display/10.1093/oi/authority.20110803121924198.
7    Roland Paris, “The Right to Dominate,” International Organization 74, no. 3 (Summer 2020): 453489, https://www.jstor.org/stable/10.2307/27104604.
8    Trisha Ray, “Digital Sovereignty: Data Governance in India,” in Regulating the Cyberspace: Perspectives from Asia, eds. Gisela Eisner and Aishwarya Natarjan, Rule of Law Programme Asia, Konrad Adenauer Stiftung, 2020, 49–64, https://www.kas.de/documents/278334/8513721/Regulating+The+Cyberspace.pdf.
9    ”Trisha Ray, “The Quest for Cyber Sovereignty Is Dark and Full of Terrors,” Observer Research Foundation, May 25, 2020, https://www.orfonline.org/expert-speak/the-quest-for-cyber-sovereignty-is-dark-and-full-of-terrors-66676.
10    Article 3, GDPR: Territorial Scope, European Union, https://gdpr-info.eu/art-3-gdpr/; and Ministry of Electronics and Information Technology, “The Digital Personal Data Protection Act, 2023,” Government of India, Chapter 1, Subsection 2 (b), https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf.
11    Angie Lee, “What Is Sovereign AI,” Nvidia blog, February 28, 2024, https://blogs.nvidia.com/blog/what-is-sovereign-ai/.
12    Mark Nasila, “Sovereign AI: What It Is and Why It Is Reshaping the Future,” ITWeb Africa, October 25, 2024, https://itweb.africa/content/j5alrvQAYOVvpYQk.
13    ”Kraley, Kantor, and Gutiérrez, “Sovereign AI Ecosystems.”
14    ”Luca Belli and Walter B. Gaspar, “AI Transparency, AI Accountability, and AI Sovereignty: An Overview,” in The Quest for AI Sovereignty, Transparency and Accountability Official Outcome of the UN IGF Data and Artificial Intelligence Governance Coalition, eds. Luca Belli and Walter B. Gaspar (FGV Direito Rio: October 2023): 23.
15    Brian Caufield, “Nvidia CEO: Every Country Needs Sovereign AI,” Nvidia blog, February 12, 2024, https://blogs.nvidia.com/blog/world-governments-summit/.
16    Joanna Gao, “Nvidia CEO Jensen Huang Strengthens AI Ties in Thailand and Vietnam amid Sovereign AI Push,” DigiTimes Asia, December 11, 2024, https://www.digitimes.com/news/a20241211PD205/nvidia-ceo-jensen-huang-thailand-vietnam.html; and Bloomberg, “Nvidia CEO Jensen Huang Made a Quiet Lunar New Year’s Trip to China as the Almost $1.5 Trillion Chipmaker Tries to Navigate Biden’s Chip Controls,” via Fortune, January 22, 2024, https://fortune.com/asia/2024/01/22/nvidia-ceo-jensen-huang-lunar-new-year-trip-china-us-biden-chip-controls/.
17    “The History of the General Data Protection Regulation,” European Data Protection Supervisor, accessed December 30, 2024, https://www.edps.europa.eu/data-protection/data-protection/legislation/history-general-data-protection-regulation_en.
18    Microsoft European Affairs (@MicrosoftEU), “Digital Sovereignty is driven by 3 valid concerns that should be addressed,” Twitter, June 24, 2020, https://x.com/MicrosoftEU/status/1275749636465143808. In addition,
Satya Nadella’s speech at the 2015 Digital India Summit, while not explicitly mentioning sovereignty, is a good example of this trend as well; see Times Now, “Satya Nadella, CEO, Microsoft, at Digital India Summit | Narendra Modi in US,” September 27, 2015, https://www.youtube.com/watch?v=eGKNZVRg7VM.
19    G42 website, accessed January 15, 2025, https://www.g42.ai/;  “Microsoft/G42 AI Partnership Explained–Potential Benefits & Risks for U.S. Technological Security,” Video, US House Select Committee on the Chinese Communist Party, July 15, 2024, https://selectcommitteeontheccp.house.gov/media/videos/microsoftg42-ai-partnership-explained-potential-benefits-risks-us-technological; andVikram Barhat, “The Middle East Microsoft, OpenAI Partner Mired in National Security Controversy,” CNBC, August 25, 2024, https://www.cnbc.com/2024/08/25/a-controversial-mideast-partner-to-microsoft-openai-global-ambitions.html.
20    Kraley, Kantor, and Gutiérrez, “Sovereign AI Ecosystems.”
21    “Biased GPT?,” Asahi Shimbun.
22    “Continental Artificial Intelligence Strategy: Harnessing AI for Africa’s Development and Prosperity,” African Union, July 2024, https://au.int/sites/default/files/documents/44004-doc-EN-_Continental_AI_Strategy_July_2024.pdf.
23    Trisha Ray, “Formulating AI Norms: Intelligent Systems and Human Values,” ORF Issue Brief No. 313, September 2019, Observer Research Foundation, https://www.orfonline.org/research/formulating-ai-norms-intelligent-systems-and-human-values.

The post Sovereign remedies: Between AI autonomy and control appeared first on Atlantic Council.

]]>
DeepSeek shows the US and EU the costs of failing to govern AI https://www.atlanticcouncil.org/blogs/geotech-cues/deepseek-shows-the-us-and-eu-the-costs-of-failing-to-govern-ai/ Tue, 01 Apr 2025 20:40:01 +0000 https://www.atlanticcouncil.org/?p=837566 The West must urgently consider what DeepSeek’s R1 model means for the future of democracy in the AI era.

The post DeepSeek shows the US and EU the costs of failing to govern AI appeared first on Atlantic Council.

]]>
Note: This piece was updated on April 4, 2025.

DeepSeek’s breakthrough has made the West reflect on its artificial intelligence (AI) strategies, specifically regarding cost and efficiency. But the West must also urgently consider what DeepSeek’s R1 model means for the future of democracy in the AI era.

That is because the R1 model shows how China has taken the lead in open-source AI: systems that are made available to users to use, study, modify, and share the tool’s components, from its codes to its datasets, at least according to the Open Source Initiative (OSI), a California-based nonprofit. While there are varying definitions of open-source, its application for AI has immense potential, as it can encourage greater innovation among developers and empower individuals and communities to create AI-driven solutions in sectors such as education, healthcare, and finance. The technology, ultimately, accelerates economic growth.

However, according to reports, R1 appears to censor and withhold information from users. Thus, democracies not only risk the loss of the AI technological battle; they also risk falling behind in the race to govern AI and could fail to ensure that democratic AI proliferates more widely than systems championed by authoritarians.

Therefore, the United States must work with its democratic allies, particularly the European Union (EU), to set global standards for open-source AI. Both powers should leverage existing legislative tools to initiate an open-source governance framework. Such an effort would require officially adopting a definition of open-source AI (such as OSI’s) to increase governance effectiveness. After that, the United States and EU should accelerate efforts to ensure democratic values are embedded in open-source AI models, paving the way for an AI future that is more open, transparent, and empowering.

How China overtook the lead

Part of DeepSeek’s success can be understood by the Chinese Communist Party’s (CCP’s) showing signs of incorporating the norm-building of open-source AI into its legal framework. In April 2024, the Model AI Law—a multi-year expert draft led by the Chinese Academy of Social Sciences, which is influential in the country’s lawmaking process—laid out China’s support for an open-source AI ecosystem. Article 19 states that the CCP “promotes construction of the open source ecosystem” and “supports relevant entities in building or operating open source platforms, open source communities, and open source projects.” It encourages companies to make “software source code, hardware designs, and application services publicly available” to foster industry sharing and collaboration. The draft also highlights reducing or removing legal liability for the provision of open-source AI models, providing that individuals and organizations have established a governance system compliant with national standards and have taken corresponding safety measures. Such legal liability would have held developers accountable for infringing the rights of citizens. This is a notable contrast to China’s past laws governing AI that explicitly stated the goal of protecting those rights. The specific provisions in the Model AI Law, albeit a draft, shouldn’t be overlooked, as they essentially serve as a blueprint of how open-source AI is deployed in the country and what China’s models exported globally would look like.

Furthermore, the AI Safety Governance Framework, a document that China aims to use as a guide to “promote international collaboration on AI safety governance at a global level,” echoes the country’s assertiveness on open-source AI. The document was drafted by China’s National Technical Committee 260 on Cybersecurity, a body working with the Cyberspace Administration of China, whose cybersecurity standard practice guidelines were adopted by CCP in September 2024. The framework reads, “We should promote knowledge sharing in AI, make AI technologies available to the public under open-source terms, and jointly develop AI chips, frameworks, and software.” Appearing in a document meant for global stakeholders, the statement reflects China’s ambition to lead in this area as an advocate.

What about the United States and EU?

In the United States, advocates have touted the benefits of open source for some time, and AI industry leaders have called for the United States to focus more on open-source AI. For example, Mark Zuckerberg launched the open-source model Llama 3.1 last year, and in doing so, he argued that open-source “represents the world’s best shot” at creating “economic opportunity and security for everyone.”

Despite this advocacy, the United States has not established any law to promote open-source AI. A US senator did introduce a bill in 2023 calling for building a framework for open-source software security, but the bill has not progressed since then. Last year, the National Telecommunications and Information Administration published a report on dual-use AI foundation models with open weights (meaning the models are available for use, but are not fully open source). It advised the government to more deeply monitor the risks of open-weight foundation models in order to determine appropriate restrictions for them. The Biden administration’s final AI regulatory framework was friendlier to open models: It set restrictions for the most advanced closed-weight models while excluding open-weight ones.

The future of open-source models remains unclear. US President Donald Trump has not yet created any guidance for open-source AI. So far, he has repealed Biden’s AI executive order, but the executive order that replaced it has not outlined any initiative that guides the development of open-source AI. Overall, the United States has been overly focused on playing defense by developing highly capable models while working to prevent adversaries from accessing them, without considering the wider global reach of those models.

Since unveiling the General Data Protection Regulation (GDPR), the EU has established itself as a regulatory powerhouse in the global digital economy. Across the board, countries and global companies have adopted EU compliance frameworks for the digital economy, including the AI Act. However, the EU’s effort on open-source AI is lacking. Although Article 2 of the AI Act briefly mentions open-source AI as an exemption from regulation, the actual impact seems minor. The exemption is even absent for commercial-purpose models.

In other EU guidance documents, the same paradox can be found. The latest General-Purpose AI Code of Practice published in March 2025 acknowledged how open-source models have a positive impact on the development of safe, human-centric, and trustworthy AI. However, there is no meaningful elaboration promoting the development and use of open-source AI models. Even in the EU Competitiveness Compass—a framework targeting overregulation, regulatory complexity, and strategic competitiveness in AI—“open source” is absent.

The EU’s cautious approach to regulating open-source AI stems from the challenge of defining it. Open-source AI is different from traditional open-source software in that it includes pre-trained AI models rather than simply source code. And, of course, the definition from OSI has not yet been acknowledged in the international legal community. The debate over what constitutes open-source AI creates legal uncertainty that the EU is likely uncomfortable to accept. Yet the real driver of inactivity lies deeper. The EU’s regulatory successes, like GDPR, make the Commission wary of exemptions that could weaken its global influence over a technology still so poorly defined. This is a gamble Brussels has, so far, had no incentive to take.

The new power imbalance in AI geopolitics 

China’s push to become technologically self-sufficient, a push which has included solidifying open-source AI strategies, is partly motivated by US export controls on advanced computing and semiconductors dating back at least to 2018. These measures stemmed from US concerns about national security, economic security, and intellectual property, while China’s countermeasures also reflect the broader strategic competition in technological superiority between both countries. The EU, on the other hand, asserts itself in the race by setting the global norms of protecting fundamental rights and a host of democratic values such as fairness and redistribution, which ultimately have shaped the policies of leading global technology companies.

By positioning itself as a leader in open-source AI, China has turned the export and policy challenge into an opportunity to sell its version of AI to the world. The rise of DeepSeek, along with other domestic rival companies such as Alibaba, is shifting the pendulum by reducing the world’s appetite for closed AI models. DeepSeek has released smaller models with fewer parameters for less powerful devices. AI development platform Hugging Face has started replicating DeepSeek-R1’s training process to enhance its models’ performance in reinforcement learning. Microsoft, OpenAI, and Meta have embraced model distillation, a technique that drew much attention with the DeepSeek breakthrough. China has advanced the conversation around openness, with the United States adapting to the discourse for the first time and the EU being trapped in legal inertia, leaving a power imbalance in open-source AI.

China is offering a concerning version of open-source AI. The CCP strategically deploys a “two-track” system that allows greater openness for AI firms while limiting information and expression for public-facing models. Its openness is marked by the country’s historical pattern that restricts the architecture of a model, such as requiring the input and output to align with China’s values and a positive national image. Even in its global-facing AI Safety Governance Framework (in which Chinese authorities embrace open-source AI), the CCP says that AI-generated content poses threats to ideological security, hinting at the CCP’s limited acceptance of freedom of speech and thought.

Without a comprehensive framework based on the protection of democracy and fundamental rights, the world could see China’s more restrictive open-source AI models reproduced widely. Autocrats and nonstate entities worldwide can build on them to censor information and expression while touting that they are promoting accessibility. Simply focusing on the technological performance of China is not sufficient. Instead, democracies should respond by leading with democratic governance.

Transatlantic cooperation is the next step

The United States and EU should consider open-source diplomacy, advancing the sharing of capable AI models across the globe. In doing so, they should create a unified governance framework and work toward shaping a democratic AI future by forming a transatlantic working group on open-source AI. Existing structures, including the Global Partnership on Artificial Intelligence (GPAI), can serve as a vehicle. But it’s essential that technology companies and experts from both sides of the Atlantic are included in the framework development process.

Second, the United States and EU should, through funding academic institutions and supporting startups, promote the development of open-source AI models that align with democratic values. Such models, free from censorship and security threats, would set a powerful contrast to the Chinese models. To promote such models, the United States and EU will need to recognize that the benefits of such models outweigh the risks in the broader picture. Similarly, the EU must also continue leveraging its regulatory advantage; it must also be more decisive about governing open-source AI, even if it means embracing some uncertainty about its legal definition, in order to outpace China’s momentum.

The United States and EU may currently have a rocky relationship. However, US-EU collaboration rather than competition is crucial with China’s ascendence in open-source AI. To take back leadership in this pivotal arena, the United States and European Union must launch a transatlantic initiative on open-source AI that employs forward-thinking policy, research, and innovation in setting the global standard for a rights-respecting, transparent, and creative AI future.


Ryan Pan is a project assistant at the Atlantic Council GeoTech Center.

Kolja Verhage is a senior manager of AI governance and digital regulations at Deloitte.

The views reflected in the article are the author’s views and do not necessarily reflect the views of their employers.

Further Reading

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post DeepSeek shows the US and EU the costs of failing to govern AI appeared first on Atlantic Council.

]]>
Air & Space Forces Magazine on the Commission on Software-Defined Warfare final report https://www.atlanticcouncil.org/insight-impact/in-the-news/air-and-space-forces-magazine-commission-on-softwre-defined-warfare/ Sat, 29 Mar 2025 03:30:00 +0000 https://www.atlanticcouncil.org/?p=837358 On March 28, Air & Space Forces Magazine published an article by Shaun Waterman titled, “Experts: US Military Needs ‘Software Literate’ Workforce, Not Just Coders.”

The post Air & Space Forces Magazine on the Commission on Software-Defined Warfare final report appeared first on Atlantic Council.

]]>

On March 28, Air & Space Forces Magazine published an article by Shaun Waterman titled, “Experts: US Military Needs ‘Software Literate’ Workforce, Not Just Coders.” The piece highlights key recommendations from the final report of Forward Defense’s Commission on Software-Defined Warfare and discussions from its public launch event on March 27.

The report emphasizes the need for a software-literate workforce—not coders, but individuals who can ask the right questions, understand software limitations, and interpret inputs and outputs. This workforce will be essential to truly adopting the Software Acquisition Pathway, which the report recommends modernizing and implementing to achieving both short-term and long-term success in the Pentagon.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

Forward Defense’s Commission on Software-Defined Warfare aims to digitally transform the armed forces for success in future battlefields. Comprised of a distinguished group of subject-matter and industry commissioners, the Commission has developed a framework to enhance US and allied forces through emergent digital capabilities.

The post Air & Space Forces Magazine on the Commission on Software-Defined Warfare final report appeared first on Atlantic Council.

]]>
Chiang, Esper, and Fox published in DefenseNews and C4ISRNET on software-defined warfare https://www.atlanticcouncil.org/insight-impact/in-the-news/chiang-esper-fox-defensenews-c4isrnet-commission-on-software-defined-warfare-report/ Fri, 28 Mar 2025 16:00:00 +0000 https://www.atlanticcouncil.org/?p=837221 On March 28, Mung Chiang, Mark Esper, and Christine Fox published an op-ed highlighting key ideas from the final report of Forward Defense’s Commission on Software-Defined Warfare.

The post Chiang, Esper, and Fox published in DefenseNews and C4ISRNET on software-defined warfare appeared first on Atlantic Council.

]]>

On March 28, Mung Chiang, Mark Esper, and Christine Fox published an op-ed highlighting key ideas from the final report of Forward Defense’s Commission on Software-Defined Warfare. Entitled, “America’s arsenal of democracy needs a software renaissance,” the piece published in DefenseNews and C4ISRNET underscores the critical role of software in future conflicts, “the ability to collect, process and act on data faster than the adversary is critical in prevailing in future conflicts.”

The authors emphasize the Commission’s recommendations, including investing in artificial intelligence enablers, mandating the creation of enterprise data repositories, and shifting toward commercial software acquisition. They argue that by prioritizing data management and commercial software acquisition, the Department of Defense can achieve immediate improvements while laying the groundwork for long-term strategic success.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

Forward Defense’s Commission on Software-Defined Warfare aims to digitally transform the armed forces for success in future battlefields. Comprised of a distinguished group of subject-matter and industry commissioners, the Commission has developed a framework to enhance US and allied forces through emergent digital capabilities.

The post Chiang, Esper, and Fox published in DefenseNews and C4ISRNET on software-defined warfare appeared first on Atlantic Council.

]]>
ExecutiveGov reports on the Commission on Software-Defined Warfare final report https://www.atlanticcouncil.org/insight-impact/in-the-news/jane-edwards-executivegov-commission-on-software-defined-warfare-report/ Fri, 28 Mar 2025 15:00:00 +0000 https://www.atlanticcouncil.org/?p=837299 On March 28, Jane Edwards of ExecutiveGov published an article highlighting the key recommendations from the final report of Forward Defense’s Commission on Software-Defined Warfare.

The post ExecutiveGov reports on the Commission on Software-Defined Warfare final report appeared first on Atlantic Council.

]]>

On March 28, Jane Edwards of ExecutiveGov published an article highlighting the key recommendations from the final report of Forward Defense’s Commission on Software-Defined Warfare. Entitled, “Atlantic Council Calls for DOD to Advance Software-Defined Warfare,” the piece discusses the Commission’s suggestions that advanced software capabilities could elevate the Pentagon’s efficiency, effectiveness, and capacity. 

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

Forward Defense’s Commission on Software-Defined Warfare aims to digitally transform the armed forces for success in future battlefields. Comprised of a distinguished group of subject-matter and industry commissioners, the Commission has developed a framework to enhance US and allied forces through emergent digital capabilities.

The post ExecutiveGov reports on the Commission on Software-Defined Warfare final report appeared first on Atlantic Council.

]]>
Breaking Defense reports on the Commission of Software-Defined Warfare final report https://www.atlanticcouncil.org/insight-impact/in-the-news/carly-welch-breaking-defense-commission-on-software-defined-warfare-report/ Thu, 27 Mar 2025 16:00:00 +0000 https://www.atlanticcouncil.org/?p=837384 On March 27, Carly Welch of Breaking Defense published an article featuring key recommendations made in the final report of Forward Defense’s Commission on Software-Defined Warfare.

The post Breaking Defense reports on the Commission of Software-Defined Warfare final report appeared first on Atlantic Council.

]]>

On March 27, Carly Welch of Breaking Defense published an article titled, “Experts warn Pentagon to embrace software-defined warfare to counter China’s military advantage.” The piece features key recommendations made in the final report of Forward Defense’s Commission on Software-Defined Warfare, emphasizing the urgent need for the Department of Defense to modernize its approach to software and data management. Welch underscores the Commission’s concerns that without swift action, the US could risk losing its technological edge over China. 

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

Forward Defense’s Commission on Software-Defined Warfare aims to digitally transform the armed forces for success in future battlefields. Comprised of a distinguished group of subject-matter and industry commissioners, the Commission has developed a framework to enhance US and allied forces through emergent digital capabilities.

The post Breaking Defense reports on the Commission of Software-Defined Warfare final report appeared first on Atlantic Council.

]]>
Atlantic Council Commission on Software-Defined Warfare: Final report https://www.atlanticcouncil.org/in-depth-research-reports/report/atlantic-council-commission-on-software-defined-warfare/ Thu, 27 Mar 2025 13:00:00 +0000 https://www.atlanticcouncil.org/?p=830221 The Atlantic Council Commission on Software-Defined Warfare presents a software-defined warfare approach, offering recommendations for the DoD to adopt modern software practices and seamlessly integrate them into existing platforms to enhance and strengthen defense strategies.

The post Atlantic Council Commission on Software-Defined Warfare: Final report appeared first on Atlantic Council.

]]>

Table of contents

Recommendations:

  1. Mandate data and invest in AI enablers
  2. Ensure software interoperability and integration
  3. Modernize test and evaluation infrastructure
  4. Enforce commercial as the default approach for software
  5. Transform DoD software requirements
  6. Remove all restrictions on software funding
  7. Measure what matters for DoD software
  8. Enable software talent across the enterprise
  9. Fully establish a DoD software cadre

Executive summary

A profoundly transformed global security environment presents the United States with its most significant geopolitical and geoeconomic challenges since the Cold War—and perhaps since World War II. China, Russia, Iran, and North Korea—together a new “axis of aggressors”—are increasingly collaborating to support their revisionist geopolitical goals and challenge global stability. Meanwhile, US domestic constraints—such as relative-to-inflation flat defense budgets, military recruitment and talent shortfalls, byzantine acquisition processes, and inadequate industrial capacity—severely limit the US ability to adequately deter and address these threats at speed and scale. 

During World War II, US industrial strength and manufacturing capacity decisively factored into the Allies’ victory. Today, however, US defense production capacity falls short of potential wartime demands. In contrast, China’s industrial policies, manufacturing prowess, and strategic focus on software-defined technologies—including artificial intelligence (AI), cloud computing, and development, security, and operations (DevSecOps)—have propelled Beijing to rapidly advance its defense capabilities. 

Maintaining the Department of Defense (DoD) status quo—anchored to a defense acquisition system ill-suited to the rapid tempo of modern technological innovation—places the United States at significant risk. This approach undermines the nation’s ability to effectively deter near-peer adversaries in the short term and jeopardizes its capacity to prevail in a major conflict. 

Addressing these systemic challenges demands a sustained, long-term effort. Meanwhile, there is an urgent need for near-term, high-impact initiatives to bridge existing capability gaps and reestablish an advantage. That is what this report’s concept of software-defined warfare presents. 

Final Report

Report authors: Whitney M. McNamara, Peter Modigliani, and Tate Nurkin

Co-chairs: Mung Chiang, Mark T. Esper, and Christine H. Fox

Commission director: Stephen Rodriguez
Program director: Clementine G. Starling-Daniels
Commission staff: Mark J. Massa, Curtis Lee, Abigail Rudolph, Alexander S. Young

Commissioners

Mung Chiang, president, Purdue University; co-chair of the Commission on Software-Defined Warfare, Atlantic Council  

Mark T. Esper, board director, Atlantic Council; 27th secretary of defense; co-chair of the Commission on Software-Defined Warfare, Atlantic Council

Christine H. Fox, former acting deputy secretary of defense; senior fellow, John Hopkins University Applied Research Laboratory; co-chair of the Commission on Software-Defined Warfare, Atlantic Council 

Steve Bowsher, president, chief executive officer, In-Q-Tel

General James E. Cartwright, USMC (ret.), board director, Atlantic Council; 8th vice chairman, Joint Chiefs of Staff

General Joseph F. Dunford, Jr., USMC (ret.), board director, Atlantic Council; 19th chairman, Joint Chiefs of Staff

Frank A. Finelli, managing director, The Carlyle Group

James “Hondo” Geurts, distinguished fellow, Business Executives for National Security; former assistant secretary of the Navy for Research, Development, and Acquisition, US Department of Defense

Susan M. Gordon, former principal deputy director of national intelligence 

Lieutenant General S. Clinton Hinote, USAF (ret.), former deputy chief of staff, Air Force Futures

Paul Kwan, managing director, Global Resilience Practice, General Catalyst

Ellen M. Lord, former under secretary of defense for acquisition and sustainment, US Department of Defense

John Ridge, CBE, chief adoption officer, NATO Innovation Fund

Nadia Schadlow, senior fellow, Hudson Institute; former US deputy national security advisor for strategy

Lieutenant General Jack Shanahan, USAF (ret.), former director, Joint Artificial Intelligence Center

Trae Stephens, general partner, Founders Fund

Admiral Scott H. Swift, USN (ret.), 35th Commander, US Pacific Fleet

Industry commissioners

Rob Bassett Cross MC, founder, chief executive officer, Adarga; nonresident senior fellow, Atlantic Council 

Prashant Bhuyan, founder, chief executive officer, Accrete AI 

Michael D. Brasseur, chief strategy officer, Saab, Inc.

Todd Bryer, vice president for strategic growth, CAE 

Jordan Coleman, chief legal and policy officer, Kodiak Robotics 

Scott Cooper, vice president, Government Relations, Peraton

Steven Escaravage, president, Defense Technology Group, Booz Allen Hamilton

Jon Gruen, chief executive officer, Fortem Technologies 

Adam Hammer, co-founder, chief executive officer, Roadrunner Venture Studios

Jags Kandasamy, co-founder, chief executive officer, Latent AI 

Rob Lehman, co-founder, chief commercial officer, Saronic Technologies

Joel Meyer, president of public sector, Domino

Sean Moriarty, chief executive officer, Primer AI

Nathan Parker, chief executive officer, Edge Case Research

Gundbert Scherf, co-founder & co-chief executive officer, Helsing

Zachary Staples, founder & chief executive officer, Fathom5

Tyler Sweatt, chief executive officer, Second Front Systems

Dan Tadross, head of federal delivery, Scale AI

Jim Taiclet, chairman, president & chief executive officer, Lockheed Martin 

Chris Taylor, founder, chief executive officer, Aalyria Technologies

Mark Valentine, president, Global Government, Skydio

Advisors

Lieutenant General Michael S. Groen, USMC (ret.), former director, Joint Artificial Intelligence Center

Rob Murray, nonresident senior fellow, Scowcroft Center for Strategy and Security, Atlantic Council

Major General Arnold L. Punaro, USMC (ret.), advisory council member, Scowcroft Center for Strategy and Security, Atlantic Council

Stu Shea, managing partner and strategic advisor, Shea Strategies, LLC

Foreword

The United States stands at the threshold of a new era in defense and national security. Dramatic changes in the global security environment are upending the established world order, presenting new and unexpected challenges. The war in Ukraine, conflict in the Middle East, and rising tensions in the Indo-Pacific underscore shifting power dynamics. At the same time, we are in an age marked by an escalating pace of technological change. Innovations such as the fusion of AI, autonomy, and robotic systems are poised to profoundly influence national security and economic power. This moment demands decisive action to prepare the US military to adapt swiftly to evolving conditions and reclaim its tactical, operational, and strategic advantages. 

An impartial assessment of global geopolitics and geoeconomics reveals significant and widening gaps in US capabilities. These gaps not only undermine deterrence but also place the ability of US military forces to prevail in future conflicts at risk. The shifting geopolitical landscape exposes vulnerabilities in the nation’s approach to capability design, development, fielding, and sustainment. Addressing these gaps is imperative to prepare for emerging threats, yet immediate solutions are also needed to confront present dangers. While the principle of “speaking softly and carrying a big stick” has long guided US foreign policy, it is now imperative that US military power and economic strength are capable of deterring potential adversaries and, if deterrence fails, prevailing in conflict. Software-defined warfare presents a vital opportunity to bridge these challenges, providing a pathway to both near-term readiness and long-term competitive advantage. 

A software-defined mindset and capabilities are essential to modern military readiness. From enterprise solutions to autonomous systems to personnel, software underpins the effectiveness of defense operations. However, Industrial Age, hardware-centric acquisition processes are unsuitable for software systems that need to be updated with the rapid cycle of technological advancement. To preserve its competitive advantages, the DoD must embrace a more agile and integrated approach to software—one that fosters continuous modernization, capitalizes on cutting-edge commercial innovations, and deepens collaboration with allies and partners. 

The Atlantic Council’s Commission on Software-Defined Warfare was convened to address these challenges and identify solutions. Comprising leaders from government, industry, and academia, the commission identified clear, actionable, and meaningful recommendations that will position the DoD for enduring success. This report’s roadmap is organized around three core pillars: technology, process, and people. The recommendations outlined herein propose actionable steps to shape software investments, build a cohesive digital ecosystem, modernize software development practices, and cultivate a skilled and sustainable workforce. Together, these recommendations provide a clear pathway to establishing a software-defined DoD capable of responding rapidly and effectively to emerging threats in an increasingly dynamic security environment. 

As we present these recommendations, we acknowledge the support and insights of the many contributors who have helped shape this vision. We believe this work will provide leaders with the tools and direction needed to build a DoD that is resilient, innovative, and more fully prepared for the future. Now is the time to build a modern, software-defined defense infrastructure to ensure the safety and security of the United States. 

Mung Chiang

President, Purdue University

Mark T. Esper

27th United States secretary of defense

Christine H. Fox

Former acting deputy secretary of defense

Overview

Enterprise challenges

The commission started with a vision for what the future of software-defined modernization and warfare could look like if optimized. Striving to go beyond diagnosing the challenges facing the DoD enterprise, this commission outlined desired outcomes to help the DoD overcome such challenges.

  1. There is an absence of DoD enterprise processes and enablers that rapidly update software with novel capabilities that keep pace with threats.  
  2. The DoD has limited processes or proving grounds to allow end users to experiment with, and rapidly adopt and scale, novel software solutions, including AI and autonomy-enabled systems.
  3. The DoD lacks established best practices for developing or buying software.  
  4. The industry faces challenges in providing and deploying its capabilities due to a lack of transparency and predictability, and other bureaucratic hurdles.  
  5. There is a major shortfall of software pipelines, talent, and resources to meet the demand for software-defined warfare within DoD organizations. 
  6. Systems, capabilities, and platforms are generated in silos. This hinders the integration of systems on the battlefield, creation of an interoperable force structure, and the DoD’s goal of a joint warfighting concept, as well as partner and allied collaboration.  
  7. The absence of a software-centric culture across the DoD impedes the employment of modern DevSecOps, which fosters rapid iterations

Top recommendations

To address these challenges, the Commission recommends that DoD leaders, congressional defense committees, and other executive branch agencies take the following ten high-priority actions to accelerate DoD innovation adoption:

  1. Mandate enterprise data and invest in AI enablers
  2. Ensure software interoperability and integration
  3. Modernize test and evaluation infrastructure
  4. Enforce commercial as the default approach for software
  5. Transform DoD software requirements
  6. Remove all restrictions on software funding
  7. Measure what matters for DoD software
  8. Enable software talent across the enterprise
  9. Fully establish a DoD software cadre

Recommendation 1: Mandate enterprise data and invest in AI enablers

  • The deputy secretary of defense should direct the Chief Digital and Artificial Intelligence Office (CDAO) to track enterprise-wide progress and recommend actions to the deputy secretary and vice chairman of the Joint Chiefs of Staff to accelerate DoD-wide adoption of data best practices. The CDAO should ensure this process prioritizes collecting and categorizing data in a way that makes high-priority data sources readily usable for analysis and refinement for AI training, functional, and operational pipelines. 
  • Resource the CDAO to acquire and sustain unified, shared platforms that support and accelerate the end-to-end development, deployment, and governance of AI solutions—including Machine Learning Operations capabilities, tools for developing, deploying, and reusing models, and reusable AI-ready datasets. 
  • CDAO should consider the best strategy to make these tools accessible to the end-user community across innovation organizations, services, and combatant commands (CCMDs) to empower users to operationalize AI to solve mission-critical problems.  
  • Services should designate a CDAO liaison that helps the services discover what is available to them at the CDAO repository and identify gaps in service-specific investments to ensure department-wide investments are not redundant and better streamline demand for new capabilities.  
  • Service Chief Information Officers (CIOs), in collaboration with the CIO, should be resourced to invest in AI enablers that are domain- and service-specific, and in which the CDAO is unlikely to invest.  
  • Both the CDAO and the services should maintain unclassified and classified datasets of highly relevant DoD use cases that are available for industry to use to demonstrate capability viability.

Success measure: DoD end users are empowered to leverage their domain expertise to experiment with and operationalize robust and governed AI pipelines with best-of-breed capabilities from the industry. AI adoption can be scaled faster and more efficiently because capabilities are built with scale and reproducibility in mind. The DoD saves money by not buying the same capabilities many times over. There is better coordination and transparency across the department on AI adoption and resourcing. 

Notional example: The Army’s 101st Airborne Division realizes the potential of an AI use case for automatic target recognition. Instead of building something from scratch, leadership first engages the CDAO and Army CIO shop to determine what AI pillars are available to them. Using these foundational tools, operational experts spend their time addressing their specific operational problems and experimenting with integrating these new capabilities into their existing decision-making processes. Once it reaches a minimum viable product (MVP), senior leadership makes plans to integrate the capability to be part of Next Generation Command and Control (NGC2), or C2 Next. 

Recommendation 2: Ensure software interoperability and integration

  • To ensure interoperability between new capabilities being adopted, service CIOs, in coordination with the DoD CIO, should mandate 
    • Modular Open Systems Approach (MOSA) frameworks applied to the maximum extent practical; 
    • defining modules and leveraging Application Programming Interfaces (APIs) and modular system interfaces to enable data interchange between disparate platforms;  
    • industry and government co-collaborated reference architecture for multi-vendor environments as a best practice; 
    • industry, where possible, ensuring the capabilities it provides to different parts of the DoD can interoperate with one another; and
    • when feasible, reference architectures are shared with allies and partners to streamline coalition interoperability.  
  • To aid in interoperability with allies and partners, these best practices should be shared as early and often as possible with partners through existing allied technical exchanges.
  • Service chiefs should designate one Program Executive Office (PEO) to
    • Consolidate the development, acquisition, management, and modernization of non-proprietary mission integration tools under a dedicated program office within the designated PEO shop to elevate the role of mission integration. 
    • The designated PEO should leverage simulation tools to imitate the feasibility of the technical integration to 
      • ensure the successful integration of new and legacy systems, including the use of open-computer architecture to facilitate the deployment of capability on associated hardware;  
      • create demand signals for software mission integration tools; and 
      • identify new software-enabled capabilities that can enable SoS warfare.  

Success measure: Services are incentivized to proactively establish open compute requirements and identify seams between capabilities that would prevent them from carrying out their highest-priority missions and creating acquisition pathways for mission integration tools. 

Notional example: The Navy’s PEO for integrated warfare systems (IWS) is designated as the Navy’s “effects” organization. PEO for IWS identifies three relevant operational problems and begins simulating and combining existing force structures to address them. IWS 1.0 stands up with the authority to procure and sustain mission integration tools identified during simulation exercises, as well as to capture Tactics, Techniques, and Procedures (TTPs) in which end users creatively overcome inorganic integration.

Recommendation 3: Modernize test and evaluation infrastructure

  • In partnership with CDAO and the Defense Innovation Unit (DIU), charge the Test Resource Management Center (TRMC) and resource it effectively to provide the digital infrastructure to provide developmental and operational testing proving grounds for innovation organizations leading on commercial software adoption. 
  • The TRMC should partner with industry to explore metrics for vendor self-certification for both test and evaluation (T&E) and verification and validation (V&V) for more mature vendors that have invested in their own state-of-the-art capabilities. This measure will both alleviate the department being a bottleneck to deployment and help to rapidly deploy capabilities that have met the required T&E thresholds co-developed by the TRMC.  
  • The TRMC, in partnership with innovation organizations and Office of the Secretary of Defense (OSD) leaders, should establish joint operational testing and development testing teams that share data, analysis, and tooling across development and deployment stages. This approach should reduce barriers, streamline the test process, and provide continuous system performance improvement, while also incentivizing a DevSecOps pipeline for T&E that is informed by and applies industry best practices for enterprise scalability, advanced analysis, and data sharing. 

Success measure: Simulating capability viability becomes a widely accessible and organic part of validating and testing digitally enabled technologies. In addition, metrics are established to drive progress toward the automation of qualification processes and alternative certification paths. This adoption helps create a pipeline that rapidly scales the deployment of robust and trusted software-defined capabilities. 

Notional example: The TRMC invests in digital infrastructure focused on testing drones’ ability to swarm to overwhelm enemy defenses. The DIU uses this infrastructure to quickly validate compelling candidates for its Commercial Service Openings submissions rapidly and iteratively. The initial testing helps identify existing deficiencies—potentially including adversarial embedded code in a commercial component—as well as best practices for managing the data flows required to monitor the performance of these capabilities, and cross-functional teams organized to begin addressing the problem. 

Recommendation 4: Enforce commercial as the default approach for software

  • Requirements, acquisition, and contracting executives install checkpoints in the early phases of software-intensive programs to enforce statutory preferences for commercial software. Require added justification and approvals to pursue a non-commercial software solution. 
  • Service Chief Technology Officers (CTOs) and the DIU align DoD and industry groups to provide enterprise market intelligence and due diligence for in-depth insights into the commercial software market and include those of allies and partners. Service CIOs and the DIU should leverage or establish a platform to share these insights. These offices should publish and maintain a clearer software total addressable market (TAM) by technology segments. This roadmap should outline how they plan to leverage software as part of their annual budget documents to better incentivize and shape industry research and development. This TAM should map to commercial TAMs to identify dual-use or DoD-unique software. 
  • Update Department of Defense Instruction (DODI) 5000.87 on the software acquisition pathway and related acquisition policies and regulations to require program managers and contracting officers to capture in software acquisition and contracting strategies that they pursued commercial solutions to the maximum extent practicable. This should include  
    • engaging industry, industry-focused organizations, and consortia to communicate their needs and understand existing solutions;  
    • capturing holistic timelines and costs of buying commercial solutions compared to developing new software (contracting, acquisition, development, integration, test, and updates); 
    • ensuring contracting requirements are captured in a manner that would not preclude viable commercial solutions as partial or whole solutions to address the capability needs; 
    • ensuring contract strategies do not preclude commercial solutions and that they enable leading software vendors and nontraditional defense companies to compete; 
    • enabling DoD users and industry to rapidly demonstrate, prototype, and experiment with commercial solutions for defense applications; 
    • working with testers and certifiers to understand cybersecurity, integration, and other factors to assess the risks and processes of using the software in the defense domain; 
    • ensuring prime contractors and subcontractors default to commercial solutions; 
    • identifying how modular open systems, common interfaces, and standards are leveraged; 
    • publishing the non-commercial item determination in the solicitations for custom software development to allow vendors to appeal that decision, if justified; 
    • ensuring realistic intellectual-property (IP) strategies avoid unrealistic demands for source code while enabling the DoD to update or pivot if costs or performance are unsuitable; 
    • having acquisition sponsors provide supporting justification if commercial solutions are not viable and new development is warranted; and 
    • ensuring requirements and acquisition approving officials or boards must validate the commercial solution analysis early in the process.
  • The services, in collaboration with the defense acquisition executive, Defense Acquisition University, DIU, and the CDAO, should expand guidebooks and training for acquisition and requirements professionals on effectively leveraging commercial software. These protocols should be maintained online and regularly updated with insights and resources from across the DoD, government, and industry. They shall include the documentation and compliance tasks avoided by using commercial software. Program offices and portfolio executives should provide regular inputs to guide the community on best practices, lessons learned, and adoption metrics. 
  • Service CTOs, in partnership with the DIU and the Office of the Under Secretary of Defense for Research and Engineering, should meet quarterly to review software research and development efforts by science and technology (S&T) organizations to minimize duplication with the commercial sector. They should also incentivize organizations charged with developing concepts of operations to do so collaboratively, based on consistent industry engagement, to understand the state of play in commercial technologies that can be leveraged for warfighting missions. CTOs and CIOs should have authority to work with the PEOs to co-direct software factory funding. This authorization will ensure the factories focus on the intended objectives and can achieve the performance metrics developed per the Software Modernization Implementation Plan. Based on a clear inventory of platforms, services, and personnel, the CTOs and CIOs, in partnership with the PEOs, should adjust investments that maximize efficiencies and effectiveness. These adjustments could include reducing personnel billets and increasing software licenses. These factories should enable increased speed and quality of deploying code to various environments while maximizing interoperability and cybersecurity. PEOs, CTOs, and CIOs should hold software factory leadership accountable to continuously improve performance metrics and enable software-intensive acquisition programs and operations on the tactical edge. Similarly, the CTOs and CIOs should be accountable to continuously improve enabling policies, resources, authorities to operate, and reciprocity across organizations and the services. 

Success measure: The DoD identifies and tracks commercial software acquisition metrics and TAM. The DoD demonstrates a significant increase in commercial software usage, particularly for systems with well-bounded, government-defined modular system interfaces. This approach improves system cost, schedule, and performance.  

Notional example: One of the Army’s autonomy programs deviates from its strategy of a lengthy government-developed autonomy stack and rapidly acquires commercial software from leading vendors. The program saves years in development and millions in costs, while delivering higher-quality software to operations faster. 

Recommendation 5: Transform DoD software requirements

  • The DoD should exempt all software requirements below the Major Defense Acquisition Program thresholds from the Joint Capabilities Integration and Development System (JCIDS) approval processes. This exemption should include requirements for new software capabilities and software upgrades to legacy systems, regardless of the acquisition pathway used. 
  • Service requirements organizations—in collaboration with Joint Staff J8 forces, acquisition executives, and software leaders—should establish separate, yet complementary, structures, processes, and training to manage software requirements in a streamlined, dynamic, and collaborative environment.
    • While a high-level document might be used to capture initial operational capability needs, the bulk of software requirements will be managed via dynamic backlogs with active stakeholder engagements.  
    • Policies should delineate hardware and software requirements and enable each to operate on separate timelines and processes. When capabilities reach appropriate maturity levels during system development, use integrated hardware-software testing, digital engineering, modeling, and simulation to verify desired system performance. 
    • Requirements should enable operational agility measured in days and weeks, tailoring for both global and regional needs across the full range of military operations, and should enable operational commands to define and tailor capabilities based on edge-generated data, while providing insight to service software capabilities.  
  • Service requirements organizations should update policies to require sponsors to provide written justification in an appendix to the requirements document or a companion document, demonstrating that they pursued commercial solutions to the maximum extent practicable. This includes identifying how the requirements community, through the acquisition community, actively engaged industry and the DoD S&T ecosystem to 
    • communicate operational needs, challenges, and environments;  
    • understand what commercial solutions exist, the existing applications of these solutions, and the emerging software capabilities that could have military applications; 
    • capture requirements in a manner that would not preclude viable commercial solutions as partial or whole solutions to address the capability needs; and 
    • foster discussions between the DoD and industry to reduce barriers to buying commercial solutions.

Success measure: Each of the military services update their software requirements processes to enable greater speed, agility, and quality. Updated training, guidance, and resources enable the requirements and acquisition communities to successfully adopt modern software practices. 

Notional example: A major weapons system was unable to detect or react to adversary drones in theater. Through a dynamic software requirements process, this threat becomes the top priority for the next software release. The vendors work closely with operators and testers to rapidly iterate on software upgrades that drastically improve mission operations within weeks.  

Recommendation 6: Remove all restrictions on software funding

  • The DoD should immediately discontinue the Budget Authority-8 pilots and implement the pilot intent. 
  • The DoD comptroller, in collaboration with service comptrollers and congressional appropriations staff, should update the Financial Management Regulation (FMR) to enable the DoD to acquire, update, operate, and sustain software capabilities with available Research, Development, Test, and Evaluation (RDT&E), procurement, or Operation and Maintenance (O&M) funding appropriated for the capability. This echoes the congressionally directed Planning, Programming, Budgeting, and Execution (PPBE) Reform Commission’s recommendation 11A.
  • The DoD comptroller should issue a policy memo for immediate action and clarification while adding these changes to the ongoing comprehensive FMR updates per the PPBE Reform Commission.  
  • DoD and service comptrollers should communicate guidance on implementing the changes across the workforce. 
  • The language would enable any funding appropriated for a software capability to be used regardless of the software activities (e.g., new development versus maintenance) or how it is acquired (e.g., development, Commercial Off the Shelf (COTS), or as a service). This new language should enable 
    • rapid acquisition and delivery of leading software capabilities;
    • improved responsiveness to changes in threats, operations, and technologies; and 
    • reduced operational, cybersecurity, and programmatic risks. 

Success measure: The DoD comptroller issues a software funding directive removing appropriation restrictions and provides clear direction to the workforce on flexible software funding execution. 

Notional example: To meet a critical operational requirement, a program explores a range of software acquisition and contracting strategies unburdened by the mix of funding appropriations.  

Recommendation 7: Measure what matters for DoD software

  • The acquisition executives’ staff should collaboratively develop new software metrics for most acquisition programs. PEOs, services, agencies, and the OSD should compile and share quarterly or annual reports across the DoD workforce and leadership to provide visibility into trends, best practices, and enterprise issues to drive regular discussions and actions on how to accelerate delivery. These metrics often identify program trends and issues to drive corrective action and continuous improvement. The Navy’s PEO Digital established World-Class Alignment Metrics (WAMS), which are a model for others to follow. These reports should include the following metrics. 
    • Deployment frequency: The number of software updates deployed to the operational environment (production) in the last year (or time between deployments). Goal: more than once per week. 
    • Time to initial deployment: Time from the initiation of software development to the date the initial software capabilities are deployed to an operational environment. 
    • Automated testing use and timelines: Program and portfolio use of automated testing and testing timelines. Goal: daily automated testing, development and operational testing timelines declining.
    • Mean times to restore (MTTR): The average amount of time it takes to address a critical vulnerability or issue, including testing, certifying, and authority to operate. Goal: less than one day. 
    • API use: Total API usage each week or month to enable interoperability and data sharing across applications. Goal: increasing usage each month.
    • Production software defect density: Defect density of production software in operations each month. Goal: heavily domain dependent.
    • Security vulnerabilities: Number of security vulnerabilities identified and remediated. Goal: heavily domain dependent.
    • Change failure rate: Percentage of software changes that resulted in system disruptions, including downtime, errors, or negative impacts on users. Goal: less than 10 percent and heavily domain dependent.
    • Customer satisfaction: Quantitative metrics or qualitative value assessments of customer satisfaction.  Goal: greater than 80 percent of customers rate software high value.
    • User engagement: Number of user engagements per month by software developers. Goal: end users engaged weekly.
    • Software reuse: Number of acquisition programs able to reuse software capabilities and infrastructure. Goal: increasing reuse each month.
  • The focus of the metrics and subsequent actions at the program, portfolio, and enterprise levels is to continuously deliver impactful software to the user communities to improve mission impact. Each program and organization might have different objectives or challenges to address, such as release velocity, software quality, or user satisfaction. Some of these may have competing forces that must be managed (e.g., quality vs. speed). Defense of the Realm Act’s annual Accelerate State of DevOps report provides industry-leading metrics for software, including levels for elite, high, medium, and low performance. The DoD should strive toward these commercial goals as objectives and tailor performance levels to unique DoD environments. 
  • Major programs and software-intensive portfolios should map out the processes to develop, test, certify, and deploy software, including actual timelines for each phase; key stakeholders involved (by name or organization); key bottlenecks; the opportunities to streamline software delivery timelines; and how stakeholders are accountable to accelerate software delivery speed, manage operational and development risks, and ensure high-quality and secure software. Furthermore, programs and portfolios should identify where additional resources (personnel, tools, and services) at a program, portfolio, or enterprise level would enable speed of delivery. These metrics are more for internal DoD operations, with a subset that might be shared with Congress or publicly. 

Success measure: The military services and related organizations track, share, and use a core set of software metrics across the defense enterprise and leverage insights for key decisions, investments, and continuous improvement in speed, quality, reuse, and user satisfaction (mission impact).  

Notional example: A PEO of a software-intensive portfolio has an online dashboard of software metrics that is integrated into program and portfolio operations. Program, portfolio, and policy decisions are made based on these metrics, with the workforce culture focused on leaning out processes and barriers to enable rapid, iterative, and quality software deliveries to operations. Acquisition professionals and vendors are incentivized to continuously improve.  

Recommendation 8: Enable software talent across the enterprise

  • Develop an extensive, connected, layered, and modular software-centric training program that involves both digital and in-person learning and incorporates the specific requirements of different roles and missions across the force. The objective of this effort is to increase awareness of the importance of software to DoD operations, instill a basic to intermediate-level understanding of commercial software best practices and agile software development and their value, and build the skills required to more effectively integrate and operate software in specific roles.  
  • Specifically, the DoD should do the following. 
    • Partner with leading academic institutions in software development to create a curriculum for an approximately week-long in-person or hybrid training course tailored to senior leaders in the DoD. This executive training curriculum should concentrate on commercial software development best practices and the importance of software to mission execution for senior leaders in the DoD. Training emerging and current senior leaders on these topics can help the DoD develop leaders more willing to create the conditions and culture that will facilitate accelerated adoption.  
    • Leverage and expand existing successful mechanisms and models for software training, such as the Army Software Factory, and access to digital training libraries at both non-DoD and DoD academic institutions.
    • Defense education institutions across the DoD should enrich training to deepen understanding of the importance of software, commercial software best practices and development approaches, and integration of software into DoD activities. The course curriculum should engage and harness insights from leading software experts in industry, as well as in academia, to determine the skill sets and knowledge bases most relevant to the defense context. 
  • In addition to enhancing software literacy through training, the DoD needs to scale formal software career fields and paths for military and civilian personnel to harness the software talent for new and expanded roles. For example, in February 2024, the Air Force reestablished warrant officers for information technology (IT) and cyber career fields to improve technical expertise in cyber and information technologies.  
  • As part of this effort, the DoD should increase opportunities for identified DoD software-focused professionals to interact with both traditional defense industry companies and commercial companies involved in developing software for the DoD. This should include, but not be limited to, embedding DoD talent in these companies for several months to gain firsthand experience in software development cycles and challenges associated with software acquisition. The ability to engage more closely with commercial industry should also extend to the CCMDs, which should expand opportunities for operators to train and experiment directly with commercial industry through exercises such as the Army’s Scarlet Dragon, among others.  

Success measure: The DoD increases software and technical literacy across the enterprise through scalable training tailored to different DoD levels and roles. The DoD creates opportunities for the identification, enhanced training, and deployment of software talent that can be deployed across the organization to drive software adoption and use.  

Notional example: A Navy officer with demonstrated software competence is placed in a leading commercial software company that supports the DoD on a six-month rotation or internship. The officer learns from product developers and product managers to understand commercial development and improvement processes and brings this knowledge back to help operators in a CCMD more efficiently and effectively operate software-defined capabilities. 

Recommendation 9: Fully establish a DoD software cadre

  • The DoD should recruit fifty to one hundred experienced software engineers in modern development environments and place them in key roles across the enterprise. These individuals’ expertise will be used to inform decision-makers on software pipelines, architectures, and leading commercial solutions. They can address key software issues and guide efforts to develop software requirements, acquisition strategies, integration, certification, and employment of software. They can be placed in prominent roles across the DoD, including program management offices and portfolios responsible for acquiring software capabilities; CIO, software factories, and AI and data organizations focused on enterprise services; in operational commands that need to rapidly iterate on tactics and software upgrades; and as executives who oversee major programs, shape budgets, and lead combat operations. Members of this cadre would operate as a network, potentially rotating and surging to meet prioritized problems related to software acquisition, integration, and employment, and sharing best practices and insights.
  • Candidates can be hired in a full-time role using existing hiring authorities such as Highly Qualified Experts. They can also be engaged on a temporary or episodic basis through commercial talent exchange programs such as CDAO’s AI and Data Acceleration program or through Search Generative Experiences to provide iterative specialized services for a restricted number of days throughout the year. The services should also implement direct commissioning of willing experienced software engineers in the reserves, up to and including the general officer level (as is done for specialized roles such as doctors and lawyers) and should also identify and engage leading software talent already serving in the reserves, similar to the Marine Innovation Unit approach. Programs like GigEagle help identify talent in the reserves for short-term problem sets. By leveraging reservists throughout the year, the DoD can capitalize on existing expertise while mitigating financial and professional risks for those working with the DoD. 
  • Increasing reliance on short-term commercial or reservist software talent will necessitate a review and refinement of conflict-of-interest rules to balance the need to protect the DoD from the risk of providing companies unfair advantages and the need to make it easier for top-level talent to move between the DoD and the commercial sector. 
  • In addition to meeting current demand, the DoD should partner with academic institutions to develop talent pipelines of individuals who are educated and certified in commercial software processes and engineering as well as in the DoD processes and requirements. The DoD should work with interested institutions to develop curriculum and certification criteria that will allow students to be fast-tracked into the DoD software cadre positions.  

Success measure: The DoD successfully recruits an increased number of software experts and solutions architects over the next two years to advise on software development, acquisition, and adoption within program offices and CCMDs in particular, while also building a pipeline of software-focused talent. 

Notional example: Cadre members placed in program offices use their expertise to understand the significance of decisions a vendor has made in its software development process and inform program managers and acquisition officers on the implications that development decisions hold for future integration and certification. This guidance allows acquisition professionals to make decisions better informed by downstream considerations, reducing costs and time associated with integration, certification, and upgrading of critical software systems. 

Conclusion

The commission’s report presents clear, actionable recommendations and outlines the desired outcomes to address a critical aspect of modern defense and security. While the adoption of software-defined warfare currently poses a challenge, it is also an area of a defining opportunity. The rapidly shifting geopolitical landscape, marked by an axis of aggressors, demands immediate and decisive action to maintain US strategic advantage. If these recommendations are fully implemented, the United States will possess a modern, agile, and resilient defense infrastructure that is capable of fostering a robust software foundation that will bolster the capabilities of US hardware, while streamlining interoperability across services, allies, and partners. However, failure to act will leave the nation vulnerable and unable to adequately adapt to rapidly evolving threats. The time to act is now—while the United States prepares for the challenges of tomorrow, software-defined warfare provides a timely and practical solution to strengthen US defense capabilities today. Leaders in the DoD, Congress, and the private sector should work to implement these recommendations with a sense of urgency—the members of this commission stand by to help them do so. At stake is nothing less than the stability of the US-led, rules-based international order and the decades of unprecedented peace and prosperity it has undergirded. 

About the authors

Mung Chiang

Board director and co-chair of the commission, Atlantic Council; president, Purdue University

Mung Chiang is the president of Purdue University and the Roscoe H. George distinguished professor of electrical and computer engineering. Prior to being elected university president in 2022, he was the John A. Edwardson dean of the college of engineering and executive vice president for strategic initiatives at Purdue University.

Chiang received his BS (1999), MS (2000) and his PhD (2003) from Stanford University and an honorary doctorate (2024) from Dartmouth College. Before 2017, Chiang was the Arthur LeGrand Doty professor of electrical engineering and an affiliated faculty in computer science and in applied mathematics at Princeton University.

He founded the Princeton EDGE Lab in 2009 and co-founded several startup companies and industry consortia since the early years of edge computing. Most of his twenty-six US patents are licensed for network deployment. He co-authored two textbooks based on massive open online courses: Networked Life (2012) and Power of Networks (2016). For his research in communication networks, wireless technology, and network optimization, he received the NSF Alan T. Waterman Award (2013), as well as the IEEE Founders Medal (2025), the IEEE INFOCOM Achievement Award (2022), the IEEE Kiyo Tomiyasu Award (2012), and the Guggenheim Fellowship (2014). He was elected to the American Academy of Arts and Sciences (Class of Mathematical and Physical Sciences 2024), the National Academy of Inventors (2020) and the Royal Swedish Academy of Engineering Sciences (2020).

In 2020, as the Science and Technology adviser to the US secretary of state, Chiang initiated tech diplomacy programs in the US government. In 2024, he started serving on the inaugural board of the US Foundation for Energy Security and Innovation, and was elected to the Board of Directors of the US Olympic and Paralympic Committee as an independent director.

Mark T. Esper

Board director and co-chair of the commission, Atlantic Council; 27th US secretary of defense

Mark T. Esper served as secretary of defense from 2019-2020, and as secretary of the army from 2017-2019. A distinguished graduate of West Point, he spent twenty-one years in uniform, including a combat tour in the Gulf War. Esper earned a PhD from George Washington University while working on Capitol Hill, at the Pentagon as a political appointee, and as a commissioner on the US-China Economic and Security Review Commission. He was also a senior executive at a prestigious think tank, two business associations, and a Fortune 100 technology company. Esper is the recipient of multiple civilian and military awards. He currently sits on several public policy and business boards. 

Christine H. Fox

Board director and co-chair of the commission, Atlantic Council; former acting deputy secretary of defense

Christine Fox is a senior fellow at Johns Hopkins Applied Physics Laboratory (JHU/APL). Previously, she was the assistant director for policy and analysis at JHU/APL, a position she held from 2014 to early 2022. Before joining APL, she served as acting deputy secretary of defense from 2013 to 2014 and as director of Cost Assessment and Program Evaluation (CAPE) from 2009 to 2013. As director of CAPE, Fox served as chief analyst to the secretary of defense. Prior to her DoD positions, she served as president of the Center for Naval Analyses from 2005 to 2009, after working there as a research analyst and manager since 1981. Currently, she also serves on many governance and advisory boards including the Strategic Competitive Studies Project, Palantir Technologies, Muon Space, DEFCON AI, and Brown Advisory. Fox holds a bachelor’s and master’s degree in applied mathematics from George Mason University. She is a three-time recipient of the Department of Defense Distinguished Public Service Medal and of the Army’s Decoration for Distinguished Civilian Service. 

Whitney M. McNamara

Senior vice president, Beacon Global Strategies; nonresident senior fellow, author, Commission on Software-Defined Warfare, Atlantic Council

Whitney McNamara is a senior vice president at Beacon Global Strategies where she works with disruptive technology companies. She is also a co-author of both the Atlantic Council’s Commission on Defense Innovation Adoption and Commission on Software-Defined Warfare reports. Previously, McNamara worked in the Office of the Secretary of Defense for Research and Engineering, where she led the S&T portfolio of the Defense Innovation Board and as a technology policy subject matter expert at the DoD Chief Information Office. Prior, she was a senior analyst at the national security think tank Center for Strategic and Budgetary Assessments, where she worked at the intersection of future operation concepts and emerging technology adoption and advised the Department of Defense on technology acquisition strategies. 

Peter Modigliani

Senior advisor, Govini; author, Commission on Software-Defined Warfare, Atlantic Council

Peter Modigliani is a senior advisor at Govini, advising USD(A&S) and ASD(A) on strategic acquisition initiatives. Prior to that, he was a vice president at Beacon Global Strategies. Modigliani subsequently served as a defense acquisition leader within the MITRE Corporation, enabling the DoD and intelligence community to deliver innovative solutions with greater speed and agility. He works with acquisition and CIO executives, program managers, the Section 809 Panel, congressional staffs, industry, and academia to shape acquisition reforms, strategic initiatives, and major program strategies. Prior to MITRE, he was an assistant vice president with Alion Science and Technology. Modigliani began his career as an Air Force program manager for C4ISR programs. 

Tate Nurkin

Founder, OTH Intelligence Group; author, Commission on Software-Defined Warfare, Atlantic Council

Tate Nurkin is a nonresident senior fellow with the Atlantic Council’s Forward Defense and Indo-Pacific Security Initiative in the Scowcroft Center for Strategy and Security. He is also the founder of OTH Intelligence Group.

Before establishing OTH Intelligence Group in March 2018, Nurkin spent twelve years at Jane’s by IHS Markit where he served in a variety of roles, including managing Jane’s Defense, Risk, and Security Consulting practice. From 2013 until his departure, he served as the founding executive director of the Strategic Assessments and Futures Studies (SAFS) Center, which provided thought leadership and customized analysis on global competition in geopolitics, future military capabilities, and the global defence industry.

Substantively, Nurkin’s research and analysis has a particularly strong focus on US-China competition, defense technology, the future of military capabilities, and the global defense industry and its market issues. He also specializes in the design and delivery of alternative futures analysis exercises such as scenario planning, red teaming, and wargaming.

Nurkin is a frequent author and speaker on these overlapping research priorities. For example, he was the lead author of the US-China Economic and Security Review Commission’s report entitled China’s Advanced Weapons Systems, which was published in May 2018, and has provided testimony to the Commission on two occasions. In March 2019, he was featured on a Center for Strategic and International Studies China Power podcast on China’s unmanned systems. He was the lead author of the Atlantic Council’s 2019 strategy white paper on artificial intelligence.

He previously worked for Joint Management Services, the Strategic Assessment Center of SAIC, and the Modeling, Simulation, Wargaming, and Analysis team of Booz Allen Hamilton. From 2014 to 2018 he served consecutive two-year terms on the World Economic Forum’s Nuclear Security Global Agenda Council and its Future Council on International Security, which was established to diagnose and assess the security and defense implications of the Fourth Industrial Revolution.

Nurkin holds a MS in international affairs from the Sam Nunn School of International Affairs at Georgia Tech and a BA in history and political science from Duke University. He lives in Charlotte, NC.

Stephen Rodriguez

Managing partner, One Defense; senior advisor and study director of the Commission on Software-Defined Warfare, Forward Defense, Scowcroft Center for Strategy and Security, Atlantic Council

Stephen Rodriguez is a senior advisor with the Forward Defense program at the Atlantic Council’s Scowcroft Center for Strategy and Security and the managing partner of One Defense, a strategic advisory firm that leverages machine learning to identify advanced software and hardware commercial capabilities and accelerate their transition into the defense industrial base. He is also an investor at Refinery Ventures, an early-scale fund investing in dual-use technologies across the country.

Rodriguez began his career at Booz Allen Hamilton shortly before 9/11 supporting its national security practice. In his capacity as an expert on game theoretic applications, he supported the United States Intelligence Community, Department of Defense, and Department of Homeland Security as a lead architect for the Thor’s Hammer, Schriever II/III and Cyber Storm wargames. He subsequently was a vice president at an artificial intelligence company (Sentia Group) and served as chief marketing officer for an international defense corporation (NCL Holdings). Rodriguez serves as a board director or board advisor of ten venture-backed companies (Applied Intuition, Duco, Edgybees, Firestorm, Titaniam, Ursa Major Technologies, Vantage Robotics, WarOnTheRocks, ZeroMark, and Zignal Labs). He is a special advisor at America’s Frontier Fund, a commission director at the Atlantic Council and a life member at the Council on Foreign Relations. Rodriguez received his BBA degree from Texas A&M University and an MA degree from Georgetown University’s School of Foreign Service. He is published in Foreign Policy, WarOnTheRocks, National Review, and RealClearDefense. 

Clementine G. Starling-Daniels

Program director, senior resident fellow, Forward Defense, Scowcroft Center for Strategy and Security, Atlantic Council

Clementine G. Starling-Daniels is the director of the Atlantic Council’s Forward Defense program and a resident fellow within the Scowcroft Center for Strategy and Security. In her role, she shapes the Center’s US defense research agenda, leads Forward Defense’s team of nine staff and forty fellows, and produces thought leadership on US security strategies and the evolving character of warfare. Her research focuses on long-term US thinking on issues like China’s and Russia’s defense strategies, space security, defense industry, and emerging technology. Prior to launching Forward Defense, Starling served as deputy director of the Atlantic Council’s Transatlantic Security team, specializing in European security policy and NATO.

From 2016, she supported NATO’s Public Diplomacy Division at two NATO summits (Brussels and London) and organized and managed three senior Atlantic Council task forces on US force posture in Europe, military mobility, and US defense innovation adoption. During her time at the Atlantic Council, Starling has written numerous reports and commentary on US space strategy, deterrence, operational concepts, coalition warfare, and US-Europe relations. She regularly serves as a panelist and moderator at public conferences. Among the outlets that have featured her analysis and commentary are Defense One, Defense News, RealClearDefense, the National Interest, SpaceNews, NATO’s Joint Air and Space Power Conference, the BBC, National Public Radio, ABC News, and Government Matters, among others. Starling was named the 2022 Herbert Roback scholar by the US National Academy of Public Administration. She also served as the 2020 security and defense fellow at Young Professionals in Foreign Policy. Originally from the United Kingdom, Starling previously worked in the UK Parliament focusing on technology, defense, Middle East security, and Ukraine. She also supported the Britain Stronger in Europe campaign, championing for the United Kingdom to remain within the European Union. She graduated with honors from the London School of Economics with a BS in international relations and history and is an MA candidate in security studies at Georgetown University’s School of Foreign Service.

Mark J. Massa is a deputy director in the Forward Defense practice of the Scowcroft Center for Strategy and Security at the Atlantic Council. the Scowcroft Center for Strategy and Security at the Atlantic Council. A founding member of Forward Defense, Massa supports the director in the management of the program’s strategy, budget, personnel, and impact.

Massa leads Forward Defense’s portfolio of work on strategic forces issues, including nuclear strategy, space security, missile defense, and long-range conventional strike. His writing and commentary have appeared in the Hill, Defense News, RealClearDefense, Forbes, Air and Space Forces Magazine, the National Interest, CNBC, Sky News, and CTV News.

Massa earned his MA from Georgetown University’s security studies program. He received a BS in foreign service magna cum laude from Georgetown University with a degree in science, technology, and international affairs. He was awarded honors in his major for a senior thesis on a theory of nuclear ballistic missile submarine strategy.

Abigail Rudolph is a program assistant in the Forward Defense program of the Atlantic Council’s Scowcroft Center for Strategy and Security. She contributes to the program’s defense industry and innovation portfolio.

Previously, Rudolph interned with the Cleveland Council on World Affairs where she contributed to its foreign policy forums and committees on foreign relations. As an undergraduate, she co-authored an op-ed detailing net-zero carbon emissions pathways for Ohioans; conducted an independent study evaluating the environmental impacts of war; cofounded the Women in National Security Initiative at her university; and completed her senior thesis which focused on an assessment of, and recommendations for bolstering NATO’s China policy.

She graduated with honors from Baldwin Wallace University, earning a BA in national security with a minor in sustainability.

Curtis Lee is a program assistant in the Forward Defense program of the Atlantic Council’s Scowcroft Center for Strategy and Security.

Lee is a recent graduate from Carnegie Mellon University, where he received a MS in public policy and management, a BS in policy and management, and a BA in Chinese studies. He has experience working on numerous topics in defense and foreign policy with a focus on the Indo-Pacific region and China. Lee completed his senior thesis on analyzing the supply chain vulnerabilities of US future technologies as a result of US-China decoupling policies.

In addition to his role at the Atlantic Council, Lee is currently a military intelligence officer in the US Army Reserves.

Alexander S. Young is a project assistant in the Forward Defense program of the Atlantic Council’s Scowcroft Center for Strategy and Security, where he supports the program’s defense industry, innovation, and technology work.

Young is a graduate of the London School of Economics and Political Science, where he earned a MA with merit in global politics. He previously graduated with high honors from the University of California, Santa Barbara, completing a double major in political science and global studies with emphases in international relations and the Middle East and the North Africa region. Having studied and worked in both Europe and the Middle East, Young wrote his master’s dissertation about the impacts of Russia’s full-scale invasion of Ukraine on the geopolitics of the eastern Mediterranean and its natural gas projects.

Previously, Young also worked as an English teacher in underserved communities in Israel, having taught at An-Najah Comprehensive Junior High School in Rahat and Dizengoff Elementary School in Tel Aviv.

Young’s interests include geopolitics, ethnic and religious conflict, natural resources, defense industry issues, conflict resolution, and conflict stabilization.

Acknowledgements

This report was written and prepared with the support and input of its authors, commissioners on the Atlantic Council’s Commission on Software-Defined Warfare, and the Forward Defense program of the Atlantic Council’s Scowcroft Center for Strategy and Security.

This effort was conducted under the supervision of commission director Stephen Rodriguez, Forward Defense director Clementine Starling-Daniels, and Forward Defense deputy director Mark J. Massa. Special thanks to Atlantic Council CEO Fred Kempe and Matthew Kroenig for their support of this effort.

This effort has been made possible through the generous support of Booz Allen Hamilton, CAE, Helsing, Lockheed Martin, and Second Front Systems as the foundational sponsors, as well as sponsorship from Aalyria, Accrete AI, Adarga, Domino Data Lab, Edge Case Research, Fathom 5, Fortem Technologies, Kodiak Robotics, Latent AI, Peraton, Primer AI, SAAB, Saronic, Scale AI, and Skydio.

Foundational sponsors

Sponsors

To produce this report, the authors conducted more than fifty interviews and consultations with current and former officials in the US Department of Defense, congressional staff members, allied embassies in Washington, DC, and other academic and think tank organizations. However, the analysis and recommendations presented in this report are those of the authors alone and do not necessarily reflect the views of individuals consulted, commissioners, commission sponsors, the Atlantic Council, or any US government organization. Moreover, the authors, commissioners, and consulted experts participated in a personal, not institutional, capacity.

Explore the programs

The Scowcroft Center for Strategy and Security works to develop sustainable, nonpartisan strategies to address the most important security challenges facing the United States and the world.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

The post Atlantic Council Commission on Software-Defined Warfare: Final report appeared first on Atlantic Council.

]]>
Inside Defense reports on the Commission on Software-Defined Warfare final report https://www.atlanticcouncil.org/insight-impact/in-the-news/maher-inside-defense-reports-on-the-commission-on-software-defined-warfare/ Wed, 26 Mar 2025 20:30:00 +0000 https://www.atlanticcouncil.org/?p=836503 On March 26, Theresa Maher of Inside Defense published an article highlighting the key recommendations from Forward Defense’s Commission on Software-Defined Warfare report.

The post Inside Defense reports on the Commission on Software-Defined Warfare final report appeared first on Atlantic Council.

]]>

On March 26, Theresa Maher of Inside Defense published an article highlighting the key recommendations from the final report of Forward Defense’s Commission on Software-Defined Warfare. Entitled Think tankers urge DOD to keep software procurement simple,” the article underscores the Commission’s call for a commercial-first mindset, improved data collection and sharing, and stronger collaboration between the Department of Defense (DoD) and congressional appropriation staffers.

With China outproducing the United States in military hardware, software has become essential to maintaining a competitive edge. Maher highlights the “Davidson Window,” the prediction that China may take military action against Taiwan by 2027, underscoring the urgency behind the Commission’s near-term recommendations. The report outlines how the Pentagon can leverage software practices to enhance and strengthen US defense strategies.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

Forward Defense’s Commission on Software-Defined Warfare aims to digitally transform the armed forces for success in future battlefields. Comprised of a distinguished group of subject-matter and industry commissioners, the Commission has developed a framework to enhance US and allied forces through emergent digital capabilities.

The post Inside Defense reports on the Commission on Software-Defined Warfare final report appeared first on Atlantic Council.

]]>
Cartwright and Kandasamy in COTS Journal on the Commission on Software-Defined Warfare https://www.atlanticcouncil.org/insight-impact/in-the-news/cartwright-kandasamy-cots-journal-commission-on-software-defined-warfare-report/ Wed, 26 Mar 2025 20:00:00 +0000 https://www.atlanticcouncil.org/?p=837440 On March 26, COTS Journal published an article by Gen James “Hoss” Cartwright, USMC (ret.) and Jags Kandasamy, Commissioners on Forward Defense’s Commission on Software-Defined Warfare, highlighting key recommendations from the Commission’s final report.

The post Cartwright and Kandasamy in COTS Journal on the Commission on Software-Defined Warfare appeared first on Atlantic Council.

]]>

On March 26, COTS Journal published an article by Gen James “Hoss” Cartwright, USMC (ret.) and Jags Kandasamy, Commissioners on Forward Defense’s Commission on Software-Defined Warfare, highlighting key recommendations from the Commission’s final report. The piece explores enterprise software and operational software, outlining a strategic approach to their procurement and use. The authors urge the Department of Defense to adopt both software systems to enhance warfighter protection, ensure effective equipping, and improve battlefield safety.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

Forward Defense’s Commission on Software-Defined Warfare aims to digitally transform the armed forces for success in future battlefields. Comprised of a distinguished group of subject-matter and industry commissioners, the Commission has developed a framework to enhance US and allied forces through emergent digital capabilities.

The post Cartwright and Kandasamy in COTS Journal on the Commission on Software-Defined Warfare appeared first on Atlantic Council.

]]>
DefenseNews reports on Commission on Software-Defined Warfare final report https://www.atlanticcouncil.org/insight-impact/in-the-news/defensenews-reports-on-commission-on-software-defined-warfare-final-report/ Wed, 26 Mar 2025 20:00:00 +0000 https://www.atlanticcouncil.org/?p=836340 On March 26, Courtney Albon of DefenseNews published an article analyzing the defense industry’s response to Defense Secretary Pete Hegseth’s recent directive on software acquisition, highlighting Forward Defense's Commission on Software-Defined Warfare report as a key framework for understanding the broader reforms required.

The post DefenseNews reports on Commission on Software-Defined Warfare final report appeared first on Atlantic Council.

]]>

On March 26, Courtney Albon of DefenseNews published an article analyzing the defense industry’s response to Defense Secretary Pete Hegseth’s recent directive on software acquisition, highlighting Forward Defense‘s Commission on Software-Defined Warfare report as a key framework for understanding the broader reforms required. The piece, “In the wake of Hegseth’s software memo, experts eye further change,” details how military officials and industry executives have expressed “a mix of optimism and angst” about the mandate while calling for more comprehensive reforms.

The article underscores how the commission’s report identified workforce expertise as a critical need for the Pentagon and details its recommendation that Department of Defense (DoD) develop an “extensive, connected, layered and modular software-centric training program” to establish a foundational understanding of commercial best practices. The DefenseNews piece directly quotes from the commission’s findings, noting “While the DoD has taken steps to upskill its existing workforce for the digital age, a widely acknowledged software proficiency shortfall remains.”

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

Forward Defense’s Commission on Software-Defined Warfare aims to digitally transform the armed forces for success in future battlefields. Comprised of a distinguished group of subject-matter and industry commissioners, the Commission has developed a framework to enhance US and allied forces through emergent digital capabilities.

The post DefenseNews reports on Commission on Software-Defined Warfare final report appeared first on Atlantic Council.

]]>
Exclusive on Atlantic Council Commission on Software-Defined Warfare final report published in Axios https://www.atlanticcouncil.org/insight-impact/in-the-news/exclusive-on-atlantic-council-commission-on-software-defined-warfare-final-report-published-in-axios/ Wed, 26 Mar 2025 14:00:00 +0000 https://www.atlanticcouncil.org/?p=836330 On March 26, Colin Demarest of Axios published an exclusive on the Pentagon's software-hardware balance and featured Forward Defense's Commision on Software-Defined Warfare report.

The post Exclusive on Atlantic Council Commission on Software-Defined Warfare final report published in Axios appeared first on Atlantic Council.

]]>

On March 26, Colin Demarest, future of defense reporter at Axios, published an exclusive article on the Pentagon’s software-hardware balance and featured Forward Defense‘s Commision on Software-Defined Warfare report. The article, “Exclusive: The Pentagon’s software-hardware tug of war,” highlights the commission’s conclusions on the era of “software-defined warfare” and the urgent need for the US military to enhance its software capabilities to compete with China.

The piece examines key findings from the Atlantic Council report, which was the product of eighteen months of work and over seventy interviews. According to the article, the commission concluded that the US military is still anchored to an acquisition system “ill-suited to the rapid tempo of modern technological innovation,” putting the country “at significant risk.” The report emphasizes the Department of Defense’s lack of “sufficient software expertise” and recommends establishing a software cadre by recruiting dozens of specialists to be deployed across various defense departments.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

Forward Defense’s Commission on Software-Defined Warfare aims to digitally transform the armed forces for success in future battlefields. Comprised of a distinguished group of subject-matter and industry commissioners, the Commission has developed a framework to enhance US and allied forces through emergent digital capabilities.

The post Exclusive on Atlantic Council Commission on Software-Defined Warfare final report published in Axios appeared first on Atlantic Council.

]]>
Rodriguez, Shanahan, and Sweatt cut into the stakes and opportunities of software-defined warfare on All Quiet on the Second Front podcast https://www.atlanticcouncil.org/insight-impact/in-the-news/rodriguez-shanahan-sweatt-software-defined-warfare/ Mon, 24 Mar 2025 18:00:00 +0000 https://www.atlanticcouncil.org/?p=835834 On March 24, Stephen Rodriguez, senior advisor at Forward Defense and director of FD's Commission on Software-Defined Warfare was a featured guest alongside Lt Gen Jack Shanahan on the podcast All Quiet on the Second Front, hosted by Tyler Sweatt.

The post Rodriguez, Shanahan, and Sweatt cut into the stakes and opportunities of software-defined warfare on All Quiet on the Second Front podcast appeared first on Atlantic Council.

]]>

On March 24, Stephen Rodriguez, senior advisor at Forward Defense and director of FD’s Commission on Software-Defined Warfare, was a featured guest alongside Lt Gen Jack Shanahan, a commissioner on the Commission on Software-Defined Warfare, on the podcast All Quiet on the Second Front, hosted by Tyler Sweatt, a commissioner on the Commission on Software-Defined Warfare. This episode, entitled “Software Defined Warfare with Lt. Gen. Jack Shanahan and Stephen Rodriguez,” shed light on the urgency of developing innovative strategies that will best prepare the DoD to navigate an increasingly software-driven defense landscape.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

Forward Defense’s Commission on Software-Defined Warfare aims to digitally transform the armed forces for success in future battlefields. Comprised of a distinguished group of subject-matter and industry commissioners, the Commission has developed a framework to enhance US and allied forces through emergent digital capabilities.

The post Rodriguez, Shanahan, and Sweatt cut into the stakes and opportunities of software-defined warfare on All Quiet on the Second Front podcast appeared first on Atlantic Council.

]]>
To win the AI race, the US needs an all-of-the-above energy strategy https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/to-win-the-ai-race-the-us-needs-an-all-of-the-above-energy-strategy/ Fri, 21 Mar 2025 15:11:58 +0000 https://www.atlanticcouncil.org/?p=833987 To ensure US AI leadership, the United States must harness all forms of energy, allow a level playing field, and remove red tape constraining the buildout of critical enablers, especially transmission lines and grid enhancing technologies.

The post To win the AI race, the US needs an all-of-the-above energy strategy appeared first on Atlantic Council.

]]>
The United States faces a “Sputnik moment.” Chinese firm DeepSeek claims its artificial intelligence (AI) model has achieved near-parity with US models in terms of functionality—at lower cost and energy use. While many AI analysts are skeptical of some portions of DeepSeek’s claims, particularly surrounding cost nuances, or even its ability to lower energy consumption, virtually all acknowledge that DeepSeek has made a serious technical achievement. DeepSeek’s technical breakthrough will intensify the US-China AI race, with significant economic and military stakes. While acknowledging uncertain AI-related energy demand, the United States must build substantial amounts of new electricity generation and transmission to win the AI competition with China.

To ensure US AI leadership, the United States must harness all forms of energy–while also promoting energy efficiency—allow a level playing field, and remove red tape constraining the buildout of critical enablers, especially transmission lines and grid enhancing technologies. A “some of the above” energy approach could force the United States to compromise on not only AI leadership, but also affordable electricity and other economic priorities.

The competition with China in artificial intelligence may be the defining national security challenge of our time. While AI’s exact electricity needs remain uncertain, substantial power infrastructure expansion and efficiency improvements are needed. By building new generation capacity, including advanced energy technologies, enhancing transmission, and optimizing power consumption, the United States can maintain its competitive edge in AI development. If the United States adopts a “some of the above” approach to energy, however, it will be waging the century’s most important technological fight with China with one hand tied behind its back.

About the author

Related content

Stay connected

Keep up with the latest from the Global Energy Center!

Sign up below for program highlights, event invites, and analysis on the most pressing energy issues.

Explore the program

The Global Energy Center develops and promotes pragmatic and nonpartisan policy solutions designed to advance global energy security, enhance economic opportunity, and accelerate pathways to net-zero emissions.

The post To win the AI race, the US needs an all-of-the-above energy strategy appeared first on Atlantic Council.

]]>
India’s path to AI autonomy https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/indias-path-to-ai-autonomy/ Thu, 13 Mar 2025 13:00:00 +0000 https://www.atlanticcouncil.org/?p=830704 India is taking a distinctive approach to the global race for artificial intelligence (AI) supremacy.

The post India’s path to AI autonomy appeared first on Atlantic Council.

]]>
India’s unique approach to AI autonomy: A three-pillar strategy

India is taking a distinctive approach to the global race for artificial intelligence (AI) supremacy. While the United States and China focus on AI for economic dominance and national security, India’s vision revolves around AI autonomy through the development of homegrown AI solutions that are closely linked to its development goals.1 This approach seeks to position India as a prominent global AI leader through a three-pillar strategy that distinguishes it from other major nations. India’s vision of AI autonomy is based on:

  • Democratizing AI through open innovation: Leading the development of open-source models and platforms that make AI more accessible and adaptable to India’s local needs including the Bhashini platform, which incorporates Indian languages in large language model processing, and the iGOT Karmayogi online learning platform for government training.
  • Public-sector-led development applications: Implementing AI solutions to address critical development challenges through government-led initiatives in healthcare, agriculture, and education, ensuring that technology meets societal needs.
  • Global leadership in AI for sustainable development: Championing the integration of AI to achieve the Sustainable Development Goals2 (SDGs) on a global scale while pushing ethical AI governance and South-South collaboration.

This strategy seeks to establish India as a global AI leader while addressing pressing social issues, closing economic gaps, and improving the quality of life for its diverse population of over 1.3 billion people.

India’s journey toward AI autonomy goes beyond technological independence; it creates a narrative in which innovative AI technologies drive inclusive growth. This philosophy is reflected in the “India AI” mission and its National Strategy for AI, which positions India as both an adopter and developer of AI technologies and a global hub for ethical and development-oriented AI innovation.3

India’s AI landscape: A vision of innovation and strategy

The Indian AI ecosystem is a dynamic landscape shaped by government initiatives, private-sector innovation, and academic research. There has been a notable increase in AI-focused start-ups in recent years, with the National Association of Software and Service Companies (NASSCOM) reporting over 1,600 in 2023.4 This growing sector highlights India’s technological capabilities and entrepreneurial spirit in tackling local challenges.

Several strategic government initiatives at the core of this ecosystem have paved the way for India’s advancements in AI. The India AI mission, launched in 2023, is a government initiative to build a comprehensive ecosystem to foster AI innovation across various sectors in India. Spearheaded by the Ministry of Electronics and Information Technology (MeitY), it focuses on developing AI applications to address societal challenges in healthcare, education, agriculture, and smart cities while promoting responsible and ethical AI development.5

IndiaAI reflects the country’s ambitions to become a global AI powerhouse and is supported by the National AI Strategy, created by the National Institution for Transforming India (NITI Aayog). The strategy provides a comprehensive road map for AI adoption in those sectors targeted by IndiaAI.6

What sets these initiatives apart from AI strategies in other countries is their emphasis on using AI for social good. For instance, the government of India organized the Responsible AI for Social Empowerment (RAISE) initiative in 2020,7 preceding the current AI hype. This demonstrates India’s commitment to ethical AI development, and such initiatives align with India’s National Development Agenda 2030, positioning AI as a driver of economic growth and a crucial enabler for achieving SDGs.8

Democratizing AI through open innovation

India is making significant progress in promoting open-source AI development, fostering inclusivity and collaboration within the global AI community. Open-source frameworks, driven by collaborative innovation, offer transparency, interoperability, and scalability—essential qualities for a diverse country like India.

The Bhashini initiative, led by the MeitY, exemplifies this commitment by leveraging open-source frameworks to build natural language processing (NLP) models that support twenty-two official Indian languages and numerous dialects. This project goes beyond basic language processing; it signifies India’s dedication to AI democratization by making these models and datasets freely available to developers and start-ups.9

The iGOT Karmayogi platform showcases the scalability of open-source AI solutions to improve digital literacy in government. Designed to upskill twenty million employees, it utilizes open-source AI tools to provide personalized learning pathways, reducing costs while ensuring continuous improvement based on user feedback.10

Leading academic institutions like the Indian Institute of Science (IISc) in Bangalore and the Indian Institute of Technology (IIT) Madras actively contribute to open-source AI research through global public platforms such as TensorFlow and Hugging Face.11 Their work encompasses various fields, including computer vision for healthcare, autonomous vehicles, and environmental monitoring.12

For Indian start-ups, using open-source AI models or systems provides significant benefits such as lower costs, greater customization flexibility, access to a larger developer community, the ability to tailor models to specific Indian languages and dialects, and enhanced data security by allowing deployment on premises, making it ideal for building localized AI solutions while maintaining control over sensitive data.

In August 2024, Meta’s open-source Llama model reached a significant milestone of 350 million cumulative downloads since the release of Llama 1 in early 2023.13. India has emerged as one of the top three markets globally for this model.14 Several Indian start-ups and well-known consumer apps, including Flipkart, Meesho, Redbus, Dream11, and Infoedge, have announced their integration of Llama into their applications.

Additionally, IBM Elxsi, a partnership between IBM and Tata Elxsi, India’s largest technology company, focuses on designing local digital engineering solutions, like using open-source models to develop AI-powered edge network solutions for rural and remote areas, enhancing AI accessibility while reducing latency and energy consumption.15

The democratization of AI through open-source initiatives is critical to India’s development trajectory and technological autonomy. This approach enables rapid, cost-effective AI adoption across India’s diverse sectors and regions, with tangible results already visible: The development cycle for AI solutions has been reduced from years to months, as evidenced by the rapid deployment of language models via the Bhashini initiative, which now serves millions in their native languages. Unlike the West, where most AI development is primarily driven by private companies with proprietary technologies, India’s adoption of an open-source-first approach has led to an independent ecosystem in which government initiatives, academic institutions, and private-sector innovations coexist.

The public-sector role in developing applications to address India’s unique challenges

The profound socioeconomic challenges that India’s 1.3 billion people face have fundamentally shaped its approach to AI.

India faces critical healthcare challenges: 70 percent of healthcare infrastructure is concentrated in urban areas serving only 30 percent of the population, and the doctor-patient ratio stands at 1:1,511, far below the World Health Organization’s recommended 1:1,000.16

The education sector struggles with fundamental gaps, as 250 million Indians lack basic literacy skills, with only 27 percent having access to internet-enabled devices for online learning, further complicated by the country’s diversity of twenty-two official languages and 1,600 dialects.17

Agricultural challenges are particularly acute, with the sector employing 42 percent of the workforce but contributing only 18 percent to gross domestic product; 86 percent are small and marginal farmers with less than two hectares of land, and 40 percent of food production is lost due to inefficient supply chains.18

Financial inclusion remains a significant barrier to development, with 190 million unbanked adults, 70 percent of rural transactions being cash-based, and a stark digital gender divide where only 33 percent of women have mobile internet access compared to 67 percent of men.19

The size and complexity of these challenges necessitate innovative technological solutions that are both scalable and relevant to India’s specific issues.

India’s development-focused AI vision strategically responds to these pressing issues. Rather than viewing AI solely as a tool for economic competition or technological advancement, India has positioned it as a transformative tool for closing fundamental development gaps.

AI is closing critical gaps in access and the quality of care in healthcare. Through the government’s eSanjeevani platform—India’s national telemedicine service offering patients access to medical specialists and doctors remotely via smartphones—has been revolutionary, with over one hundred million teleconsultations in 2023 and the aim of closing the urban-rural healthcare gap.20 The platform developed AI/machine learning models to improve data collection, quality of care, and doctor-patient consultations on eSanjeevani.21 The Indian Council of Medical Research’s collaborations with AI start-ups for disease prediction models in tuberculosis and diabetes paved the way for preventive healthcare interventions.

The agricultural sector is experiencing an AI revolution, driven by government initiatives, particularly through two key platforms. The mKisan portal gives more than fifty million farmers personalized SMS access to critical agricultural information, while the Agristack initiative lays the groundwork for precision agriculture by providing AI-powered advisory services for crop planning, pest control, and weather forecasting. The Indian Meteorological Department’s use of AI has improved monsoon forecast accuracy by 20 percent, significantly impacting agricultural planning.22

In education, state governments have contracted with Embibe, a company that offers AI-powered learning for basic education to bridge the learning gaps and expand access to quality education. It identifies gaps in knowledge and creates content that addresses them by studying data from student interactions. FutureSkills Prime, an initiative of the National Association of Software and Service Companies, provides AI skills training—and has developed a large AI talent pool, with more than 2.5 million technology professionals trained in AI in India. According to the 2025 Global Workplace Skills Study by Emeritus, 96% of Indian professionals are using AI and generative AI tools at work, significantly higher than the 81% in the US and 84% in the UK. This workforce advantage has made India a preferred destination for global companies seeking skilled AI professionals.

India’s public-sector-led approach to AI development uniquely integrates technology with development priorities. The government’s strategic leadership in deploying AI solutions in healthcare, agriculture, education, and finance demonstrates a unique model in which technology acts as a force multiplier for development efforts. Unlike many developed countries, where private-sector innovation drives AI advancement, India’s government-led initiatives ensure that AI solutions address fundamental development challenges on a large scale.

By combining scale, accessibility, and local relevance, India’s public-sector leadership in AI deployment is a unique model for other developing countries facing similar challenges, demonstrating that technology can effectively accelerate inclusive development when guided by clear public-policy goals.

Shaping tomorrow: India’s position in global AI leadership and development

India’s global AI leadership position is uniquely shaped by proactive government policies and collaborative initiatives. As a founding member of the Global Partnership on Artificial Intelligence (GPAI), India has used its presidency in 2024 to advance key priorities such as democratizing access to AI skills, addressing societal inequities, promoting responsible AI development, and applying AI in critical sectors such as agriculture and education.

AI plays a critical role in India’s vision to achieve all seventeen SDGs by 2030. India also aims to be one of the top three countries in AI research, innovation, and application by 2030, which reflects a larger ambition: to create a more equitable and sustainable global AI landscape. This unique approach balances technological autonomy and inclusive development, by aligning AI initiatives with socioeconomic priorities to address India’s unique challenges.

However, the expansion of India’s AI ecosystem faces several critical challenges, including the demand for an advanced AI compute infrastructure, developing accessible AI tools, ensuring data privacy, and mitigating algorithmic bias at scale. Addressing these issues requires multistakeholder collaboration between the government, local industry leaders, and academic institutions.

As India progresses in its AI journey, its experience provides valuable insights into how to use AI to drive socioeconomic development. The country’s development-focused approach to AI adoption and governance may serve as a model for other developing countries looking to capitalize on AI’s potential for inclusive growth.

About the authors

Mohamed “Mo” Elbashir is a nonresident senior fellow at the Atlantic Council’s GeoTech Center, as well as Meta Platforms’ global infrastructure risk and enablement manager. With over two decades of experience, he specializes in global technology governance, regulatory frameworks, public policy, and program management.

Kishore Balaji Desikachari is the executive director for government affairs at IBM India/South Asia. With over thirty years of leadership experience at Microsoft, Intel, and Hughes, he is a recognized regional policy commentator on AI, quantum computing, semiconductors, trade, and workforce strategies.

Related content

Explore the program

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

1    World Economic Forum, “Sovereign AI: What It Is, and 6 Ways States Are Building It,” September 10, 2024, https://www.weforum.org/stories/2024/04/sovereign-ai-what-is-ways-states-building/.
2    United Nations member states adopted these SDGs in 2015. See “The 17 Goals,” United Nations Department of Economic and Social Affairs, accessed February 20, 2025, https://sdgs.un.org/goals
3    The National AI Portal of India,” INDIAai, n.d., https://indiaai.gov.in/.
4    “Indian AI Ecosystem: State of the Industry Report,” National Association of Software and Service Companies, December 2023, https://nasscom.in/knowledge-center/publications/weathering-challenges-indian-tech-start-landscape-report-2023.
5    Ministry of Electronics and Information Technology, “India AI Mission: Vision and Implementation Strategy,” Government of India, 2023, https://www.meity.gov.in/indiaai.
6    “National Strategy for Artificial Intelligence: Updated Framework,” National Institution for Transforming India (aka NITI Aayog), Government of India, 2021, https://niti.gov.in/national-strategy-artificial-intelligence.
7    “Raise.” 2020. IndiaAI. 2020. https://indiaai.gov.in/raise.
8    “An Overview of SDGs,” NITI Aayog, n.d., https://www.niti.gov.in/overview-sustainable-development-goals.
9    “Bhashini,” Government of India, 2024, https://bhashini.gov.in/.
10    Department of Personnel and Training, “iGOT Karmayogi: Transforming Capacity Building in Government,” Government of India, 2023, https://igot.gov.in/.
11    “Indian Institute of Science,” n.d., https://iisc.ac.in/; and “Indian Institute of Technology Madras, Tamilnadu,” 2019, https://www.iitm.ac.in/.
12    Bharat, “Top 7 Computer Vision Research Institutes in India,” OpenCV, January 3, 2024, https://opencv.org/blog/computer-vision-research-in-india/.
13    “With 10x Growth Since 2023, Llama Is the Leading Engine of AI Innovation,” Meta, 2024, https://ai.meta.com/blog/llama-usage-doubled-may-through-july-2024/
14    Supreeth Koundinya, “How I Met Your Llama,” Analytics India Magazine, October 26, 2024, https://analyticsindiamag.com/ai-origins-evolution/how-i-met-your-llama/.
15    “Role of Edge AI in Enhancing Real-Time Data Processing,” Hindustan Times (as shown on Tata Elxsi website), December 12, 2024, https://www.tataelxsi.com/news-and-events/role-of-edge-ai-in-enhancing-real-time-data-processing.
16    “Health and Family Welfare Statistics in India 2023,” Ministry of Health and Family Welfare, Government of India, 2023, https://mohfw.gov.in/?q=publications-11; and Sakthivel Selvara et al., India Health System Review, World Health Organization Regional Office for South-East, Health Systems in Transition 11, no. 1 (2022), https://iris.who.int/handle/10665/352685.
17    ASER Centre, “Annual Status of Education (Rural) Report 2023,” Pratham Education Foundation, January 2024, https://asercentre.org/wp-content/uploads/2022/12/ASER-2023-Report-1.pdf
18    “Analytical Reports,” PRS Legislative Research, n.d., https://prsindia.org/policy/analytical-reports/state-agriculture-india.
19    Reserve Bank of India–Annual Report,” 2024, Rbi.org.in, https://www.rbi.org.in/Scripts/AnnualReportPublications.aspx?Id=1404.
20    “Esanjeevani,” n.d., https://esanjeevani.mohfw.gov.in/.
21    “Clinical Decision Support System (CDSS) for Esanjeevani,” MIT SOLVE, 2022, https://solve.mit.edu/challenges/heath-in-fragile-contexts-challenge/solutions/75300.
22    “AI Helps Improve Predictability of Indian Summer Monsoons,” Department of Science & Technology,” 2023, https://dst.gov.in/ai-helps-improve-predictability-indian-summer-monsoons.

The post India’s path to AI autonomy appeared first on Atlantic Council.

]]>
Emerging technology policies and democracy in Africa: South Africa, Kenya, Nigeria, Ghana, and Zambia in focus https://www.atlanticcouncil.org/in-depth-research-reports/report/emerging-technology-policies-and-democracy-in-africa-south-africa-kenya-nigeria-ghana-and-zambia-in-focus/ Mon, 10 Mar 2025 12:30:00 +0000 https://www.atlanticcouncil.org/?p=830835 How are African nations navigating the governance of AI, digital infrastructure, and emerging technologies? Emerging Technology Policies and Democracy in Africa: South Africa, Kenya, Nigeria, Ghana, and Zambia in Focus examines how five key countries are shaping regulatory frameworks to drive innovation, protect digital rights, and bridge policy gaps in an evolving tech landscape.

The post Emerging technology policies and democracy in Africa: South Africa, Kenya, Nigeria, Ghana, and Zambia in focus appeared first on Atlantic Council.

]]>

Executive summary

Africa is increasingly asserting its participation in the advancement of emerging technologies by engaging in active dialogues and devising roadmaps for the development, deployment, and regulation of these technologies. However, strategies to employ emerging technologies vary widely both in levels of progress as well as regulatory mechanisms. This report explores how five African countries—South Africa, Kenya, Nigeria, Ghana, and Zambia—are strategically navigating the governance of new technologies to enrich their citizens’ lives while mitigating potential risks. It focuses on three key emerging technology domains, namely: connectivity, digital public infrastructure, and artificial intelligence (AI).

Beginning with an analysis of the foundational digital technology policies around data protection and governance and cybersecurity, the country reviews highlight the current landscape of laws, and strategies governing each of the emerging technologies of interest. By exploring the strengths and weaknesses of each country’s policy landscape across these technology domains, the report offers insights into prospects and challenges in harnessing emerging technologies for societal good.

The report finds that governments are generally optimistic about the potential impact of emerging technologies on economic development in their respective countries. This is reflected in the large public investment in technology infrastructure, promotion of innovative ecosystems, and the integration of information and communication technologies (ICTs) into e-governance and e-services toward a holistic digitalized economy and society. The countries’ multistakeholder approaches highlight the need for responsible governance while promoting active private-sector engagement for the public good.

Nigeria, South Africa, Kenya, and Ghana were found to have comparatively robust policies for each emerging technology examined, or at least—as is the case with Kenya—documentation or drafts in the form of gazettes and public consultation documents. Government efforts are more prominent in the AI domain, given the increased attention it has garnered lately. However, these frameworks are hampered by limited implementation capacities, poor infrastructure, policy fragmentation and overlap, low digital literacy levels, and a growing digital divide. Zambia on the other hand, while having strong aspirations to become an ICT-enabled knowledge economy, lacks dedicated policies pertaining to emerging technologies. Although the country’s data-protection laws, intellectual property, cyber security, and consumer protection provide a foundational framework, more updated regulations are required to keep pace with the speed at which emerging technologies are playing an increasingly pivotal role in citizens’ daily lives.

A SWOT (i.e., strengths, weaknesses, opportunities, and threats) analysis of the broader digital-technologies sector across these countries reveals some universal themes. Strengthwise, governments are generally proactive and enthusiastic about engaging new technology issues, and ICT authorities tend to adapt quickly to new developments by publishing subsidiary laws, releasing draft statements, or convening multistakeholder workshops, where national policy frameworks are absent. An overarching rather than specific sectoral or technology-domain approach also drives national technology pursuits, where for example, all the five countries examined have a national ICT/digital economy strategy which predates and already makes foundational provisions for emerging technology policies. Policy-formulation processes were driven by stakeholder engagement and public consultations, as seen in regular calls for contributions and multistakeholder convenings leading up to policy enactment. Yet huge disparities were observed within countries, where rural and marginalized urban communities, as well as women, are left behind by governmental technology ambitions. This calls for updated policy frameworks and strategies that emphasize inclusion and other sociopolitical considerations to avoid deepening inequities.

For Africa to leverage emerging technologies for socioeconomic development while maintaining accountable and transparent systems, legislative frameworks must be streamlined alongside strong institutional integration to ensure effective enforcement. It is imperative that policymakers develop a strong understanding of emerging technologies to enhance their capacities for developing comprehensive policies to address them. Equally important is raising public awareness to protect the African people’s digital rights and foster safe digital environments.

About the authors

Ayantola Alayande is a Researcher at the Global Center on AI Governance. There, Ayantola works on the African Union Continental AI Strategy and the African Observatory on Responsible AI. He is also a researcher at the Bennett Institute for Public Policy at the University of Cambridge, where he focuses on industrial policy and the future of work in the public sector.

Samuel Segun, PhD is a Senior Researcher at the Global Center on AI Governance. He is also an AI Innovation & Technology consultant for the United Nations Interregional Crime and Justice Research Institute (UNICRI), where he works on the project ‘Toolkit for Responsible AI Innovation in Law Enforcement’.

Leah Junck, PhD is a Senior Researcher at the Global Center on AI Governance. Her work explores human-technology experiences. She is the author of Cultivating Suspicion: An Ethnography and Like a Bridge Over Trouble: An Ethnography on Strategies of Bodily Navigation of Male Refugees in Cape Town.

Explore the program

The Atlantic Council’s Digital Forensic Research Lab (DFRLab) has operationalized the study of disinformation by exposing falsehoods and fake news, documenting human rights abuses, and building digital resilience worldwide.

The post Emerging technology policies and democracy in Africa: South Africa, Kenya, Nigeria, Ghana, and Zambia in focus appeared first on Atlantic Council.

]]>
Grundman on Investor’s Business Daily on technological innovation in the defense sector https://www.atlanticcouncil.org/insight-impact/in-the-news/grundman-on-investors-business-dailey-on-software/ Wed, 05 Mar 2025 20:00:00 +0000 https://www.atlanticcouncil.org/?p=831050 On March 5, Steven Grundman, senior fellow at Forward Defense, was featured on Investor’s Business Daily in a segment of their Growth Stories.

The post Grundman on Investor’s Business Daily on technological innovation in the defense sector appeared first on Atlantic Council.

]]>

On March 5, Steven Grundman, senior fellow at Forward Defense, was featured on Investor’s Business Daily in a segment of their Growth Stories, “Palantir Is Shaking Up The Defense Sector. What Comes Next As The AI Revolution Heads To The Front Lines?” Grundman discusses how software is emerging as a key differentiator in military programs.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

The post Grundman on Investor’s Business Daily on technological innovation in the defense sector appeared first on Atlantic Council.

]]>
Meyer interviewed in DW on the Trumps administration’s priorities for technology regulation and competition https://www.atlanticcouncil.org/insight-impact/in-the-news/meyer-interviewed-in-dw-on-the-trumps-administrations-priorities-for-technology-regulation-and-competition/ Mon, 03 Mar 2025 14:34:32 +0000 https://www.atlanticcouncil.org/?p=829684 On February 27, Joel Meyer, nonresident senior fellow in the Scowcroft Center’s GeoStrategy Initiative, was interviewed by DW after the Artificial Intelligence (AI) Action Summit in Paris. He argues that US Vice President JD Vance’s speech at the summit serves as a “wake-up-call” for European regulators to foster an “AI ecosystem that will allow Europe […]

The post Meyer interviewed in DW on the Trumps administration’s priorities for technology regulation and competition appeared first on Atlantic Council.

]]>

On February 27, Joel Meyer, nonresident senior fellow in the Scowcroft Center’s GeoStrategy Initiative, was interviewed by DW after the Artificial Intelligence (AI) Action Summit in Paris. He argues that US Vice President JD Vance’s speech at the summit serves as a “wake-up-call” for European regulators to foster an “AI ecosystem that will allow Europe to catch up” in technological innovation. He notes opportunities for mutual benefit if the United States and its allies partner in AI development.

I think there is still room for a collaborative approach. Because if it is ‘only the US’ or ‘only Europe,’ we will not be able to compete with the scale that China, its economy, and [its] data bring to the AI race.

Joel Meyer

The post Meyer interviewed in DW on the Trumps administration’s priorities for technology regulation and competition appeared first on Atlantic Council.

]]>
WIn Fellowship mentioned in Arab News on its event discussing AI’s role in driving economic growth in Saudi Arabia https://www.atlanticcouncil.org/insight-impact/in-the-news/win-fellowship-mentioned-in-arab-news-on-its-event-discussing-ais-role-in-driving-economic-growth-in-saudi-arabia/ Tue, 25 Feb 2025 18:13:45 +0000 https://www.atlanticcouncil.org/?p=828401 The post WIn Fellowship mentioned in Arab News on its event discussing AI’s role in driving economic growth in Saudi Arabia appeared first on Atlantic Council.

]]>

The post WIn Fellowship mentioned in Arab News on its event discussing AI’s role in driving economic growth in Saudi Arabia appeared first on Atlantic Council.

]]>
China’s Year of the Snake is off to a good start, thanks in part to Trump https://www.atlanticcouncil.org/content-series/inflection-points/chinas-year-of-the-snake-is-off-to-a-good-start-thanks-in-part-to-trump/ Tue, 25 Feb 2025 12:00:00 +0000 https://www.atlanticcouncil.org/?p=828281 From an AI breakthrough to an apparent diplomatic recalibration by Washington, Beijing seems to be going from strength to strength in the new year.

The post China’s Year of the Snake is off to a good start, thanks in part to Trump appeared first on Atlantic Council.

]]>
China has had a remarkably good month.

While US President Donald Trump’s first weeks in office have his allies reeling and Americans uncertain as they sort out his torrent of executive orders, Beijing is orchestrating a masterclass of reinvention and resolve.

It all began in late January with artificial intelligence (AI) startup DeepSeek’s surprising debut, which jolted US stock markets. That was followed this past week by Chinese President Xi Jinping’s public mending of fences with his country’s sidelined business elites, and an ongoing surge in Chinese capital-market prices, driven by tech stocks. And that has been accompanied by a surprisingly cordial beginning with the new Trump administration despite Beijing’s unsettling military assertiveness.

To be sure, none of China’s underlying problems have vanished. Its economy is growing too slowly, and its debt issues continue to cast a cloud over the property sector. Few serious experts think that the Chinese economy will reach its 5 percent growth target this year. In addition, Beijing’s demographic problems are a generational challenge. And Xi’s insistence on strict Chinese Communist Party control remains a disincentive for investment.

At this moment, two stories are being told about China, RAND researcher Gerard DiPippo pointed out in a recent analysis. China is racing ahead as an economic and technological powerhouse, and China’s economy is slowing under the weight of its mounting problems. “Although these narratives appear contradictory, both are true,” DiPippo argues.

Seek the limelight

No development marked a more powerful shift in the global mood toward China than the release and immediate success of DeepSeek’s reasoning model. Once shrouded in mystery, the breakthrough is now the symbol of China’s potential to rival the United States at less cost and despite export controls on the most advanced US microchips. 

In areas where many have assumed that US companies are in the lead—AI, data analytics, quantum computing—Beijing has declared “game on.” Countries that have been betting tens of billions of dollars on the United States’ technological edge are now left wondering just how quickly China will be able to close any technological gap.  

If DeepSeek caught investors off guard about Chinese capability to compete on AI, Xi surprised them again on February 17 with a high-profile, deeply choreographed meeting with Chinese business leaders. It was a shift by Xi, who had sidelined some of these leaders in recent years as he consolidated power, sensing that their growing success might be a threat to party and state control.

The most unexpected attendee at Xi’s meeting was Alibaba co-founder Jack Ma, who had fallen afoul of the party after he publicly complained about overregulation in October 2020. It remains a safe bet that Xi doesn’t intend to cede state control to the private sector, but his urgent need for economic growth means that he must provide it more leash. At the same time, he is sending a message to markets.  

Take the lead

Global investors have responded with one of the biggest market surprises of 2025: the comeback of the Chinese tech sector, which many global investors had abandoned in recent years due to Xi’s regulatory crackdown.

The Hang Seng Tech Index, which tracks Chinese stocks traded in Hong Kong, surged 6.5 percent alone this past Friday. Shares of Alibaba, now with more official blessing, rose 15 percent that same day after robust sales growth. Since the beginning of the year, Chinese stocks have outperformed many of their US counterparts. 

Many global investors are now willing to place bets on Beijing’s new direction, even as they begin to hedge on uncertainties related to the Trump administration’s actions and potential US inflation.

Washington’s recalibration

Trump himself is fueling this change of mood regarding China. Having threatened tariffs as high as 60 percent against China during his presidential campaign, his softening of tone as president has soothed Chinese nerves. Trump’s gestures have included an invitation to Xi to attend his inauguration, an executive order that has brought a reprieve to the banning of TikTok, and an imposition of a relatively modest 10 percent tariff on China that Chinese leaders seem to have received with more relief than disdain.

If relations between China and the United States in recent months seemed to be a powder keg ready to ignite, then Trump appears to have pulled the fuse. He has done this through his willingness to engage with Beijing and his apparent lack of concern for Xi’s gathering autocratic challenge to US global leadership, including China increasingly acting in concert with Russia, Iran, and North Korea.

Trump’s dramatic recalibration this past week regarding Russian President Vladimir Putin further boosted Xi—someone whom Trump, only days earlier in Davos, had blamed for complicity in Russia’s war against Ukraine. While the Biden administration often warned that losing Ukraine would only encourage China in its aspirations to gain control of Taiwan, the Trump administration appears less convinced of the connection.  

China also rightly senses a potential opening among Washington’s European allies and even with Ukraine. Despite China’s support for Putin’s war, Ukrainian President Volodymyr Zelenskyy has been careful not to close the door to engaging with Beijing. China has even signaled its willingness to provide troops for a peacekeeping role in Ukraine, while Trump has ruled out the use of US soldiers for such purposes.

Beijing’s maneuvers

China may sense another opening, as well. Beijing’s military moves in the past month underscore that Xi sees little downside to greater military assertiveness in the first days of the new Trump administration.

Last week, New Zealand’s government said that the Chinese navy held live-fire drills in international waters off its coast. This came just a day after Chinese vessels staged a similar drill off Australia’s southwestern coast that forced some commercial airlines to divert their flights.

During the recent Lunar New Year celebrations, China’s People’s Liberation Army increased military maneuvers around Taiwan. And on February 18, a Chinese navy helicopter flew within ten feet of a Philippine patrol plane in an effort to force it out of disputed skies.

“You are flying too close, you are very dangerous,” the Philippine pilot warned by radio.

It adds up to a remarkable start of the year for China. The emergence of DeepSeek, Xi’s olive branch to Ma and others, an ongoing market rally led by tech stocks, Trump’s conciliatory approach amid Beijing’s muscular military posturing—all contribute to increased Chinese confidence in 2025, which is the Year of the Snake, symbolizing transformation and the shedding of negativity.


Frederick Kempe is president and chief executive officer of the Atlantic Council. You can follow him on X: @FredKempe.

This edition is part of Frederick Kempe’s Inflection Points newsletter, a column of dispatches from a world in transition. To receive this newsletter throughout the week, sign up here.

The post China’s Year of the Snake is off to a good start, thanks in part to Trump appeared first on Atlantic Council.

]]>
What’s missing from the AI debate? Patience. https://www.atlanticcouncil.org/blogs/new-atlanticist/whats-missing-from-the-ai-debate-patience/ Tue, 18 Feb 2025 18:44:14 +0000 https://www.atlanticcouncil.org/?p=826480 The AI sector is evolving quickly, fueled by a self-reinforcing cycle of investment, commentary, and ambition. In this race for compute, patience is important to sorting out sustainable innovation from speculative excess.

The post What’s missing from the AI debate? Patience. appeared first on Atlantic Council.

]]>
Artificial intelligence (AI) is evolving quickly, but the forces driving its development—computing infrastructure, model design, and the economics of deployment—are far from settled. There are no magic beans, no single indicators. Rather, there are a handful of strong signals that interact with each other such that interpreting one in isolation can easily lead to mistaken predictions on where AI is headed. Assuming that more compute power inevitably produces better models, for example, ignores that different companies and lines of research are taking different paths to solve the “systems problem” of AI. 

Unfortunately, too many institutions—investors chasing returns, policymakers rushing to position themselves, and media outlets eager to shape the narrative—currently mistake motion for progress. In dollar terms, around 50 percent of all new venture capital investments went into AI service and related companies in 2024, including more than 60 percent of all activity in the fourth quarter and “six of the top ten deals.” 

Balancing on a bubble

There is an AI bubble, and it is not just financial; it is also intellectual and political, fueled by a self-reinforcing churn of investment, commentary, and ambition. Discussions of AI leadership increasingly rely on confident expressions of urgency. Foreign policy outlets, high-profile thinkers, and former senior government officials churn out breathless analyses with dire warnings about how the United States might lose a “race for global AI primacy” and other claims of national fragility. An evolving set of technologies, only a fraction of whose potential is expressed through the chatbots visible to most users, is dropped into clichéd narratives of a narrowing window in which the United States must act decisively or risk losing an edge. While that does not mean the technology itself is a mirage, even Nvidia, which has experienced a generational shift in market valuation at warp speed, is suffering from changes driven partly by an unsustainable momentum and narrative.

The result is a cycle in which financial momentum and technological progress are often conflated, making it harder to distinguish durable innovation from speculative excess. This excess was on display recently with the news that a Chinese company, DeepSeek, had released a high-performing new model trained on only a fraction of the computing power of its competitors. Much of the reporting demonstrated a mix of confusion, limited information, and an early misreading of DeepSeek’s accompanying paper, all of which was then recycled by outlets further up the line.

In the responses to the announcement of DeepSeek’s r1 model, as with so many other single technical accomplishments, the missing ingredient is often patience. Sustainable breakthroughs take time; over-indexing on a single innovation will distort policy in ways that could harm both users and future tech development. Distinguishing meaningful innovation from hype requires scrutiny; failure to do so risks channeling ever increasing attention and capital toward dead ends and emphasizing commercialization over real research and design breakthroughs. The market, the policy environment, and even the technologies themselves demand a more disciplined approach and greater scrutiny from both the public and private sectors.

The career-breaking volumes of capital being poured into AI hardware and infrastructure should raise sharp questions about sustainability as firms invest at a scale detached from clear paths to profitability. Meanwhile, the broader AI ecosystem is shaped by investors, corporate leaders, and public voices with strong incentives to sustain a narrative of inevitable success. This alignment of interests has blurred the distinction between technological advancement and market exuberance.

The systems problem in AI

There is no fixed formula governing the relationship between computing resources, training techniques, and AI model performance. Computing power, the design and bandwidth of connections between chips and data centers, and the speed and size of memory are not fixed. Moreover, they do not function independently of the training methods and data-labeling techniques used to produce AI systems or the different technical approaches to how models are deployed, combined, and queried. Together, all of these choices produce an AI system.

As hardware scales and software optimizations improve, model performance shifts in ways that are difficult to predict. The field has broadly converged on the need for vast datasets and, in many (but certainly not all) cases, ever-larger parameter counts. Yet, the simple equation of “more computing power equals better models” overlooks the complexities that matter most. AI development is not a matter of turning up the frame rate on a video game—it is a systems problem. That is, it requires understanding the behavior of many different technologies and how they interact both in theory and in the harsh light of practice.

There are only a handful of infrastructure developers, operators, model researchers, and builders who are driving the current era of AI. Their technical approaches and “bet the farm” investments represent a range of assumptions, not an absolute consensus. For example, more decisive than just the speed of compute is how models are broken up and distributed across the computing infrastructure of AI—chips, racks, and entire data centers—during training. Differences in managing this distribution help define key competitive lines among designers, such as Nvidia, AMD, and Intel, and among cloud providers, such as Microsoft, Google, and Amazon.

Where it is that generative AI models think, referred to as “inference,” is another fault line, with Apple and Qualcomm prioritizing on-device processing while “pure” AI firms, such as OpenAI, Anthropic, and DeepSeek, build models that depend on centralized cloud infrastructure. Decisions around context—how much information a model retains across interactions or shares with other models in an ensemble—influence both performance and cost. Profitability remains an open question, as well: the economic calculus of AI looks different for a company like Meta, for which the model itself is not the secret sauce but a channel to other technology products, than for a firm like Renaissance Technologies, which focuses on AI for its own specialized, high-margin applications.

These questions are in flux, and the answers will not be the same for every model or company. Nvidia has become the defining technology investment of the moment, with institutional investors, hedge funds, and retail traders all heavily exposed to its trajectory. But its meteoric rise in this decade stems from choices in the last two. In 2007, Nvidia launched CUDA, which is now a market-defining software package. In 2019, it acquired Mellanox, a high-performance chip design and technology company. Both decisions helped Nvidia, which was misunderstood by many as a hardware firm for Bitcoin enthusiasts and gamers, make a deliberate turn to the data center.

Patience is all you need

The DeepSeek episode helps to highlight the importance of patience in the analysis of new technology developments in a domain that is still unsettled not just in terms of what problems the technology is being used to solve but how. It is important for policymakers to be able to access analysis that prizes long-term understanding and maps the growth of ecosystems around a technology instead of hyping acts of singular invention.

First, analysis of AI technological developments must be put into context. AI is a systems problem: unprecedented speed on a chip creates new bottlenecks in networking bandwidth; larger models demand more memory; new training techniques mean building new models and new training time.

Second, policy analysts have to acknowledge uncertainty in how the benefits of different AI capabilities are combined into expressions of national power. The potential fallacy of the “arms race” metaphor is that all participants have a shared understanding of how those arms might be employed. But scholars have already highlighted how fragile and divergent that understanding can be, even for relatively mature technologies. Being first to claim illusory control confers little lasting strategic advantage.

Finally, policymakers need to recognize the distorting effects of a bubble on the state of the debate. There is no one path of development in AI, no single end state or “win condition.” On its best day, what is presented as AI is a fractious basket of commercial technologies, open- and closed-source software, computing infrastructure, and research projects being combined in ever more clever permutations. “Winning” looks wildly different across companies, countries, and user communities. The distorting effects of the bubble appear to create certainty where there is little to be had, a sense of urgency in all things when it is not always warranted. 

Patience is the missing ingredient, the real disruptive trend, in the systems problem for AI. Policy requires sustained attention for effective outcomes and mitigation of risk. Markets demand greater scrutiny of lurid claims and medium-term trajectory. But distinguishing signal from noise remains essential.

Even for the most intelligent systems—human and artificial—that still takes time.


Trey Herr is senior director of the Cyber Statecraft Initiative (CSI), part of the Atlantic Council Technology Programs, and assistant professor of global security and policy at American University’s School of International Service.

Disclosure: Several companies mentioned in this article—Nvidia, Meta, Google, Amazon, and Microsoft—are donors to the Atlantic Council Technology Programs. This article, which did not involve these donors, reflects the author’s views.

The post What’s missing from the AI debate? Patience. appeared first on Atlantic Council.

]]>
At the Paris AI Action Summit, the Global South rises https://www.atlanticcouncil.org/blogs/new-atlanticist/at-the-paris-ai-action-summit-the-global-south-rises/ Thu, 13 Feb 2025 17:39:18 +0000 https://www.atlanticcouncil.org/?p=825291 The summit reflected a growing consensus that AI’s future must be both innovative and rooted in shared prosperity.

The post At the Paris AI Action Summit, the Global South rises appeared first on Atlantic Council.

]]>
On February 11, heads of state convened in Paris’s Grand Palais for the third AI Safety Summit, or the “AI Action Summit” as France has eponymized it. In contrast to the first summit in London in 2023 and the second in Seoul in 2024, the United States and the United Kingdom did not sign onto the communiqué this year. Instead, this week’s summit saw the breakdown of the agreement from Bletchley Park and the drift into the “third way” approach emphasizing strategic independence. With the United States’ cessation of partnerships in favor of leadership, the charge on affirmative technological sovereignty has gained ground.

What this has resulted in is the beginnings of a drift from traditional power centers toward a multi-stakeholder, collaborative approach on artificial intelligence (AI). With India as co-chair, the summit demonstrated that nations from the Global South are not just participants but architects of the emerging AI order. Newly announced initiatives and commitments to open-source development reflect a growing consensus that AI’s future must be both innovative and rooted in shared prosperity.

Why France sees an ally in India

France’s invitation to India to co-chair the AI Action Summit benchmarked both the evolving scope of the Indo-French Strategic Partnership, as well as increasing alignment at the European Union (EU) level on digital regulation.

The twenty-fifth anniversary of France and India’s bilateral strategic partnership saw a flurry of engagements between leaders from the two countries: Indian Prime Minister Narendra Modi was the guest of honor at Bastille Day in 2023. In return, French President Emmanuel Macron was chief guest at India’s Republic Day in January 2024. Furthermore, in July 2023 the Indian Ministry of Electronics and Information Technology (MeitY) and the French Ministry of Economy, Finance, and Industrial and Digital Sovereignty signed a Memorandum of Understanding on digital cooperation, spanning electronics manufacturing, high-performance computing, AI, and digital public infrastructure, among other areas.

At the EU level, the Digital Markets Act (DMA) and Digital Services Act (DSA) both took full effect in early 2024, and the EU AI Act has begun its gradual entry into force starting in August 2024. The DMA promotes fair competition in the marketplace of digital services, while the DSA is a consumer-centric regulation geared toward stemming illegal or harmful content online. In June of the same year, the French Competition Authority (Autorité  de la Concurrence) issued its opinion on the generative AI sector, noting the high level of vertical integration among major generative AI players. The advantage for these companies, the opinion states, “is reinforced by their integration across the entire value chain and in related markets, which not only generates economies of scale and scope, but also guarantees access to a critical mass of users.” In other words, generative AI is dominated in effect by a small handful of companies, who exert a great amount of control at all levels of the value chainfrom data and chips, to cloud services, developer hubs, and applications.

The Competition Commission of India is pondering its own version of the DMA, while MeitY is considering a Digital India Act that would outline rules on ethical development of AI, building on the AI for All framework set out in India’s National AI Strategy in 2018.

India, which is no stranger to jugaad, or frugal innovation, is also seeking its own DeepSeek moment, especially as it finds itself in the lowest tier of the United States’ Regulatory Framework for the Responsible Diffusion of Advanced Artificial Intelligence Technology. Most recently, the IndiaAI Mission has put out a call for proposals to build indigenous foundational AI models, and its Union Budget allocated record amounts to AI initiatives, including an additional 200 crore rupees (approximately $23 million) to AI Centers of Excellence and 2,000 crore rupees (approximately $230 million) to the IndiaAI Mission. Not surprisingly then, Modi in his opening speech in Paris declared that “governance is not just about managing risks and rivalries, it is also about promoting innovation and deploying it for the global good.”

Finally, India is a major player in global AI policy debates, through the myriad of partnerships that it has fostered over the past decade-plus. This includes the Group of Twenty (G20), the BRICS grouping, the Global Partnership on AI, and I2U2 (India, Israel, the United Arab Emirates, and the United States), to name a few. Given its influence, India is also a frequent fixture at Group of Seven (G7) summits, home of the Hiroshima AI Process.

What the summit achieved

Ahead of this week’s Paris Summit, the first International AI Safety Report, a key deliverable from the Bletchley Park process spearheaded by AI pioneer Yoshua Bengio, outlined a sweeping agenda. The report contained some key implications for international partnerships on AI, a few of which are highlighted below:

  1. There is a research and development (R&D) divide. The report identifies a “global R&D divide,” stating that there is insufficient evidence that infrastructure investment and AI training programs in low- and middle-income countries are effective. This means that there are more factors than the availability of infrastructure and skilled workforces driving the current concentration of R&D in a handful of countries.
  2. Technical risk management approaches must be standardized: The report notes the limitations of existing technical methods of risk identification, mitigation, and monitoring. While the network of AI Safety Institutes is a first step toward standardization of AI risk management, the future of the network is uncertain as the United States’ policy priorities are shifting toward unfettered innovation.
  3. There are trade-offs between competition and AI risks: In the interest of “staying ahead,” governments and companies may deprioritize safety in favor of rapid AI development. International partnerships should mitigate this through cooperative agreements that balance innovation with safety​.
  4. Early warning systems are essential in an unpredictable technological landscape: The report highlights the “evidence dilemma” faced by policymakersthe need for a critical mass of incidents of AI harms before regulations can be implemented. However, due to the widespread and rapid implementation of AI, including for determining access to critical services, the report stresses the need for early warning systems and frameworks, as waiting for stronger evidence weakens governments’ abilities to protect their societies.

The report also notes the rapid advancement of general-purpose AI models, although DeepSeek has since challenged some of the underlying assumptions on the resource intensiveness of building these models. Nonetheless, the point on progress stands. Large language models (LLMs) have gone from generating gibberish that barely approximated human speech, to “PhD-level” intelligence that is outpacing most LLM benchmarking tools.

The Paris Summit did attempt to address one key criticism of the 2023 Bletchley Park Summitthat Global South representation was symbolic at best. Ninety countries were invited to Paris, with nearly one thousand participants from all sectors. The Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet has sixty signatories, including the African Union. Macron also announced the launch of Current AI, a $400 million “public interest AI platform and incubator” backed by public and private entities in France, India, Germany, Chile, Kenya, Morocco, Nigeria, Finland, Slovenia, and Switzerland. This initiative complemented the summit’s emphasis on open-source, “democratized” AI, as Europe, as well as India and other players in the Global South, hinge their hopes of an AI boom on this mode of AI development.

The challenge now is to translate these dialogues—from London to Seoul and now in Paris—into concrete, lasting frameworks that ensure AI serves as a force for global good. The next host for this summit series has yet to be announced. But as Paris has proved, the commitment, resources, and priorities of the host determine the summit’s successes and failures, as well as the level of buy-in from its guests. Countries that choose not to engage do so at the peril of isolating themselves as a global consensus forms on the future of AI.


Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center.

The post At the Paris AI Action Summit, the Global South rises appeared first on Atlantic Council.

]]>
Dispatch from Dubai: Trump is seeking to upend the global chessboard https://www.atlanticcouncil.org/content-series/inflection-points/dispatch-from-dubai-trump-is-seeking-to-upend-the-global-chessboard/ Wed, 12 Feb 2025 12:00:00 +0000 https://www.atlanticcouncil.org/?p=824897 On the sidelines of the World Governments Summit, an important debate is taking place between those who are optimistic about US President Donald Trump’s second term and those who are more wary.

The post Dispatch from Dubai: Trump is seeking to upend the global chessboard appeared first on Atlantic Council.

]]>
DUBAI—A traditional view of geopolitics frames it as a multidimensional chessboard, designed for grand masters and great powers, in a competition that plays out over generations. 

On the sidelines of the World Governments Summit (WGS) here, a senior Middle Eastern official smiles and says that US President Donald Trump seems determined to upend that board early in his second term, then rearrange the pieces for an accelerated competition with rules of his own making.

Between the optimists and the pessimists

For those here who view this shift optimistically, Trump is a leader whose nonideological, transactional pragmatism could result in “great-power deconfliction” at a moment when it is most needed. Trump can, the thinking goes, negotiate better arrangements with Iran, China, and Russia that could defuse interlocking dangers across three regions that have made 2025 one of the most perilous years of our lifetimes. 

For the pessimists, Trump’s failure to consider the second-order impacts from his flurry of early actions and pronouncements is perilous stuff. He’s undervaluing alliances and important partnerships, they say, through ill-considered tariffs and proposals aimed at emptying Gaza, which could send large numbers of Palestinian refugees into neighboring countries. That could produce instability in the region, prompt some friends to hedge their bets, and encourage adversaries to seek advantage.

WGS, an annual event that this year includes some four thousand participants from 150 countries, is a good place to sound out the response to the early days of the Trump administration and its potential geopolitical consequences. (I came to Dubai as part of the Atlantic Council’s knowledge partnership with WGS, where the Council is hosting the Geotechnology and Policy Forum, focused this year on commercial space issues.)

For the optimists, Trump’s determination to end Russian President Vladimir Putin’s war in Ukraine, a “killing field,” as the US president vividly described it at the World Economic Forum in Davos, could help stabilize Europe. If he achieved an end to the war properly, Trump could secure Ukrainian sovereignty and its Western integration, though at the cost of some territory.

His willingness to engage with China to find a better trade and economic outcome is particularly popular in this region, where Saudi Arabia and the United Arab Emirates (UAE) are among China’s top energy suppliers. Middle Eastern officials see as olive branches Trump’s relatively modest 10 percent tariffs on China, instead of the threatened 60 percent, his invitation to Chinese President Xi Jinping to attend his inauguration, and his executive order pausing the TikTok ban.

The officials I have spoken with here see an additional positive signal regarding China from the influence wielded by billionaire businessman Elon Musk, who will be speaking here by video link on Thursday. Musk has deep investments in the country and ties to Chinese leadership.

On Iran, a country that for the first time this month sent four navy vessels to the UAE for a meeting with Emirati naval vessels, officials here note that Trump has said that he would prefer a deal with Iran to “bombing the hell out of it.” Though no officials here would say this on the record, they much prefer the tougher Trump approach to Iran over what they’ve seen in Democratic administrations, but sans military escalation.

Great expectations for AI

The problem here comes with a growing perception that countries in the region, including Saudi Arabia and the UAE, can’t rely on a consistent US commitment. The consensus view is that US focus on the region has been in relative decline since the Obama administration, except for a surge of military and political support for Israel after the October 7, 2023, Hamas terror attacks.

At the same time, the desire for US consistency and commitment grows in rough proportion to each additional investment from the two countries. Saudi Arabia and the UAE have invested tens of billions of dollars—soon to be hundreds of billions of dollars—in US companies, and artificial intelligence (AI) ventures in particular.

The UAE has made clear its intention to be a global AI investment and development leader. Just this week, Bloomberg reported that MGX, the UAE’s tech investment vehicle that has already invested in OpenAI and xAI, is in talks to invest in San Francisco–based Anthropic, which developed the popular chatbot named Claude. What once was regional dependence on US security guarantees has now become a big bet on AI, at the expense of building deeper relationships with China.

A tale of two Trump experiences

The Middle East has seen Trump both let it down and exceed expectations. On the disappointment front, officials harken back to when the Trump administration failed to respond in September 2019 after Iranian-made drones attacked oil-processing facilities in Abqaiq and Khurais in eastern Saudi Arabia.

Yet just a few months later, in January 2020, Trump ordered an audacious drone strike that killed Qasem Soleimani, an Iranian commander who was thought to be the second most powerful individual in the country. When it announced his death, the Pentagon said that Soleimani was responsible for the deaths of hundreds of US and coalition soldiers.

Middle Eastern officials took away from those two experiences that the Trump administration would act decisively to defend what it interpreted as its own interests—but not to respond to an attack on its ally’s oil facilities.

Two years later, in January 2022, US-UAE relations hit a new low, Middle Eastern officials say, when US President Joe Biden at first failed either to call UAE leaders or to offer assistance following an attack on Abu Dhabi by the Houthi movement, an Iranian terrorist proxy group, using missiles and drones that killed three people and injured six others.

What gave the region the most confidence in the Trump administration was its successful efforts, alongside the Emiratis, to bring about the Abraham Accords, which began in September 2020 as bilateral normalization agreements between the UAE and Israel, and Bahrain and Israel.

Officials here describe their experience in concluding these agreements, which were brokered by the Trump administration, as mercifully free of the cumbersome bureaucracy they generally experience in dealing with Washington.

While officials I spoke with here understand how a degree of unpredictability can serve Trump in his relationship with adversaries, they are looking for more steadiness of purpose in security and economic relations. When it comes to AI, the question is just how far the United States will go in providing the Emiratis its most advanced technologies and GPUs, the graphics processing units crucial to AI advancement.

There is a sense here in Dubai that the new rules of the game—Trump’s rules—are clearer in his second term than they were in the first. The optimists here are quickly stepping up to invest in that expectation. The pessimists, however, linger in the background, asking their more optimistic friends whether it’s prudent to bet their future on the hope that Trump’s “America first” policies will also prioritize their interests and not just those of the United States.

The optimists like to quote Trump from his first Davos speech in 2018, when he said, “‘America first’ does not mean America alone.” Seven years later, they are wagering hundreds of billions of dollars that he means it.


Frederick Kempe is president and chief executive officer of the Atlantic Council. You can follow him on X: @FredKempe.

This edition is part of Frederick Kempe’s Inflection Points newsletter, a column of dispatches from a world in transition. To receive this newsletter throughout the week, sign up here.

The post Dispatch from Dubai: Trump is seeking to upend the global chessboard appeared first on Atlantic Council.

]]>
Global China Hub nonresident fellow Hanna Dohmen in South China Morning Post https://www.atlanticcouncil.org/insight-impact/in-the-news/global-china-hub-nonresident-fellow-hanna-dohmen-in-scmp/ Tue, 11 Feb 2025 20:13:27 +0000 https://www.atlanticcouncil.org/?p=824304 On February 7th, 2025, South China Morning Post published an article referencing Global China Hub nonresident fellow Hanna Dohmen’s testimony for the US-China Economic and Security Review Commission on the effectiveness of export controls in slowing China’s AI advances.

The post Global China Hub nonresident fellow Hanna Dohmen in South China Morning Post appeared first on Atlantic Council.

]]>

On February 7th, 2025, South China Morning Post published an article referencing Global China Hub nonresident fellow Hanna Dohmen’s testimony for the US-China Economic and Security Review Commission on the effectiveness of export controls in slowing China’s AI advances.

The post Global China Hub nonresident fellow Hanna Dohmen in South China Morning Post appeared first on Atlantic Council.

]]>
Did DeepSeek just trigger a paradigm shift? https://www.atlanticcouncil.org/blogs/geotech-cues/did-deepseek-just-trigger-a-paradigm-shift/ Tue, 04 Feb 2025 19:05:51 +0000 https://www.atlanticcouncil.org/?p=823172 The release of DeepSeek's AI model may have far-reaching implications for global investment trends, regulatory strategies, and the broader AI industry.

The post Did DeepSeek just trigger a paradigm shift? appeared first on Atlantic Council.

]]>
DeepSeek stunned the artificial intelligence (AI) industry when it released its AI model, called DeepSeek-R1, claiming to have achieved performance rivaling OpenAI’s models while utilizing significantly fewer computational resources.

The bottom line is that DeepSeek has carved an alternative path to high-performance AI by employing a mixture-of-experts (MoE) model and optimizing data processing. Although these techniques are not completely novel, their successful application could have far-reaching implications for global investment trends, regulatory strategies, and the broader AI industry.

That said, questions remain about the true cost and nature of DeepSeek’s hardware and training runs. DeepSeek’s assertions should not be taken at face value, and further research is needed to assess the company’s claims, particularly given the number of examples of Chinese firms secretly working with the government and hiding state subsidies—particularly in industries the Chinese Communist Party considers strategically important.

The traditional AI development model

The prevailing AI paradigm has supported the development of ever-larger models trained on massive datasets using high-performance computing clusters. OpenAI, for example, has pursued increasingly expansive models, necessitating exponential growth in computational power and finances. OpenAI’s dense transformer models, such as GPT-4, are believed to activate all model parameters for every input token throughout training and inference, further compounding the computational burden.

However, this approach has diminishing returns: Increasing the model size does not always yield proportional improvements in performance. Additionally, with this traditional model, there are considerable resource constraints—access to high-end graphics processing units (GPUs) is limited due to supply chain bottlenecks and geopolitical restrictions. There are also high financial barriers. Large-scale training runs using OpenAI’s transformer architecture can require tens of millions of dollars in funding.

Rather than processing every input through a monolithic transformer, MoE routes queries to specialized sub-networks, enhancing efficiency. And by activating fewer parameters per computation, MoE models demand less power. This structure allows for easier expansion without requiring proportional increases in hardware investment.

Several research efforts have previously explored MoE architectures, but DeepSeek successfully deployed MoE in a way that optimized performance while minimizing computational cost.

DeepSeek also leveraged sophisticated techniques that reduced training time and cost. For example, its model was trained in stages, with each stage focused on achieving targeted improvements and the efficient use of resources. Additionally, its model employed self-supervised learning and reinforcement learning, leveraging the Group Relative Policy Optimization (GRPO) framework to rank and adjust responses (minimizing the use of labeled datasets and human feedback). And to compensate for potential data gaps, DeepSeek-V3 was fine-tuned on synthetic datasets to improve domain-specific expertise.

These techniques helped DeepSeek mitigate the inefficiencies associated with training on overly oversized, noisy datasets—a problem that has long plagued AI developers.

Implications

Important questions around the true cost of DeepSeek’s training and access to hardware notwithstanding, DeepSeek-R1 could mark a turning point in AI research. By leveraging MoE architectures and optimized training strategies, DeepSeek may have created a roadmap to achieve high performance without the prohibitive costs and inefficiencies of traditional dense models. Whether new capabilities and improvements can be unlocked by reconfiguring existing dense models like GPT-4 to take advantage of these techniques remains to be seen.

DeepSeek’s apparent success also raises crucial policy questions around the efficacy of export controls aimed at restricting Chinese access to high-performance hardware. If AI development becomes less reliant on cutting-edge GPUs and more focused on efficient architectures, these restrictions could lose their bite. It could also potentially disrupt major planned investments in data centers, many of which have been fueled by the OpenAI model of dense AI development. With DeepSeek’s resource-efficient paradigm as a new benchmark, organizations may need to reassess or restructure some of these investments to fit within that paradigm.

While further research is crucial to assess the significance of DeepSeek’s innovation, its emergence stands as a clear wake-up call to leading AI organizations, policymakers, and investors alike. Attention, perhaps, is not all you need.


Ryan Arant is the director of the N7 Research Institute at the Atlantic Council.

Newton Howard is founder and was the first chairman of C4ADS.

Further Reading

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Did DeepSeek just trigger a paradigm shift? appeared first on Atlantic Council.

]]>
Five AI management strategies—and how they could shape the future https://www.atlanticcouncil.org/blogs/new-atlanticist/five-ai-management-strategies-and-how-they-could-shape-the-future/ Tue, 04 Feb 2025 14:18:37 +0000 https://www.atlanticcouncil.org/?p=823011 To ensure the benefits of artificial intelligence (AI) are realized while minimizing the anticipated evils, it’s important to understand the different ways to approach AI governance.

The post Five AI management strategies—and how they could shape the future appeared first on Atlantic Council.

]]>
It is a truth universally acknowledged, if not often interrogated, that artificial intelligence (AI) is in need of governance. This stems from its perceived risks, such as it becoming superintelligent and taking over, threatening employment, exacerbating biases, increasing tech monopolies, spreading disinformation, violating intellectual property rights, and supporting the schemes of bad actors. Even more risks, both real and imagined, may emerge as AI continues to improve. The latest paradigms of large language models (LLMs) and generative AI, for example, are trumpeted as likely game changers in science, government, crime, entertainment, warfare, industry, and management.

But it’s important for policymakers working on AI governance to keep their eyes on the prize. Even factoring in the hype cycle, the potential for AI to improve the lives of individuals, communities, and societies across the globe cannot be ignored and shouldn’t be traded off against tail risks.

AI governance needs to ensure that such goods are realized while minimizing the anticipated evils. The measures that can achieve this include statutory regulation, institution construction, technical protocols and standards, economic incentives, and codes of practice. It’s a complicated issue.

In 2021, we published our book Four Internets, arguing that the internet’s governance was conceived and driven by a range of moral and political considerations. In particular, four ideal types of governance, reflecting geopolitical and ideological considerations, could be detected, each creating an internet of its own. In aggregate, these four internets comprise the global network used throughout the world. Traffic between these different internets is neither seamless nor impossible, but they are increasingly run on divergent lines.

The four internets are: 

  • The Open Internet, the original conception of a collaborative, permissionless, transparent, and flexible space with anonymity, interoperability, and free flows of information 
  • The Bourgeois Internet, where civility and rights are preserved by regulation 
  • The Paternal Internet, where certain outcomes of internet use (for instance, political speech or pornography) are prohibited
  • The Commercial Internet, regulated as property to produce market solutions for collective action problems

A fifth ideal type, a spoiler model based on the hacker ethic, valorizes the power of coders to challenge authority and undermine security. The spoiler doesn’t create an internet of its own, but is parasitic on the others, undermining their safeguards and subverting their ideals.

Governments and organizations may emphasize one or another of these ideals, but they are not mutually exclusive. They are deployed alongside each other, each privileging competing considerations which are negotiated in political processes.

The world has changed since we published our book in 2021, yet the framework remains relevant—and not only to the internet. AI is dependent on the internet for data to train LLMs, cloud computing power, and user access. It is no coincidence that internet companies are driving the generative AI revolution.

A taxonomy of AI governance

The AI governance regime is evolving, and fortunately it is focused on predictable or evident risks, not speculative existential threats. Governments can legislate, and some have—China is strongly concerned, the European Union (EU) has been somewhere in the middle, the United Kingdom and the United States have legislated minimally, while the United Arab Emirates and Japan are wary of hampering development. New institutions, such as the EU’s AI Office and Britain’s AI Safety Institute, have emerged. Supranational groupings foster cooperation and standards, such as the United Nations AI Advisory Body, or the Group of Seven’s Hiroshima Process, and alongside these has been a tsunami of summitry and experience sharing. The combination of government regulation, global policy frameworks, research and testing infrastructure, and best practices will gradually coalesce into a recognizable AI governance regime with established norms and shared principles.

In this shuffle, we see the repurposing of the ideal types of governance of the Four Internets framework as governance strategies in the AI context, which we term Artificial Intelligence Management Strategies, or AIMS. The five AIMS are:

  • Open AIMS: collaborative and shared innovation for the public good
  • Bourgeois AIMS: achieving the potential of AI only when rights and civility are secured
  • Paternal AIMS: setting limits to the outcomes of AI applications
  • Commercial AIMS: letting markets and investors predict where future profits will emerge
  • Hacking AIMS: unleashing the potential of the software to challenge authority

What can the Five AIMs framework tell us about AI governance? There are three sets of questions for which the identification of AI management strategy is pertinent.

First, AI may not be uniquely responsible for certain risks and harms. For example, it may turbocharge the creation and dissemination of fake news, but disinformation existed before AI and will continue to be created without it. In that case, rules of the road focused on AI, rather than the problem at hand, are unlikely to make it go away. 

Second, what kind of AI causes the problem? AI is evolving, neither fixed nor mature. Focusing too strongly on generative AI as it exists now is likely to miss the moving target.

Third, what exactly is to be regulated? There are several components of the AI ecosystem, including applications, models used, technology used for development, the infrastructure that implements the technology, and training data. Which of these would be appropriate to regulate for the problem, and to what extent can regulations be effectively enforced?

In all these cases, the essentials of AI governance should be properly framed. Paternal AIMS are concerned about specific outcomes of AI’s use, Bourgeois AIMS by the development process, Open AIMS looks to produce social good, Commercial AIMS to create profit, and Hacking AIMS to exercise power against authority. 

The goals of governance are not always precisely specified, and strategies may simply be performative or reactive to perceived risk. But generative AI is immature. Its potential is clear, but so far it has yet to deliver. At worst, poorly targeted regulation or badly crafted principles may hinder development of beneficial and powerful AI-informed methods for addressing genuine problems. Might privacy concerns prevent the use of personal data for social good? Might apprehension about high-risk applications check progress in the medical field? Might worries about hard-to-explain black boxes curb the use of AI in administration?

And ultimately, might excessive regulation in risk-sensitive jurisdictions suppress innovation, to the detriment of technological advancement, or raise barriers to entry so that the technology cannot be distributed equitably beyond the wealthiest parts of the world?

Setting out our AIMS clearly is an essential first step in avoiding these pitfalls.


Kieron O’Hara is an emeritus fellow, University of Southampton. His latest book, Blockchain Democracy: Ideology and the Crisis of Social Trust, will be published in June. He can be reached at kmoh@soton.ac.uk.

Wendy Hall is Regius Professor of Computer Science, University of Southampton. She was a member of the UN High-Level Advisory Body on AI and is a nonresident senior fellow at the Atlantic Council’s GeoTech Center.

This article is part of the Atlantic Council GeoTech Center’s AI Connect II project.

The post Five AI management strategies—and how they could shape the future appeared first on Atlantic Council.

]]>
DeepSeek poses a Manhattan Project–sized challenge for Trump https://www.atlanticcouncil.org/content-series/inflection-points/deepseek-poses-a-manhattan-project-sized-challenge-for-trump/ Thu, 30 Jan 2025 12:01:00 +0000 https://www.atlanticcouncil.org/?p=822035 Reports of an artificial intelligence breakthrough by a Chinese company should be a wake-up call for the United States.

The post DeepSeek poses a Manhattan Project–sized challenge for Trump appeared first on Atlantic Council.

]]>
US presidents rarely get to choose the challenges that define their place in history. So it will also be for President Donald Trump, for all his efforts to set the agenda during his first two weeks in office.

It’s a fair bet that a century from now, Trump’s early emphasis on immigrant deportations, tariffs, Greenland, and Panama won’t be as long-remembered as whether he undermines, sustains, or increases the United States’ global standing in relation to China and its autocratic allies.

Little will be more significant in that effort than whether the United States can rule the commanding heights of technological change. It’s that question that should weigh most heavily on Trump as he considers how to manage the artificial intelligence (AI) race with China following this week’s news that the Chinese company DeepSeek has achieved AI results as good or better than some American models at lower cost and apparently without the most advanced chips.

A couple of days before Trump’s inauguration, outgoing National Security Advisor Jake Sullivan talked with Jim VandeHei and Mike Allen of Axios about the catastrophic risk in losing this contest. Their interview didn’t get the attention it deserved, so read about it here now if you haven’t already.

What VandeHei and Allen took away from the conversation with Sullivan was that “[s]taying ahead in the AI arms race makes the Manhattan Project during World War II seem tiny, and conventional national security debates small. It’s potentially existential with implications for every nation and company.”

Sullivan would be the first to concede the flaws in the comparison between the AI race and the race to a nuclear weapon. The Manhattan Project involved an array of technical problems that were frontier physics with a clear government coordinator, whereas with AI, the challenges are largely being solved in universities or commercial research labs, without the power of the US government coordinating.

That said, the point of the comparison is that the outcome of the race could have generational consequences of similar magnitude around what country or set of countries sets the rules for the future. 

Distilling Sullivan’s comments, the Axios authors add: “America must quickly perfect a technology that many believe will be smarter and more capable than humans. We need to do this without decimating U.S. jobs, and inadvertently unleashing something we didn’t anticipate or prepare for. We need both to beat China on the technology and in shaping and setting global usage and monitoring of it, so bad actors don’t use it catastrophically. Oh, and it can only be done with unprecedented government-private sector collaboration—and probably difficult, but vital, cooperation with China.”

There are some Chinese advantages, underscored by DeepSeek, that the United States will find difficult to match. As Yuan Gao and Vlad Savov of Bloomberg explain, “The country has a deep pool of highly skilled software engineers, a vast domestic market and government support in the form of subsidies as well as funding for research institutes. It also has a pressing necessity to find a way to do more with fewer resources.” They could have added that China’s principal advantage is massive, unfettered data access without any of the complications of privacy concerns.

Most of all, the Chinese government and its private companies work hand in glove. This is perhaps the biggest challenge for the United States, and it is also the biggest difference between now and during the Manhattan Project.

Sullivan told Axios that unlike previous tech breakthroughs where the United States found a way to lead—atomic weapons, space travel, and the internet—AI development “sits in the hands of private companies with the power of nation-states,” VandeHei and Allen write.

What does this difference mean? To begin with, the US government will have to work more effectively with private tech companies than ever before if the country is to sustain its early AI lead and shape global regulations around it. Trump will also need his democratic allies on board. Unfortunately, many of these allies are busy at the moment hatching approaches to counter Trump’s tariff threats and, in Europe, weighing how to respond to his aspirations to gain control of Greenland, an autonomous territory of the Kingdom of Denmark.

It doesn’t take cutting-edge AI to decipher that the new administration already has daunting and far-reaching choices before it.


Frederick Kempe is president and chief executive officer of the Atlantic Council. You can follow him on X: @FredKempe.

This edition is part of Frederick Kempe’s Inflection Points newsletter, a column of dispatches from a world in transition. To receive this newsletter throughout the week, sign up here.

The post DeepSeek poses a Manhattan Project–sized challenge for Trump appeared first on Atlantic Council.

]]>
The West must study the success of Ukraine’s Special Operations Forces https://www.atlanticcouncil.org/blogs/ukrainealert/the-west-must-study-the-success-of-ukraines-special-operations-forces/ Thu, 30 Jan 2025 01:32:53 +0000 https://www.atlanticcouncil.org/?p=822020 The success of Ukraine’s Special Operations Forces in the war against Russia can provide a range of valuable lessons for Kyiv's Western partners that will shape military doctrines for years to come, writes Doug Livermore.

The post The West must study the success of Ukraine’s Special Operations Forces appeared first on Atlantic Council.

]]>
Since the onset of Russia’s full-scale invasion in 2022, much has been written about the extensive training provided to the Ukrainian military by the country’s Western partners. However, the West also has much to learn from Ukraine’s unique military experience. In particular, the successes of Ukraine’s Special Operations Forces provide a range of valuable lessons for their Western counterparts that will shape military doctrines for years to come.

The effectiveness of Ukraine’s Special Operations Forces can be largely attributed to their exceptional adaptability in rapidly changing battlefield conditions. When Russia launched its full-scale invasion in February 2022, Ukrainian SOF units quickly adjusted to meet the immediate challenges of high-intensity conflict against a far larger and better armed enemy.

This adaptability has manifested in several crucial ways. The rapid reconfiguration of small unit tactics to counter Russian mechanized forces has been particularly noteworthy, as has the development of innovative solutions to overcome numerical disadvantages. Ukrainian SOF units have consistently shown their ability to adopt new technologies and tactics based on battlefield feedback. Perhaps most importantly, they have implemented flexible command structures that enable decentralized decision-making, allowing for rapid responses to emerging threats and opportunities.

Ukraine’s ability to adapt has been further demonstrated through the innovative use of civilian infrastructure and technologies. Ukrainian SOF units have effectively incorporated commercial drones, civilian communications networks, and other non-military technologies, showing remarkable creativity in overcoming resource constraints.

One of the most significant lessons from the conflict has been the effective integration of SOF units with conventional military forces engaged in large-scale combat operations. Ukrainian SOF units also played a vital role in preparing the battlefield before and during the initial phases of the invasion. They established networks of resistance, gathered intelligence, and identified key targets that would later prove crucial for conventional forces.

Stay updated

As the world watches the Russian invasion of Ukraine unfold, UkraineAlert delivers the best Atlantic Council expert insight and analysis on Ukraine twice a week directly to your inbox.

Ukraine’s achievements since 2022 have owed much to years of solid preparations. Following Russia’s occupation of Crimea in 2014, Ukrainian Special Operations Forces underwent significant transformation with assistance from NATO countries, particularly the United States, United Kingdom, and Canada. Between 2015 and 2021, Ukraine also implemented major structural reforms to align with NATO standards, including the establishment of dedicated SOF training centers.

These steps helped lay the foundations for a sophisticated network of resistance capabilities across potential invasion routes by early 2022. Ukrainian SOF units mapped key infrastructure, identified potential targets, and established relationships with local civilian networks, while developing protocols for rapid information sharing between SOF units, conventional forces, and civilian resistance elements. These preparations proved vital, enabling Ukrainian forces to target Russian supply lines, command nodes, and communications systems using real-time intelligence.

Throughout the invasion, coordination between Ukrainian SOF units and conventional forces has enabled effective combined arms operations. SOF units frequently act as forward observers, providing targeting data to artillery units and conducting battle damage assessments. The ability to rapidly share intelligence has been particularly important in urban environments, where the complexity of the battlefield requires close cooperation between different military elements.

Russia’s invasion has reinforced the importance of unconventional warfare in modern conflicts. Ukrainian SOF units have successfully employed various unconventional warfare techniques that have had strategic impacts far beyond their tactical execution.

Ukraine’s implementation of guerrilla tactics and sabotage alongside partisans has been highly effective, with numerous successful operations conducted behind enemy lines. This has included the disruption of Russian supply lines, targeting of key military infrastructure and command centers, and the execution of precision strikes on high-value targets.

The psychological aspect of warfare has proven equally important, with Ukrainian SOF units making significant contributions to information warfare campaigns that have influenced both domestic and international audiences. They have conducted deception operations that have complicated Russian planning and operations, while also executing morale operations targeting both enemy forces and occupied populations.

The successful integration of modern technology has been a key characteristic of Ukrainian SOF operations. Despite facing a far wealthier and numerically superior adversary, Ukrainian SOF units have leveraged various technological capabilities to maintain operational effectiveness. They have utilized commercial technologies for reconnaissance and surveillance, integrated drone operations into tactical planning and execution, and leveraged artificial intelligence and big data analytics for targeting and planning.

Ukraine’s SOF operations provide several critical lessons for the country’s Western partners. In terms of doctrine development, it is clear that military organizations must emphasize flexibility and adaptability in force structure and training, while integrating SOF capabilities more deeply in support of conventional forces.

The importance of technological integration and adaptation cannot be overstated. Future military forces must be prepared to operate in environments where commercial technology plays an increasingly important role, and where the ability to utilize these technologies can provide crucial advantages. In terms of equipment, Western planners should focus on communications jamming and interception, improved surveillance and reconnaissance capabilities, and integrating AI tools to aid in intelligence collection and analysis.

The role of Ukrainian SOF operations in the current war provides valuable insights for military forces worldwide. Their impact demonstrates the critical importance of adaptability and the effective use of technology in modern warfare. These lessons are particularly relevant as military organizations prepare for future high-intensity conflicts in increasingly complex operational environments.

Doug Livermore is national vice president for the Special Operations Association of America and deputy commander for Special Operations Detachment–Joint Special Operations Command in the North Carolina Army National Guard. The views expressed are the author’s and do not represent official US Government, Department of Defense, or Department of the Army positions.

Further reading

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values, and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia, and Central Asia in the East.

Follow us on social media
and support our work

The post The West must study the success of Ukraine’s Special Operations Forces appeared first on Atlantic Council.

]]>
Is DeepSeek a proof of concept? https://www.atlanticcouncil.org/blogs/econographics/sinographs/is-deepseek-a-proof-of-concept/ Wed, 29 Jan 2025 17:13:06 +0000 https://www.atlanticcouncil.org/?p=821800 Understanding how Deepseek emerged from China’s innovation landscape can better equip the US to confront China’s ambitions for global technology leadership.

The post Is DeepSeek a proof of concept? appeared first on Atlantic Council.

]]>
On Monday, the Chinese artificial intelligence (AI) application, DeepSeek, surpassed ChatGPT in downloads and was ranked number one in iPhone app stores in Australia, Canada, China, Singapore, the United States, and the United Kingdom. It dealt a heavy blow to the stocks of US chip makers and other companies related to AI development. DeepSeek claims to have achieved a chatbot model that rivals AI leaders, such as OpenAI and Meta, with a fraction of the financing and without full access to advanced semiconductor chips from the United States.

DeepSeek represents China’s efforts to build up domestic scientific and technological capabilities and to innovate beyond that. Its advanced stage further exacerbates anxieties that China can outpace the United States in cutting edge technologies and surprised many analysts who believed China was far behind the United States on AI. The export controls on advanced semiconductor chips to China were meant to slow down China’s ability to indigenize the production of advanced technologies, and DeepSeek raises the question of whether this is enough. The US-China tech competition lies at the intersection of markets and national security, and understanding how DeepSeek emerged from China’s high-tech innovation landscape can better equip US policymakers to confront China’s ambitions for global technology leadership.

Homegrown: China’s innovation ecosystem

In the past decade, the Chinese Communist Party (CCP) has implemented a series of action plans and policies to foster domestic capabilities, reduce dependency on foreign technology, and promote Chinese technology abroad through investment and the setting of international standards. In 2023, President Xi Jinping summarized the culmination of these economic policies in a call for “new quality productive forces.” In 2024, the Chinese Ministry of Industry and Information Technology issued a list in of “future industries” to be targeted. These slogans speak to the mission shift from building up domestic capacity and resilience to accelerating innovation.

Since the implementation of the industrial action plan “Made in China 2025” in 2015, China has been steadily ramping up its expenditure in research and development (R&D). From 2016 to 2024, R&D expenditure expanded by 126 percent. According to statistics released last week by the National Bureau of Statistics, China’s R&D expenditure in 2024 reached $496 billion. However, China still lags other countries in terms of R&D intensity—the amount of R&D expenditure as a percentage of gross domestic product (GDP).

Compared to other countries in this chart, R&D expenditure in China remains largely state-led. Rhodium Group estimated that around 60 percent of R&D spending in China in 2020 came from government grants, government off-budget financing, or R&D tax incentives. For reference, in the United States, the federal government only funded 18 percent of R&D in 2022. It’s a common perception that China’s style of government-led and regulated innovation ecosystem is incapable of competing with a technology industry led by the private sector. However, companies like DeepSeek, Huawei, or BYD appear to be challenging this idea.

China has often been accused of directly copying US technology, but DeepSeek may be exempt from this trend. While DeepSeek was trained on NVIDIA H800 chips, the app might be running inference on new Chinese Ascend 910C chips made by Huawei. Additionally, DeepSeek primarily employs researchers and developers from top Chinese universities. This is a change from historical patterns in China’s R&D industry, which depended upon Chinese scientists who received education and training abroad, mostly in the United States. DeepSeek also differs from Huawei and BYD in that it has not received extensive, direct benefits from the government. Instead, it seems to have benefited from the overall cultivation of an innovation ecosystem and a national support system for advanced technologies.

China’s science and technology developments are largely state-funded, which reflects how high-tech innovation is at the core of China’s national security, economic security, and long-term global ambitions. DeepSeek was able to capitalize on the increased flow of funding for AI developers, the efforts over the years to build up Chinese university STEM programs, and the speed of commercialization of new technologies.

While some AI leaders have doubted the veracity of the funding or the number of NVIDIA chips used, DeepSeek has generated shockwaves in the stock market that point to larger contentions in US-China tech competition. Chinese firms are already competing with the United States in other technologies. In 2015, the government named electric vehicles, 5G, and AI as targeted technologies for development, hoping that Chinese firms would be able to leapfrog to the front of these fields. Now, in 2025, whether it’s EVs or 5G, competition with China is the reality.

The United States, China, and global tech competition

Some AI watchers have referred to DeepSeek as a “Sputnik” moment, although it’s too early to tell if DeepSeek is a genuine gamechanger in the AI industry or if China can emerge as a real innovation leader. As far as chatbot apps, DeepSeek seems able to keep up with OpenAI’s ChatGPT at a fraction of the cost. But DeepSeek’s low budget could hamper its ability to scale up or pursue the type of highly advanced AI software that US start-ups are working on. Perhaps more importantly, such as when the Soviet Union sent a satellite into space before NASA, the US reaction reflects larger concerns surrounding China’s role in the global order and its growing influence.

Unlike the race for space, the race for cyberspace is going to play out in the markets, and it’s important for US policymakers to better contextualize China’s innovation ecosystem within the CCP’s ambitions and strategy for global tech leadership. The CCP strives for Chinese firms to be at the forefront of the technological innovations that will drive future productivity—green technology, 5G, AI. And Chinese firms are already promoting their technologies through the Belt and Road Initiative and investments in markets that are often overlooked by private Western investors.

While the United States and the European Union have placed trade barriers and protections against Chinese EVs and telecommunications companies, DeepSeek may have proved that it isn’t enough to simply reduce China’s access to materials or markets. It is uncertain to what extent DeepSeek is going to be able to maintain this primacy within the AI industry, which is evolving rapidly. However, it should cause the United States to pay closer attention to how China’s science and technology policies are generating results, which a decade ago would have seemed unachievable. DeepSeek indicates that China’s science and technology policies may be working better than we have given them credit for. For US policymakers, it should be a wakeup call that there has to be a better understanding of the changes in China’s innovation environment and how this fuels their national strategies.


Jessie Yin is an assistant director with the Atlantic Council GeoEconomics Center.

At the intersection of economics, finance, and foreign policy, the GeoEconomics Center is a translation hub with the goal of helping shape a better global economic future.

The post Is DeepSeek a proof of concept? appeared first on Atlantic Council.

]]>
What DeepSeek’s breakthrough says (and doesn’t say) about the ‘AI race’ with China https://www.atlanticcouncil.org/blogs/new-atlanticist/what-deepseeks-breakthrough-says-and-doesnt-say-about-the-ai-race-with-china/ Tue, 28 Jan 2025 16:21:54 +0000 https://www.atlanticcouncil.org/?p=821383 DeepSeek’s achievement has not exactly undermined the United States’ export control strategy, but it does bring up important questions about the broader US strategy on AI.

The post What DeepSeek’s breakthrough says (and doesn’t say) about the ‘AI race’ with China appeared first on Atlantic Council.

]]>
This week, tech and foreign policy spaces are atwitter with the news that a China-based open-source reasoning large language model (LLM), DeepSeek-R1, was found to match the performance of OpenAI’s o1 model across a number of core tasks. It has reportedly done so for a fraction of the cost, and you can access it for free. 

The most impressive thing about DeepSeek-R1’s performance, several artificial intelligence (AI) researchers have pointed out, is that it purportedly did not achieve its results through access to massive amounts of computing power (i.e., compute) fueled by high-performing H100 chips, which are prohibited for use by Chinese companies under US export controls. Instead, it may have conducted the bulk of the training for this new model by optimizing inter-chip memory bandwidth of the less sophisticated H800s (allowing these less sophisticated chips to “share” the size of a very large model). This meant that training the model cost far less in comparison to similarly performing models trained on more expensive, higher-end chips. DeepSeek’s breakthrough has led some to question whether the US government’s export controls on China have failed. 

However, such a conclusion is premature. Other recent “breakthroughs” in Chinese chip technologies were the result not of indigenous innovation but developments that were already underway before export controls seriously impacted the supply of chips and semiconductor equipment available to Chinese firms. In late 2023, for example, US foreign policy observers experienced a shock when Huawei announced that it had produced a smartphone with a seven nanometer chip, despite export restrictions that should have made it impossible to do so. But rather than showcasing China’s ability to either innovate such capabilities domestically or procure equipment illegally, the breakthrough was more a result of Chinese firms stockpiling the necessary lithography machines from Dutch company ASML before export restrictions came into force. The influx of machines bought China time before the impact of export controls would be seen in the domestic market. 

Much of the conversation in US policymaking circles focuses on the need to limit China’s capabilities.

There is evidence to suggest that DeepSeek is benefiting from a similar dynamic. China AI researchers have pointed out that there are still data centers operating in China running on tens of thousands of pre-restriction chips. They also note that the real impact of the restrictions on China’s ability to develop frontier models will show up in a couple of years, when it comes time for upgrading. Or, it may show up after Nvidia’s next-generation Blackwell architecture has been more fully integrated into the US AI ecosystem.

While DeepSeek’s achievement has not exactly undermined the United States’ export control strategy, it does bring up important questions about the broader US strategy on AI. Much of the conversation in US policymaking circles focuses on the need to limit China’s capabilities—specifically by restricting its ability to access compute. While not wrong on its face, this framing around compute and access to it takes on the veneer of being a “silver bullet” approach to win the “AI race.” This kind of framing creates narrative leeway for bad faith arguments that regulating the industry undermines national security—including disingenuous arguments that governing AI at home will hobble the ability of the United States to outcompete China. 

Such arguments emphasize the need for the United States to outpace China in scaling up the compute capabilities necessary to develop artificial general intelligence (AGI) at all costs, before China “catches up.” This has led some AI companies to convincingly argue, for example, that the negative externalities of speed-building massive data centers at scale are worth the longer-term benefit of developing AGI. Such an argument has significant business upside for AI companies, as they amass greater numbers of chips to gain a competitive advantage. What the DeepSeek example illustrates is that this overwhelming focus on national security—and on compute—limits the space for a real discussion on the tradeoffs of certain governance strategies and the impacts these have in spaces beyond national security.

To plug this gap, the United States needs a better articulation at the policy level of what good governance looks like. This should include a proactive vision for how AI is designed, funded, and governed at home, alongside more government transparency around the national security risks of adversary access to certain technologies. It also requires the US government to be clear about what capabilities, technologies, and applications related to AI it is specifically aiming to regulate. This would help to elevate conversations on risk and enable communities of practice to come together to establish adaptive governance strategies across technological, economic, political, and social domains—as well as for national security.

How to best develop, deploy, and govern AI-enabled technologies is not a question that can be answered with “silver bullet” solutions. Rather, it is a process, one that requires consistent, thoughtful engagement from practitioners and experts across a wide variety of issue sets and backgrounds. No one strategy will win the “AI race” with China—and as new capabilities emerge, the United States needs a more adaptive framework to meet the challenges these technologies and applications will bring. 


Kenton Thibaut is a senior resident China fellow at the Atlantic Council’s Digital Forensic Research Lab (DFRLab), where she leads China programming for the Democracy + Tech Initiative.

The post What DeepSeek’s breakthrough says (and doesn’t say) about the ‘AI race’ with China appeared first on Atlantic Council.

]]>
DOGE should use AI to fix environmental review https://www.atlanticcouncil.org/blogs/energysource/doge-should-use-ai-to-fix-environmental-review/ Mon, 27 Jan 2025 13:57:26 +0000 https://www.atlanticcouncil.org/?p=820937 The National Environmental Policy Act's (NEPA) often lengthy process can delay crucial development projects and job creation. To address this, Trump’s newly established Department of Government Efficiency should leverage AI technologies to accelerate environmental reviews, modernizing the administration of NEPA.

The post DOGE should use AI to fix environmental review appeared first on Atlantic Council.

]]>
The recently conceived Department of Government Efficiency (DOGE), headed by Elon Musk, is the big, new Trump administration idea on the block for cutting costs and making government work better. It should tackle a problem of government inefficiency that is holding up investment and job creation associated with development projects of many kinds, including siting clean energy and connecting it to a grid.

DOGE should focus its tech talent on making the National Environmental Policy Act (NEPA) work the way it was intended: to make federal decision-making sensitive to environmental impacts but not create the byzantine paperwork exercise that haunts many projects. To do that, DOGE should leverage artificial intelligence (AI) technologies to streamline bureaucratic processes.

STAY CONNECTED

Sign up for PowerPlay, the Atlantic Council’s bimonthly newsletter keeping you up to date on all facets of the energy transition.

NEPA doesn’t need to be so cumbersome

On January 1, 1970, then-President Richard Nixon signed NEPA, and it quickly became a cornerstone for environmental protection in the United States. NEPA doesn’t establish limits for harm—it is a “process” statute requiring federal agencies to identify planned actions that may significantly affect the environment and to describe those impacts in detail, for both the project as proposed and for a range of alternatives. Federal agencies must then state which action they will take, and which measures they’ll implement to mitigate the impacts.

But NEPA has long been a cumbersome process. The law and its amendments call for brevity in words and time, but the collective parts of an environmental impact statement (EIS) can run hundreds or even thousands of pages long and take more than two years to prepare—often by outside firms. Neither the environment nor the participants in the process benefit from that excess—decision-makers rarely even read the EIS.

It’s time for a dramatic change in the way that federal environmental review is carried out. The emergence of AI creates a tool to make that change a reality.

AI can streamline government processes

The Bureau of Ocean Energy Management (BOEM), where I have served, launched an effort in this direction in February 2020 during the last year of the first Trump administration.

BOEM’s initial idea was simple: EISs and other environmental documents were being created anew by the agency for each proposed action. Some parts of those documents were unique to the action involved, but much of the information, such as a required description of the affected environment, was largely identical for activities in the same geographical area. BOEM realized that an information base kept updated by agency scientists would save staff from unnecessary, repetitive review and speed things up.

BOEM named its initiative Status of the Outer Continental Shelf (SOCS) reflecting the agency’s jurisdiction. It began by compiling environmental documents prepared and vetted by the agency over the years and initiating a study to develop a model for decision-making using that information base. The model would not take humans out of decisions but instead provide them with objective indices of impacts on the environment based on defined categories of concern, such as the presence of endangered species and importance to tribal culture.

SOCS is underway now in BOEM, and its potential is made dramatically more significant with the emergence of generative AI.

Here is the concept: couple the SOCS information and model with generative AI and then fine-tune a custom AI tool for BOEM that can prepare EISs and other environmental documents. On top of that, use AI to facilitate public engagement faster and better than is currently done by providing a way for anyone to ask questions directly to the AI tool about projects and NEPA documents.

This concept can work for any federal agency making decisions with environmental impacts, not just BOEM.

How AI can fix NEPA

That said, one approach for developing a new AI-based tool could follow these steps:

  • Upload contextual documents, including NEPA, other environmental laws and regulations, plus guidance documents and judicial decisions—the more, the better. Include EISs that are exemplary documents so the AI tool can learn what an EIS should look like—that is, it should communicate key issues concisely, clearly, with supporting graphics, focus candidly on important issues, and specify clear and enforceable mitigation measures (as conditions of approval).
  • Have the AI produce an EIS template drawing from these uploads and integrating a decision-making model if an effective one becomes available—something DOGE should include in its NEPA-related efforts. A good model should transparently address the full range of impacts of greatest concern. It also needs to be user friendly for agency staff who are not modelers themselves.
  • Task the AI tool to prompt the human team with requests for information specific to the EIS-proposed action.
  • Fine-tune the AI tool through iterative refinement. This would include human experts systematically reviewing, correcting, and updating AI-generated output, since generative AI models can “hallucinate” facts that require fixing. The review should also look hard for and correct model bias—such as the Google Gemini AI model which, when asked for images of the Founding Fathers, only came up with people of color.
  • Have the human experts closely review completed draft EISs for accuracy and quality. This task should become easier over time as reviewers gain experience.
  • AI tools can also enormously improve public engagement with EISs. Google’s NotebookLM is one option currently available for free. Users can upload an EIS (or any other document) and ask questions about it. The answers are reliable and the tool can even generate an engaging podcast.
  • Eventually, it may become possible simply to task an AI agent to produce a draft EIS, making sure it can access information specific to the project concerned.

NEPA is fixable

So, why aren’t EISs being prepared this way now? It’s partly because generative AI is still novel and government is slow to change. NEPA itself is not an obstacle. The statute and its regulations provide flexibility for how an EIS should be drafted.

To be sure, agency lawyers will wring their hands about what courts may do with AI, but that’s not a good reason to hold back. With rescission of the Chevron doctrine by the Supreme Court, which eliminated deference to agencies by judges, predicting judicial outcomes is impossible, and NEPA can be amended if warranted.

Government information technology (IT) policies are perhaps an even greater inhibition for AI innovation than nervous lawyers. IT requirements, some of which are legislated, are necessary for system security. But the process of change allowed under them can be suffocating and lead agency program staff to avoid innovation.

These organizational inhibitions make improving environmental review under NEPA a strong candidate for prioritization for the Department of Government Efficiency envisioned under the second Trump administration.

DOGE, which aims to bring in technology-focused staff from outside of government, working with the White House Council on Environmental Quality on the inside, could deliver a needed shake-up. It could bring the NEPA process into the 21st century. That would mean a more efficient path to renewable energy growth and the quest for net-zero carbon emissions, while creating a better understanding of the adverse environmental impacts of projects.

Go for this one, DOGE; it’s waiting for you in plain sight.

William Yancey Brown is a nonresident senior fellow at the Atlantic Council Global Energy Center. From 2013–2024, Brown was the chief environmental officer of the Bureau of Ocean Energy Management in the US Department of the Interior, where he oversaw the implementation of NEPA.

MEET THE AUTHOR

RELATED CONTENT

OUR WORK

The Global Energy Center develops and promotes pragmatic and nonpartisan policy solutions designed to advance global energy security, enhance economic opportunity, and accelerate pathways to net-zero emissions.

The post DOGE should use AI to fix environmental review appeared first on Atlantic Council.

]]>
Trump should keep, not cut, Biden’s last-minute offer of federal land for AI data centers https://www.atlanticcouncil.org/blogs/new-atlanticist/trump-should-keep-not-cut-bidens-last-minute-offer-of-federal-land-for-ai-data-centers/ Thu, 23 Jan 2025 16:05:07 +0000 https://www.atlanticcouncil.org/?p=819506 In an executive order in the final days of his administration, US President Joe Biden put forward a promising idea to expand US data center capacity.

The post Trump should keep, not cut, Biden’s last-minute offer of federal land for AI data centers appeared first on Atlantic Council.

]]>
As Washington adjusts to a new administration, tech policy wonks are watching for indications of how the White House seeks to shape critical and emerging technologies. So far, the Trump administration has already repealed nearly eighty executive orders from former US President Joe Biden, including his 2023 order on safe and trustworthy artificial intelligence (AI). But even as the new president resets US AI policy, there is one executive order from Biden’s final days in office that the Trump administration should pause to consider thoroughly before cutting.

On January 14, the Biden administration published an important step forward for US technology policy with its Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure. The order seeks to dramatically expand US data center capacity and, as the title says, advance US leadership in AI. However, its more significant effects would build out the energy capacity needed to power the new centers and shape the development and applications of the frontier AI models run within them. These potentially transformative outcomes, which would help ensure that the United States remains at the forefront of AI development, justify the order’s survival under President Donald Trump.

What is in the order?

Biden’s order establishes a clear, if ambitious timeline for the solicitation, construction, and operation of new high-capacity data centers on federal lands. The timeline states that:

  • by February 28, 2025, the US Department of Energy and the US Department of Defense must each identify three sites on land they control suitable for leasing to private entities for the construction and operation of frontier AI data centers;
  • by March 31, 2025, both departments must administer public solicitations for nonfederal proposals to lease and operate data centers on the identified sites;
  • by December 31, 2025, the departments should fully permit and approve work to construct a frontier AI data center on each identified site;
  • by January 1, 2026, the selected proposals must begin construction of the approved data centers; and
  • by December 31, 2027, each data center must reach full operational capacity.

Proposals for these sites must meet a host of requirements, from cybersecurity stipulations to necessary labor standards, and federal agencies face a complicated task of designing and carrying out the solicitation and approval process. Still, the order boldly moves US AI infrastructure in the right direction, as the proliferation of AI models could triple demand for data center capacity by 2030.

How will they be powered?

While data centers have historically been integrated into existing energy grids, the electricity requirements for AI data centers far exceed those built in the 2010s, with some estimates expecting a 160 percent increase in data center electricity demand by 2030. Taxpayers and private electricity consumers may end up footing much of this bill if new facilities crop up without increased energy capacity, as companies leverage tax incentives for their projects and households compete with data centers for increasingly limited supply.

To mitigate these negative effects on electricity networks and private consumers, the Biden administration included provisions for additional energy generation to power the new data centers, with a particular emphasis on renewables and clean energy. The order includes requirements for the Bureau of Land Management (BLM), in partnership with other federal agencies, to identify additional sites well poised for leasing to private entities for the construction and operation of new renewable energy production.

With an eye to the near and distant future, the order specifically requires the BLM and Department of Energy to designate at least five regions to be managed as Priority Geothermal Zones (PGZs). These sites will be selected based on their potential for geothermal power general and thermal storage, and the Department of the Interior is charged with streamlining and advancing direct-use leasing of geothermal projects on BLM land. Such forward thinking could kickstart the US geothermal economy, which according to one estimate could provide 8.5 percent of all US electricity generation by 2050.

Who can access data at the new facilities?

Access to prime real estate and priority energy sources offer enticing incentives for AI companies, but the barriers to entry in building and operating such massive data centers threaten to benefit only the most well-resourced private entities, cementing their leadership in the market. To address such concerns, the order also includes several requirements that acknowledge the importance of small business innovation in the industry.

First, at least one of the selected data center projects must be developed and submitted by a consortium of two or more small- or medium-sized organizations. Such a project could be daunting to newer market participants, but the departments of Energy and Defense are instructed to assist smaller entities in their proposals. Moreover, collective ownership could cultivate a more vibrant and competitive AI ecosystem less dominated by the current leading players.

Furthermore, even sites built and run by industry leaders must take steps to preserve a more open and collaborative environment. The new executive order directs that any computational resources unused for frontier AI training should be made available for commercial use by startups and small firms to improve interoperability and data accessibility.

Finally, the opportunity to build capacity does come with a requirement for selected entities to partner with government AI safety experts and the national security enterprise. New data center owners and operators are obligated to work with the Department of Commerce, in close coordination with the AI Safety Institute, on collaborative research and evaluations of frontier models. They must also advance the use of AI for national security, military preparedness, and intelligence operations. Such steps align with the previous administration’s priorities on AI safety while expanding capacity for US leadership in frontier AI models.

What is the future of the executive order?

While the order survived day one of the new administration, that does not guarantee its existence in the months and years ahead. However, several elements of this order illuminate a clear through-line with priorities of the first Trump administration while appealing to some potential new objectives. 

In February 2019, the Trump team released the first Executive Order on Maintaining American Leadership in Artificial Intelligence, with remarkably similar priorities to the recent order. In addition to calling for public-private partnerships and lowering barriers to entry, the Trump administration called for increased access to federal data and computational power while protecting national security, civil liberties, privacy, and US values. The 2019 order was certainly less focused on clean energy sources, but many intended outcomes echo those of today.

Interestingly, the 2025 order’s specific callout for geothermal energy sources may also be an attempt at ensuring its survival. Trump’s nominee for secretary of energy, Chris Wright, has roots in the fossil fuel industry but has explicitly identified geothermal energy as a promising resource worth exploring. The provision for PGZs may be the lifeline this order needs to survive the transition.

The new administration may certainly amend the order to reflect a reduced focus on AI safety and certain clean energy technologies. However, Biden’s actions to advance US AI leadership while establishing permissive conditions for a transformed energy sector justify its continued existence and impact under Trump.


Will LaRivee is a resident fellow at the Atlantic Council GeoTech Center.

The views expressed in this article represent the personal views of the author and are not necessarily the views of the Department of Defense, the Department of the Air Force, or any other US government agency.

The post Trump should keep, not cut, Biden’s last-minute offer of federal land for AI data centers appeared first on Atlantic Council.

]]>
Aging populations are being ignored in global tech agreements. That comes at a cost. https://www.atlanticcouncil.org/blogs/geotech-cues/aging-populations-are-being-ignored-in-global-tech-agreements-that-comes-at-a-cost/ Tue, 21 Jan 2025 13:14:30 +0000 https://www.atlanticcouncil.org/?p=817939 The omission of aging populations in agreements at the UN and elsewhere is a missed opportunity to harness societal and economic gains.

The post Aging populations are being ignored in global tech agreements. That comes at a cost. appeared first on Atlantic Council.

]]>
Widespread dissemination and application of emerging technologies, such as generative artificial intelligence (AI), will bring revolutionary changes to the technological and societal landscapes. The expected changes have spurred global consensus across sectors, including government, civil society, and the private sector, on the need to rapidly ensure that safeguards are in place around these technologies.

In recent years, this rare unity in the global diplomatic and governance architecture has resulted in a number of global agreements. Those include the United Nations Educational, Scientific and Cultural Organization (UNESCO) Recommendation on the Ethics of Artificial Intelligence released in 2021, the March 2024 United Nations resolution on “Seizing the opportunities of safe, secure, and trustworthy artificial intelligence systems for sustainable development,” and most recently, the September 2024 outcome document on the “Pact for the Future, Global Digital Compact and Declaration on Future Generations” from the first United Nations Summit of the Future, a meeting convened before the main high-level ministerial meeting of the General Assembly.

While these consensus documents recommend adopting a human rights approach to the design, development, and deployment of technologies, and emphasize the importance of inclusion and fairness regarding specific population groups, mention of the needs and support of older adults, older women, or aging populations in general is scarce. As one example, in the outcome documents from the Summit of the Future, “ageing populations” are mentioned just once in fifty-six pages of text—and not in reference to emerging technologies. This was manifestly evident at the Summit of the Future, where areas of focus included: (1) reforms to global governance and challenges associated with multilateralism in peace and security, (2) efforts to facilitate ideas of inclusive innovation to address the digital divide, and (3) goals to foster an efficient sustainable global system for both youth and future generations.

The omission of aging populations ignores shifts in population dynamics and neglects the enormous societal and economic contributions that can be harnessed if emerging technologies are designed to meet the needs of aging populations.

Unpacking the global megatrend

Across a range of development levels and geographic locations, societies around the world are seeing their populations getting older—a demographic shift often described as population aging. What was once a trend primarily found in highly developed economies is now—thanks to advances in twenty-first century scientific research, medical discovery, technological innovation, and development progress—true everywhere: average life expectancy has increased across the globe. Available data underscores just how rapid this demographic shift is. In 2020, the number of individuals aged sixty years and older outnumbered children younger than five for the first time in history. The World Health Organization projects that approximately 1.4 billion people will be aged at least sixty years old by the end of this decade.

The pace of demographic change, however, is uneven and largely driven by the Global South and low- and middle-income countries (LMICs). Specifically, a 2022 AARP and Economist Impact report (to which one of the authors contributed) reveals that, by 2050, the greatest rates of growth for adults age sixty-five years and older will be in LMICs, especially Sub-Saharan African nations. Meanwhile, Asian countries will contribute over 70 percent of the global increase in aging populations. Moreover, growth in this population segment is projected to be 2.5 times greater in LMICs than that experienced by high-income countries (HICs) that are already experiencing the societal and economic transformations brought forth by these shifts. Fortunately, in an effort to fully address and harness the benefits from aging populations, HICs are “mainstreaming aging” via the development and implementation of Action Plans on Aging, which serve as multi-year, whole-of-government roadmaps wherein aging is embedded into policies and programs across government. Importantly, as it relates to technology and its ubiquity in daily life, many of these nationwide plans—from Singapore’s Action Plan for Successful Ageing 2023 to New Zealand’s Better Later Life—recognize the importance of digital inclusion achieved via programs, policies, and services to improve the quality of life and social inclusion of older adults.

Accompanying these shifts to older populations is a well-documented range of implications for how society functions and operates. These include (1) recommendations on how cities can be reimagined to address the global megatrends of population aging and urbanization, (2) changes to the makeup and composition of the workforce, and (3) shifting family dynamics resulting in higher numbers of informal caregivers and individuals requiring care. Acknowledging and, in turn, harnessing these societal changes is vitally important in light of analyses revealing that the fifty-and-over population contributed close to 34 percent (or $45 trillion) of global gross domestic product in 2020.

The acceleration of these global demographic shifts is coming amid the rapid proliferation of emerging technologies such as generative AI, which have profound potential to support older adults. Possible benefits include boosting societal and economic participation and enhancing quality of life—if universal standards and principles ensure the inclusion of older adults’ needs in the design, development, and dissemination of these technologies.

Why these global instruments matter for aging populations

While these instruments devote little time to aging populations, closer inspection reveals that core principles around the use and dissemination of emerging technologies hold great promise for older adults, including:

  • Inclusive design and accessibility: Due to varying levels of digital literacy, sensory abilities, and mobility, it is essential to ensure the design of emerging technologies accounts for the needs of older adults. It is crucial to consider and incorporate accessible interfaces, simplicity of controls, and personalized options that ensure the accommodation of individual preferences and limitations.
  • Digital literacy initiatives and aging populations: Putting aging populations at the center of digital literacy initiatives will be necessary to bridge the gap between development of fast-paced AI technologies and their use by older people, especially in underserved areas.
  • Data protection and security for older adults: Relatively lower levels of technological sophistication, paired with empirical evidence that older adults are at higher risk for scams and frauds, mean that emerging technologies must take extra care to ensure the security of aging populations. Protections and security features should be user-friendly and easily understood for a spectrum of users across both age and digital literacy.

Cementing the ethos of “trustworthy AI”

The global aging population will comprise an increasing share of the consumer base for emerging technologies in the decades ahead. Meeting their needs through inclusive design and other considerations should be viewed as a business imperative by those bringing these technologies to market.

At the same time, inclusion and accessibility—with clear benefits for groups that today are on the wrong side of the digital divide—can also help address skepticism around these advances, particularly in the realm of AI. Older adults are often overlooked and marginalized in society due to ageism, or through misperceptions about their level of technological sophistication. Technology-based efforts to curb this marginalization can show how incorporating principles of trustworthy AI can enhance social inclusion, advance human rights, and harness the potential of human capital across the course of life. This will help foster an equitable digital economy in line with AI global guidelines about justice and balance and should be an essential factor of sustainable AI integration.

Fixing policymakers’ blind spot

Despite broad awareness of the demographic shift leading to aging societies globally, recent global instruments on emerging technologies rarely, if at all, acknowledge older people, neglecting an important and growing segment of the population.

This is a glaring blind spot among policymakers given the current and future economic and societal contributions of older adults. It also overlooks the possibilities for enhancing those contributions with the implementation of responsible, inclusive, and ethical emerging technologies such as generative AI.

The global megatrend of population aging is happening in parallel with groundbreaking innovations that will change our societies and economies. It is vital that future global consensus documents on these emerging technologies include the perspectives and voices of older adults.


Vijeth Iyengar is a nonresident senior fellow with the Atlantic Council’s GeoTech Center. The views reflected in the article are the author’s views and do not necessarily reflect the views of his employer.

Gunay Kazimzade is a PhD. researcher at Technical University of Berlin, a senior artificial intelligence consultant at Mercedes-Benz Consulting GmbH, and a member of the GeoTech Center’s AI Connect program in partnership with the US Department of State. The views reflected in the article are the author’s views and do not necessarily reflect the views of her employer.

Further Reading

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Aging populations are being ignored in global tech agreements. That comes at a cost. appeared first on Atlantic Council.

]]>
Rodriguez and Geurts promote the need for rapid adoption of new defense software on Building the Base podcast https://www.atlanticcouncil.org/insight-impact/in-the-news/rodriguez-and-geurts-promote-rapid-adoption-of-defense-software-on-podcast/ Mon, 20 Jan 2025 17:00:00 +0000 https://www.atlanticcouncil.org/?p=820112 On January 20, Stephen Rodriguez, senior advisor at Forward Defense and director of FD's Commission on Software-Defined Warfare was a featured guest on a podcast hosted by Hondo Geurts, commissioner on the Commission on Software-Defined Warfare.

The post Rodriguez and Geurts promote the need for rapid adoption of new defense software on Building the Base podcast appeared first on Atlantic Council.

]]>

On January 20, Stephen Rodriguez, senior advisor at Forward Defense and director of FD’s Commission on Software-Defined Warfare was a featured guest on the podcast Building the Base, hosted by Hondo Geurts, commissioner on the Commission on Software-Defined Warfare. The episode, entitled “Looking Ahead: National Security in a New Administration with Nadia Schadlow and Stephen Rodriguez,” focused on the need for the Department of Defense to accelerate pathways to adoption of cutting-edge technologies, the crafting of an effective National Security Strategy, and the potential benefits of utilizing innovative technologies to reform the department. The Commission on Software-Defined Warfare’s upcoming final report was highlighted.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

Forward Defense’s Commission on Software-Defined Warfare aims to digitally transform the armed forces for success in future battlefields. Comprised of a distinguished group of subject-matter and industry commissioners, the Commission has developed a framework to enhance US and allied forces through emergent digital capabilities.

The post Rodriguez and Geurts promote the need for rapid adoption of new defense software on Building the Base podcast appeared first on Atlantic Council.

]]>
Lord and Sweatt advocate for rapid adoption of cutting-edge defense software in DefenseNews https://www.atlanticcouncil.org/insight-impact/in-the-news/lord-and-sweatt-advocate-for-adoption-of-cutting-edge-defense-software-defensenews/ Mon, 16 Dec 2024 17:00:00 +0000 https://www.atlanticcouncil.org/?p=820282 On December 16, Ellen Lord and Tyler Sweatt of Forward Defense's Commission on Software-Defined Warfare published an article in DefenseNews on how the US Defense Department and its allies ought to approach software-defined warfare.

The post Lord and Sweatt advocate for rapid adoption of cutting-edge defense software in DefenseNews appeared first on Atlantic Council.

]]>

On December 16, Ellen Lord and Tyler Sweatt of Forward Defense‘s Commission on Software-Defined Warfare published an article in DefenseNews on how the US Defense Department and its allies ought to approach software-defined warfare and the urgent need for rapid adoption and delivery of cutting-edge defense software to empower the warfighter. They promoted the work of the Commission on Software-Defined Warfare and its forthcoming final report.

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

Forward Defense’s Commission on Software-Defined Warfare aims to digitally transform the armed forces for success in future battlefields. Comprised of a distinguished group of subject-matter and industry commissioners, the Commission has developed a framework to enhance US and allied forces through emergent digital capabilities.

The post Lord and Sweatt advocate for rapid adoption of cutting-edge defense software in DefenseNews appeared first on Atlantic Council.

]]>
Advancing US national security through Middle East AI negotiations  https://www.atlanticcouncil.org/blogs/menasource/advancing-us-national-security-through-ai-mena/ Mon, 09 Dec 2024 16:58:26 +0000 https://www.atlanticcouncil.org/?p=812489 Leading negotiations around global AI standards can help the United States manage security risks and ensure that emerging frameworks align with its values and interests.

The post Advancing US national security through Middle East AI negotiations  appeared first on Atlantic Council.

]]>
The rise of artificial intelligence (AI) has profound implications for economic growth, security, and governance. As global AI dialogues progress, the incoming Trump administration can play a decisive role in shaping these discussions. Leading negotiations around global AI standards can help the United States manage security risks and ensure that emerging frameworks align with its values and interests. Falling behind in these discussions poses the risk of ceding leadership to competitors—particularly China, which is eager to influence the global AI landscape.

Discussions about advanced AI governance tend to focus on the United States and China. Leading AI developers such as OpenAI, Google, Anthropic, Meta, Microsoft, and xAI are all based in the United States. Although China has lagged behind the United States in frontier AI progress, its status as a global economic superpower makes it a natural part of the conversation. The United Kingdom also deserves mention for its role in hosting the world’s first AI Safety Summit and establishing the world’s most well-resourced AI Safety Institute. While AI security discussions often focus on these nations, emerging trends suggest that another region could become important for AI global security: the Middle East. 

Middle East investments in AI 

The Middle East has rapidly embraced AI, not just in research or consumer applications but by investing in the critical infrastructure that powers AI data centers. Countries such as Saudi Arabia, the United Arab Emirates (UAE), and Israel have recognized the strategic importance of AI and are dedicating significant resources to building the advanced data centers and hardware ecosystems required for its development. 

SIGN UP FOR THIS WEEK IN THE MIDEAST NEWSLETTER

Both Saudi Arabia and the UAE plan to double their data center capacity over the next few years, and these plans have been accompanied by more than $100 billion in funding for work relating to semiconductors, AI, and related fields. TheBusiness Times estimates that AI will contribute $96 billion to the UAE’s economy and $135 billion to Saudi Arabia’s. The UAE has also invested in AI developers, such as G42, which recently struck a $1.5-billion deal with Microsoft, and the Technology Innovation Institute, which recently open-sourced impressive models via its Falcon series. Israel, a recognized leader in technology, has similarly invested in high-powered data centers, partnering with companies such as Dell and NVIDIA to push the boundaries of what this technology can achieve.

Data center security: a national security priority

Possessing powerful data centers confers more than just economic benefits; it also brings significant responsibilities. Nations that host leading data centers will wield disproportionate influence over global AI governance dialogues. Although the United States dominates the global data center market, most of those data centers cannot be effectively used for advanced AI development. As the computational workloads needed to support advanced AI development and inference grow exponentially, the most advanced forms of AI will require new data centers. Thus, as Saudi Arabia, the UAE, and Israel invest more in cutting-edge infrastructure, they might play an increasingly significant role in shaping the global AI ecosystem. This makes it even more essential for the United States to deepen its engagement with these nations to develop shared standards and norms.

Data center security presents another critical challenge. Recent research, including a report by RAND, has highlighted the importance of protecting model weights—the parameters that encode the capabilities of an AI system. If adversaries steal these weights, they could fine-tune systems for malicious purposes or accelerate their AI research, exacerbating an already dangerous AI race. Model weights can be stolen during either the training stage (in which weights are actively updated and stored) or the inference stage (in which finalized weights could be accessed through side-channel attacks).Nations hosting powerful data centers must implement robust safeguards to protect against internal and external threats. A failure to secure these centers could lead to catastrophic consequences if malicious actors gain access to sensitive AI systems.  

An opportunity to shape regional AI security discussions

While the benefits of increasing US-Middle East AI cooperation are compelling, developing a constructive AI dialogue with leading players in the region comes with hurdles. Efforts to establish meaningful cooperation are complicated by the absence of formal diplomatic relations between Israel and Saudi Arabia, heightened regional tensions, and competition with China over influence in the Middle East and North Africa. Despite these challenges, the urgency of advancing appropriate AI security standards should compel the United States to act. With its track record for bold and creative diplomacy in the region, the incoming Trump administration will be well-equipped to lead these AI dialogues.   

One potential strategy the Trump administration could consider is adopting a phased approach that begins with a trilateral dialogue involving the United States, the UAE, and Israel. Both the UAE and Israel have strong ties to the United States and are well-positioned to collaborate and share best practices on issues like data center security, model weight protection, and the development of AI security standards. These early discussions could focus on aligning standards for AI infrastructure security and exploring cooperative research opportunities to mitigate risks. Nations with emerging AI industries, such as India, could also be meaningfully included in these dialogues, either through existing cooperation mechanisms like I2U2 or a new structure developed by the Trump administration.  

In the longer term, these dialogues could expand to include Saudi Arabia and other countries in the region. While formal diplomatic relations between Israel and Saudi Arabia remain elusive, efforts toward normalization could create opportunities for discussion. Even without formal agreements, quiet diplomatic efforts could pave the way for Saudi participation. 

Ultimately, this cooperation could evolve into a broader regional framework for AI and global security. This would allow the United States and its Middle East partners to present a more cohesive front in global AI discussions, countering the influence of adversaries. Aligning AI security standards would also strengthen US alliances with regional players, fostering economic and technological interdependence at a time when competition with China over AI dominance is intensifying.  

As the world’s leader in AI innovation, the United States can capitalize on its position of strength to shape the global conversation. By adopting a phased approach that begins with trilateral collaboration and gradually expands to include broader regional participation, the United States can lead the way in shaping a safer AI future that aligns with its strategic interests. This is not just an opportunity but an imperative, as the decisions made today could fundamentally shape the trajectory of AI development.  

Akash Wasil is a senior research associate at the Center for International Governance Innovation (CIGI), specializing in the intersection of AI and national security. Prior to his focus on AI policy, he was a National Science Foundation-funded PhD student at the University of Pennsylvania, where he researched innovative applications of technology and machine learning in mental healthcare. Akash earned his BA from Harvard University and graduated Phi Beta Kappa.

The post Advancing US national security through Middle East AI negotiations  appeared first on Atlantic Council.

]]>
Air force for hire https://www.atlanticcouncil.org/commentary/podcast/air-force-for-hire/ Tue, 03 Dec 2024 17:03:47 +0000 https://www.atlanticcouncil.org/?p=810931 Host Alia Brahimi chats with mercenaries expert Alessandro Arduino, a top China analyst.

The post Air force for hire appeared first on Atlantic Council.

]]>

In Season 2, Episode 7 of the Guns for Hire podcast, host Alia Brahimi chats with mercenaries expert Alessandro Arduino, who is also a top China analyst. They discuss recent seismic leaps in Unmanned Aerial Vehicle (UAV) technology and how the cost of drone defense is a magnitude greater than drone offense.

They explore how certain aggressive PMCs are marrying drone capabilities with their mercenary offerings, raising the specter of air forces for hire. Arduino describes a near future where autonomous drones run by AI systems remove humans from the decision-making loop. He also talks us through China’s developing thinking around privatized force, with some in China now pushing for more forceful security around the Belt-and-Road Initiative and the Chinese nationals constructing it. 

“We have already boots on the ground, meaning an army for hire… So the next step will be to have an air force for hire. Of course, sometimes reality is faster than fiction.”

Alessandro Arduino, mercenary expert and China analyst

Find the Guns For Hire podcast on the app of your choice

About the podcast

Guns for Hire podcast is a production of the Atlantic Council’s North Africa Initiative. Taking Libya as its starting point, it explores the causes and implications of the growing use of mercenaries in armed conflict.

The podcast features guests from many walks of life, from ethicists and historians to former mercenary fighters. It seeks to understand what the normalization of contract warfare tells us about the world we currently live in, the future of the international system, and what war could look like in the coming decades.

Further reading

Through our Rafik Hariri Center for the Middle East, the Atlantic Council works with allies and partners in Europe and the wider Middle East to protect US interests, build peace and security, and unlock the human potential of the region.

The post Air force for hire appeared first on Atlantic Council.

]]>
How NATO learns and adapts to modern warfare https://www.atlanticcouncil.org/content-series/ac-turkey-defense-journal/how-nato-learns-and-adapts-to-modern-warfare/ Tue, 03 Dec 2024 14:00:00 +0000 https://www.atlanticcouncil.org/?p=807268 One of the main strengths of NATO is it's ability to continuously develop and improve based on the lessons learned by the complexities of modern conflicts.

The post How NATO learns and adapts to modern warfare appeared first on Atlantic Council.

]]>
Russia’s illegal annexation of Crimea in 2014 and the full-scale invasion of Ukraine in 2022 have had strategic consequences far beyond the region, showcasing the complexities of modern conflicts, where conventional battles are intertwined with cyber warfare, information operations, and hybrid tactics.

No doubt, Russia’s actions have reshaped the global geopolitical landscape. Yet NATO’s capability to adapt has been central and the basis for its sustained relevance and success as an alliance since its founding in 1949. And now, seventy-five years later, NATO continues to lead in learning and evolving to address emerging challenges in the future operating environment.

As with past conflicts and Russia’s evolving war against Ukraine, NATO’s mechanisms for lessons learned and transformation serve as a critical means to adapt and prepare the Alliance to counter every aggression in the future.

But how does NATO, with thirty-two member nations, learn lessons? While NATO’s internal learning process is informed by its members and their own experiences, the situation in Ukraine now demands the ability to learn lessons from others’ experiences. In short, this external learning process is achieved by Alliance-wide lessons sharing and collecting through a dedicated NATO lessons-learned portal. These national observations and experiences are collected, evaluated, consolidated, and then transformed into actions to be applied in NATO’s activities to transform, adapt, and prepare for the future.

The organization’s military learning and adaptation process is strategically led by Allied Command Transformation (ACT) in the United States in Norfolk, Virginia, with a dedicated subordinate command as the Alliance’s center for enabling and supporting the NATO lessons-learned policy and capability: the Joint Analysis and Lessons Learned Centre (JALLC) in Lisbon, Portugal. By systematically collecting reports from open sources, partners, and allies, and sharing them in the NATO lessons-learned portal, all member nations can benefit. A dedicated analysis team gleans insights from the vast amount of data to enhance NATO’s understanding of Russia’s war against Ukraine, and thus, where applicable, inform and influence the development of new strategies, doctrines, and training programs. Recently, JALLC is also benefiting from inputs delivered by a Ukrainian nongovernmental organization focused on analysis and training.

NATO’s decision to establish the NATO-Ukraine Joint Analysis Training and Evaluation Centre (JATEC) will soon play another crucial role in ensuring that NATO remains informed, agile, adaptable, and effective in addressing contemporary and future security challenges. JATEC thus represents a significant commitment by allies not only to improve the interoperability and effectiveness of Ukrainian forces but also to enhance the Alliance’s capability by learning and applying lessons.

The lessons-learned process is also supported by various national NATO-accredited Centres of Excellence (COE). These COEs, under the coordinating authority of ACT, specialize in various military areas of expertise, such as cyber defense, command and control, air power, medical support, etc.

Altogether, ACT with the JALLC in its overarching role, the contributions by the nations, and the NATO-accredited COEs with their specializations, create a comprehensive system for ensuring lessons are captured and disseminated to operational forces, fostering a culture of continuous improvement within NATO.

The basis of a successful alliance is a common understanding and principles, which are laid out in doctrines. Therefore, doctrine development is a critical component of NATO’s adaptation and transformation process. By continuously updating doctrine based on real-world experiences and lessons learned, NATO ensures that its operational principles remain robust and effective in the face of evolving threats. With regard to Russia’s war in Ukraine, Russia’s use of hybrid warfare tactics, which combine conventional military force with irregular tactics, and cyber and information operations, has prompted improvements in NATO doctrine governing how NATO shares intelligence and counters disinformation campaigns to strengthen NATO’s response toward hybrid warfare tactics.

Furthermore, lessons from Russia’s war against Ukraine underscore the importance of agile, integrated command and control systems capable of coordinating operations across multiple domains: land, sea, air, cyber, and space. NATO needs command and control structures that are flexible, resilient, and capable of rapid decision-making. Advanced technologies such as artificial intelligence and machine learning are being leveraged to enhance shared situational awareness and streamline decision-making processes to maintain an advantage.

Lessons learned will be injected into NATO exercises and training to generate high-fidelity training scenarios allowing NATO forces to “train as they fight.” Besides improving interoperability, certifying NATO forces, and demonstrating NATO’s fighting credibility, NATO exercises also challenge training audiences to face operational dilemmas that reflect the complexities of modern warfare. JALLC reports summarizing lessons from the war in Ukraine are being used by the Joint Force Training Centre (JFTC) and Joint Warfare Centre (JWC) to update and improve NATO exercises. The increased use of drones, private-sector support for military operations, the battle for both cognitive and information superiority, sustainment, and civilian resilience are key features, which have already informed changes in NATO exercises to ensure that NATO forces are better prepared to operate in complex and dynamic environments.

ACT, as the strategic warfare development headquarters, also looks into the future. Studies focus on widely debated topics including, for example, the future operating environment and the future force structure. Other topics include the future of tanks and attack helicopters, small-drone warfare, vulnerabilities of fleets and ports to maritime drones, and the protection of critical infrastructures against long-range strikes.

NATO’s commitment and ability to continuously develop and improve ensures the Alliance’s enduring strength and cohesion. NATO is rapidly incorporating battlefield lessons into the transformation, adaptation, and preparation activities of the Alliance’s forces. ACT is key to this process, ensuring lessons reach operational forces at the speed of relevance.


General Chris Badia is NATO’s Deputy Supreme Allied Commander Transformation.

Explore other issues

The Atlantic Council Turkey Program aims to promote and strengthen transatlantic engagement with the region by providing a high-level forum and pursuing programming to address the most important issues on energy, economics, security, and defense.

The post How NATO learns and adapts to modern warfare appeared first on Atlantic Council.

]]>
Michael Groen writes an op-ed about securing AI labs in Real Clear Defense https://www.atlanticcouncil.org/insight-impact/in-the-news/michael-groen-writes-an-op-ed-about-securing-ai-labs-in-real-clear-defense/ Mon, 25 Nov 2024 14:41:25 +0000 https://www.atlanticcouncil.org/?p=809091 On November 21, Michael Groen, non-resident senior fellow at Forward Defense, authored an op-ed for Real Clear Defense arguing that the United States must focus on securing AI laboratories to protect its razor-thin advantage in the AI race with China. In his words, to address threats from China’s cyber-espionage and intellectual property theft, “the U.S. government, academia, […]

The post Michael Groen writes an op-ed about securing AI labs in Real Clear Defense appeared first on Atlantic Council.

]]>

On November 21, Michael Groen, non-resident senior fellow at Forward Defense, authored an op-ed for Real Clear Defense arguing that the United States must focus on securing AI laboratories to protect its razor-thin advantage in the AI race with China. In his words, to address threats from China’s cyber-espionage and intellectual property theft, “the U.S. government, academia, and the private sector need to take coordinated action.”

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

The post Michael Groen writes an op-ed about securing AI labs in Real Clear Defense appeared first on Atlantic Council.

]]>
Assessing US-China tech competition in the Global South https://www.atlanticcouncil.org/content-series/strategic-insights-memos/assessing-us-china-tech-competition-in-the-global-south/ Wed, 20 Nov 2024 18:30:00 +0000 https://www.atlanticcouncil.org/?p=807834 Discover how China is attempting to gain influence through technology competition in artificial intelligence (AI).

The post Assessing US-China tech competition in the Global South appeared first on Atlantic Council.

]]>
TO: Policymakers and Technology Policy Strategists

FROM: Hanna Dohmen

DATE: November 20, 2024

SUBJECT: Assessing US-China tech competition in the Global South

In July 2024, the Atlantic Council’s Global China Hub and Scowcroft Center for Strategy and Security convened experts and policymakers in a private workshop held under the Chatham House rule to discuss US-China technology competition in the Global South. This memo draws from insights gathered during the workshop to give policymakers a deeper understanding of how China is attempting to gain influence through technology competition in artificial intelligence (AI) in the regions and countries of the Global South and the opportunities for the United States to engage with the Global South.

Strategic context

AI presents significant economic development opportunities for countries in the Global South. Countries in Africa, Southeast Asia, Latin America, and elsewhere have already begun to capitalize on the opportunities presented by AI applications to help advance progress in critical industries like agriculture, education, and healthcare. A United Nations Economic Commission for Africa report projects that AI has the potential to expand Africa’s economy by USD 1.5 trillion. That figure is half of Africa’s gross domestic product (GDP) today and is projected to be realized if Africa can capture 10 percent of the global AI market by 2030.

Given its advantages in AI infrastructure and applications, the United States currently has an opportunity to leverage its private-sector leadership and its diplomatic presence to help meet the needs of the developing world. It is important, workshop participants emphasized, for the United States to utilize its tech diplomacy to build strong and sustained connections with countries in the Global South. Moreover, participants emphasized that the United States must pursue policies and objectives that will help bridge the digital divide, not exacerbate it.

Yet China’s ambitions and actions in the Global South introduce challenges. The United States and China are competing not only for who will lead in AI innovation but also whose values will guide AI applications around the world. In the AI context, China is using the same playbook that it has used in other technology areas. Namely, it is employing a multifaceted approach to promoting AI development in the Global South, ultimately to its own economic and diplomatic advantage. Through initiatives like the Digital Silk Road, China is providing substantial investments in technology and infrastructure projects. Chinese technology firms, such as Huawei, ZTE, and SenseTime, have provided tens to hundreds of millions of dollars in financing and investment for various digital infrastructure projects, including fiber optic cables, hardware equipment procurement, surveillance cameras, and AI applications for public-sector digitization. China is also promoting its AI governance model in international forums, including within the United Nations (UN), the Group of Seventy-Seven (G77), and the BRICS grouping, in part to undermine Western approaches to digital governance.

Opportunities offered through AI in the Global South

AI holds significant promise for the Global South, offering new solutions for challenges in a wide range of sectors. AI tools already are being adopted across industries in the Global South including agriculture, healthcare, and education. AI applications are addressing challenges such as identifying and detecting crop diseases, enhancing forest monitoring, combating antimicrobial resistance, and enhancing science education.

Scholars suggest that the excitement surrounding AI in the Global South exceeds that in the United States or Europe, primarily because AI is seen as a necessary tool to solve critical development challenges, not just a means of improving economic efficiency. AI’s broad applicability makes it particularly attractive for developing countries, many of which see it as a critical tool for achieving sustainable development goals (SDGs).

The Global South’s enthusiasm offers the United States an opportunity to collaborate on AI for development. Despite China’s growing presence in the AI space, the United States still has most of the world’s leading AI firms and AI applications, which emphasizes the critical need for private-sector investment in and collaboration with countries in the Global South. Moreover, the fact that US companies are leaders in AI differentiates the position that the United States finds itself in today compared with the 5G competition vis-à-vis China. In the 5G competition, China is the global leader, largely due to its aggressive state support, substantial investments in infrastructure, and rapid deployment capabilities that have outpaced US efforts. In the AI arena, the situation is much different.

Competing approaches

There is a stark contrast between China’s approach to fostering AI development in the Global South and that of the United States. These distinctions highlight some key challenges for policymakers.

China’s narrative

China has been actively promoting its vision of AI governance through multilateral institutions like the UN. One mechanism that China is relying on is the G77, the largest coalition of developing countries in the UN. China has been pushing its ideas of AI governance through the G77, BRICS, and other multistakeholder mechanisms—while it refuses to sign global agreements presented at other forums, such as the Seoul AI Safety Summit.

Beijing has sought to shift the focus of AI governance discussions toward capacity-building initiatives, sidelining more robust discussions on ethical standards. Workshop participants noted that China largely decouples the capacity-building conversation from the governance conversation, arguing that governance—as pursued by the West—is an obstacle to development in the Global South because it increases regulatory costs, imposes barriers to entry, and takes the focus away from capacity building. Participants highlighted that China’s approach undermines civil society and focuses too much on capacity building and too little on governance. Moreover, China is positioning itself to shape the global AI ecosystem according to its own terms, which risks undermining international norms and values on privacy, transparency, and accountability.

China is promoting its collaboration with Global South countries through other multilateral mechanisms and summits. For example, in September 2024, China, in collaboration with fifty-three African nations, adopted the Beijing Declaration on Jointly Building an All-Weather China-Africa Community with a Shared Future for the New Era, committing to accelerating technological development and innovation across the continent. This declaration highlights China’s strategy to embed its AI governance model within global dialogues. It outlines an agreement between African nations and China to together adopt measures that emphasize both the development and security of AI and form relevant international governance frameworks through the UN.

Other joint mechanisms promoted by Chinese government agencies have also called for increased cooperation with the Global South. For example, in April 2024, the Cyberspace Administration of China (CAC) pushed for the establishment of a China-Africa AI policy dialogue, making use of the UN to play a more central role in international AI governance, promoting talent exchanges, and increasing collaboration among universities and research institutions.

In addition to its diplomatic efforts, Chinese tech giants like Alibaba are deepening their footprint in countries like Mexico, Malaysia, Thailand, and the Philippines. By building the physical infrastructure and increasing its cloud services offerings, Alibaba aims to increase the adoption of its AI products and services by businesses and governments in these countries. US cloud service providers, such as Microsoft, Amazon, and Google, do the same: One of the questions raised by participants is whether these US companies can compete on price compared with their Chinese counterparts.

Moreover, as Chinese companies integrate telecommunications infrastructure with cloud computing infrastructure and AI applications, participants worry about whether US companies will effectively compete. For example, a Chinese company might control not just the data centers and telecommunications networks, but also the software and AI applications on top of that infrastructure. The concern is that Chinese companies could leverage their existing telecommunications infrastructure in the Global South to push countries to adopt their AI applications. Participants also noted that China is offering the model weights—the numerical values that determine how input data is transformed into predictions—to every country to allow those countries to have the ability to develop AI systems at the frontier on their own, without relying on the AI systems already developed by companies.

The US narrative

Meanwhile, participants noted that while US infrastructure and hardware providers are at the cutting edges of AI application development now, factors such as governance requirements may hinder US competitiveness in the Global South. In contrast to China, the United States is not only offering countries in the Global South the infrastructure, hardware, and models but doing so while emphasizing responsible AI governance and the importance of ethical standards through multilateral stakeholder mechanisms.

The United States has spent a great deal of effort to build a global governance architecture that promotes the safe, secure, and responsible use of AI. The United States asserts that fostering digital sovereignty requires a robust governance framework that ensures safety and accountability in AI applications. As such, the United States has advocated for a governance model based on security and responsibility at international forums such as the G7, the Bletchley Park AI Safety Summit, and the AI Seoul Summit. Through these mechanisms, the United States and other allied nations have developed frameworks and guiding principles for developing safe, secure, and trustworthy AI systems worldwide that seek to prevent applications that undermine democratic values, facilitate terrorism, enable criminal misuse, or post other substantial risks.

Workshop participants also emphasized that while the United States disagrees with China’s approach to AI within the UN system, the US government continues to believe that the UN must play a consequential role in AI governance. Participants noted that China’s actions raise concerns because they promote a centralized regulatory framework that does not align with the values of the United States and its allies. Therefore, the United States promotes and supports multilateral ethical AI frameworks aligned with democratic values like the UN Educational, Scientific and Cultural Organization’s Recommendation on the Ethics of Artificial Intelligence and led and adopted a UN General Assembly resolution in March 2024 on “seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development.” Participants also noted that the United States must be proactive in countering China’s growing influence in these global governance conversations, ensuring that AI standards reflect democratic values and respect for human rights. Leading the creation of AI standards with countries in the Global South will be essential for establishing a global framework that aligns with the values of the United States and its partners.

US firms such as Google and Microsoft are at the frontlines of bringing AI infrastructure, models, and applications to the Global South. For example, in May 2024, Microsoft, along with United Arab Emirates-based firm G42, announced a USD 1 billion investment in Kenya that aims to work with local partners to develop local-language AI model development and research and an East Africa Innovation Lab focused on AI skills training, and collaborate with the government of Kenya to support safe and secure services across East Africa.

Participants, however, also raised a concern that US national security objectives are at times at odds with US competitiveness. Specifically, US regulations focused on strengthening national security may inadvertently hinder the nation’s competitiveness in the Global South. For instance, proposed regulations requiring the disclosure of large AI model training using US Infrastructure-as-a-Service (IaaS) providers to the US government could raise privacy concerns and create disadvantages in competing in the Global South compared to China’s approach.

Participants highlighted that collaboration between the United States and the Global South should extend beyond mere technology transfer; it should also encompass the establishment of innovation ecosystems. This approach entails fostering research collaborations, nurturing local talent, and ensuring that AI tools are tailored to local contexts. Ensuring that AI models are culturally and contextually relevant will be crucial for fostering trust and long-term success in AI partnerships. For instance, only 11 percent of global datasets are sourced from Africa, with countries like Egypt contributing a disproportionate share. This lack of representation hampers AI development, as models built on predominantly Western datasets often fail to consider regional nuances and hinder effective integration of AI solutions.

The United States has an opportunity to play a critical role in aiding countries in the Global South to create more representative datasets, thereby enhancing the applicability and reliability of AI systems to local context and needs. By focusing on the individual needs and goals of countries in the Global South, the United States will be better positioned to build trust and long-term partnerships, which will be critical in US-China technology competition.

Metrics to measure competition

One of the key questions discussed in the workshop is who is better positioned to lead in AI in the Global South—the United States or China? Key metrics to understand the state of competition might include:

  1. Investment trends: Analyzing the volume and value of transactions in AI, utilizing platforms like Crunchbase or PitchBook to assess who is investing where and how much. This approach has been used in the past by researchers to better understand China’s domestic and international initiatives to financially support its AI development.
  2. Infrastructure development: Examining the nature and scope of infrastructure projects funded by both the United States and China, particularly in relation to the Digital Silk Road.
  3. Data representation: Assessing the availability and utilization of local datasets for AI model training, especially in underrepresented regions.
  4. Partnership agreements: Evaluating the number and scope of bilateral and multilateral agreements focused on AI between countries in the Global South and either the United States or China. This may include further examining the multilateral stakeholder agreements advocated for by both countries in the United Nations.
  5. Public sentiment: Understanding how citizens in these countries perceive US versus Chinese technologies and what values they prioritize in their AI partnerships.
  6. Pricing performance: Assessing the costs of AI development and deployment offered by both the United States and China.

Conclusion

To effectively engage with the Global South in AI development, the United States should prioritize the creation of enduring partnerships that foster innovation and governance. This engagement must transcend mere transactional relationships and should not solely be viewed through the lens of geopolitical competition with China. Such partnerships should promote inclusive growth, ethical AI governance, and support for countries in harnessing AI for sustainable development.

Navigating the complex trade-offs between national security and competitiveness will be essential for the United States to ensure that its AI tools and standards remain attractive to countries in the Global South. Ultimately, success will hinge on the United States presenting a compelling vision for AI that resonates with the needs of developing nations while upholding values that ensure a fair and inclusive AI future.

About the author

Acknowledgements

The Atlantic Council would like to thank its partner, Tides Foundation, for supporting the Council’s work on this publication.

The Scowcroft Center for Strategy and Security works to develop sustainable, nonpartisan strategies to address the most important security challenges facing the United States and the world.

Global China Hub

The Global China Hub researches and devises allied solutions to the global challenges posed by China’s rise, leveraging and amplifying the Atlantic Council’s work on China across its fifteen other programs and centers.

The post Assessing US-China tech competition in the Global South appeared first on Atlantic Council.

]]>
AI safety concerns transcend borders. To meet the challenge, US efforts need to go global. https://www.atlanticcouncil.org/blogs/new-atlanticist/ai-safety-concerns-transcend-borders-to-meet-the-challenge-us-efforts-need-to-go-global/ Fri, 15 Nov 2024 19:45:05 +0000 https://www.atlanticcouncil.org/?p=807074 Will the United States work with partner nations to take the necessary steps at the upcoming International Network of AI Safety Institutes meeting in San Francisco?

The post AI safety concerns transcend borders. To meet the challenge, US efforts need to go global. appeared first on Atlantic Council.

]]>
How will the incoming Trump administration approach artificial intelligence (AI) governance? The answer to that question depends in part on what the outgoing administration does with its remaining time in office.

This month, the US government is moving forward with key international engagements to shape consensus on shared principles of technology development. The US State Department and US Commerce Department will host the first-ever meeting of the International Network of AI Safety Institutes in San Francisco on November 20-21. This event, which aims to shape the future of AI with global cooperation at its core, will assemble technical experts to align priority research topics, increase knowledge sharing, and improve transparency on AI safety efforts in preparation for the Paris AI Action Summit in February 2025.

The need to verify the robustness and assurance of AI systems continues to grow as the technology rapidly proliferates and cases of model failures spread. In two examples, a New York City chatbot recently offered unlawful advice to small business owners, and several major large language models have revealed protected information through novel “jailbreaking” techniques. Developer codes of conduct and guiding principles call for independent evaluations and risk mitigation reporting to help identify and prevent similar problems, but such practices remain uncoordinated and inconsistent across the AI industry.

Since the first AI Safety Summit one year ago at Bletchley Park in the United Kingdom, a surge in global efforts to make AI safer has turned nascent concerns into a powerful, global movement. Today’s network of individuals and institutions working to improve the trustworthiness of new models spans a diverse range of stakeholders with several promising agreements already in place.

Such an ecosystem offers promising potential for safety experts to build awareness of emerging threats and collaborate to reach shared solutions. Yet this emerging network must address some key challenges to establish a sustainable foothold in the global conversation. The AI governance space already includes layers of interconnected institutions with overlapping mandates, while information asymmetries and contrasting motivations across stakeholders makes it challenging to reach consensus, even on broad goals. The International Network of AI Safety Institutes must work toward several crucial objectives as it ventures into these waters.

Short-term priorities

Establish a narrow focus on technical safety measures

The Network should first craft a mandate that remains clear of contentious policy debates, instead focusing on the multitude of technical-based solutions, including model evaluation methods, red-teaming practices, and other safety mechanisms. Such a scope of work would allow the group to avoid political pitfalls and establish a broader body of contributors to improve information sharing. The Network should advise and assist policy-making institutions with technical reporting, but it should preserve its independent integrity by operating as a community of technical experts rather than regulators.

Build capacity in AI safety research

The November event will convene existing AI safety institutes from the United States, United Kingdom, Japan, Canada, and Singapore, with additional representatives from Australia, France, Kenya, South Korea, and the European Union. The growing list indicates a strong potential for the group’s future, but it remains heavily weighted toward higher-income countries in the West, limiting its impact.

Moving forward, the Network should establish outreach channels to researchers and developers in regions without existing safety organizations, particularly in Latin America and Africa, while providing technical resources to policymakers in those countries. These efforts could expand the use of effective evaluation and auditing methods, improving the analysis of model performance before market delivery.

Share updates from industry on frontier AI safety commitments

Despite the current attention on responsible AI, global regulatory efforts remain fragmented and uncoordinated, making voluntary efforts by major developers some of the most impactful guardrails in the field. A cohort of major AI companies pledged increased transparency and dedication to risk management at the Seoul convention in May 2024, and leading US firms have since established the Frontier Model Forum to publicly share research results. The November convening should include a comprehensive report on their current safety projects and a roadmap for continued research.

Long-term goals

Advise standards-setting bodies

Network participants will likely influence AI standards for their respective governments and will therefore consolidate a wealth of knowledge to shape common technical guidelines. While this body lacks the mandate to produce those standards on a global scale, they should advise international organizations already working on these frameworks. Similar organizations have done so in the past, such as the National Institute of Standards and Technology’s collaboration with the International Electrotechnical Commission and the International Organization for Standardization on cybersecurity and privacy, helping to align international approaches and enhance cross-border interoperability.

Establish dialogue channels with major AI policy organizations

Similarly, the Network should engage with other institutions, such as the Global Partnership on Artificial Intelligence and the United Nations’ AI Advisory Body, to ensure their policy recommendations are informed in design and effective in implementation. From cities to multilateral forums, the number of organizations including AI in their scopes of work continues to grow. An integrated engagement process could provide the foundation for consistency and interoperability across the international regulatory spectrum, preventing policy-related conflicts and roadblocks.

Expand membership

While preserving a smaller group could certainly improve expediency and alignment when making decisions, it limits the scope and scale of the Network’s impact. Kenya’s inclusion in the November event indicates a wise intention to expand membership to underrepresented regions. Future iterations of this event should continue to convene more diverse participants, including subject matter experts from academia and civil society to ensure that key sectors with deep knowledge bases can help accelerate outcomes and drive responsible AI innovation and adoption.

Perhaps the most pressing question is whether to invite a Chinese delegation in the future. While including China in policy-related efforts may prove a bridge too far for some members, the Network’s narrower scientific and technical mandate could allow it to include Chinese AI experts in future conversations, particularly considering Beijing’s increased focus on AI safety research. The group will need to weigh the potential benefits of the technical expertise that the leading minds in China’s AI space can offer with the risks of working with a misaligned governance system.

Even as the United States undergoes a political transition in the new year, and even if the mission of the AI Safety Institute changes, AI safety will remain an important and far-reaching issue. The convening later this month is a critical opportunity to establish a clear direction for AI safety research that is bigger than any one country.


Will LaRivee is a resident fellow at the Atlantic Council GeoTech Center.

The post AI safety concerns transcend borders. To meet the challenge, US efforts need to go global. appeared first on Atlantic Council.

]]>
Senator Mark Warner on the top five risks for the next administration to watch https://www.atlanticcouncil.org/blogs/new-atlanticist/senator-mark-warner-on-the-top-five-risks-for-the-next-administration-to-watch/ Fri, 11 Oct 2024 15:42:30 +0000 https://www.atlanticcouncil.org/?p=799686 At an Atlantic Council Front Page event, the senator outlined those risks, ranging from competition with China to the ongoing crisis in Venezuela.

The post Senator Mark Warner on the top five risks for the next administration to watch appeared first on Atlantic Council.

]]>
US Senator Mark Warner (D-VA) wants to “redefine national security.”

Warner, the chairman of the Senate Intelligence Committee, spoke at an Atlantic Council Front Page event on Thursday, co-hosted by RBC Capital Markets, as part of a series of events on the 2024 elections bringing in speakers from both parties.

National security can “no longer” be determined by “who has the most tanks and planes and guns,” Warner argued. “National security now is a technology race with China.”

That race and competition with China—taking place in the economic, energy, technology, and minerals domains—is just one of the risks the next president will have to face during their time in office, Warner explained.

Below are highlights from the conversation, moderated by CNBC Anchor and Senior National Correspondent Brian Sullivan, where the senator outlined five risks he said the future administration would need to watch closely.

Risks abound

  • In addition to the tech race with China, Warner said that the ongoing political crisis in Venezuela needs more attention. “If we don’t make some changes, we could see another mass of Venezuelans leaving, which would put additional strain on the border,” he said. After Nicolás Maduro again declared victory in the country’s latest elections despite ample evidence that he lost handily, Warner said that it might be time to set up a new international contact group for Venezuela. 
  • Third, the next administration will need to prove it cares about Africa, Warner said, especially as the crisis in Sudan continues. “There are more people dying every day in Sudan than Gaza, Lebanon, and Ukraine,” he said. “A little bit of effort” on the part of the United States “would go a heck of a long way.” 
  • Fourth, Warner said that wars in the Middle East present their own risks, but there’s also a “geopolitical move” underway in the region in which Middle Eastern countries are transforming their economies. This provides an opportunity as those countries are considering turning away from China and Russia and instead to the United States for its technology and partnerships. 
  • Finally, Warner pointed to the risk of waning partnerships with Pacific island countries that are playing an “increasing role” on the world stage because of their access to and control of rare earth minerals that lie below the ocean.  
  • “We have ignored these nations,” Warner warned, saying that the United States is now trying to increase connectivity with them by supporting the installation of undersea cables. “This is literally pennies on the dollar in terms of American investment,” he said, but it is the “next frontier.”

Watch the event

The China challenge

  • To compete with China and counter its dominance in processing rare earth minerals, Warner said that the United States needs to get its “act together.” “We’re also going to have to do the processing,” he said, adding that would require new facilities to do so, considering how the United States often sends its extracted minerals to China for processing. “Even the Democrats have got to realize we’ve got to build stuff again in this country.” 
  • “Even if we’re not going to do all the processing here, we need to do it with our friends and allies around the world, and I think this is a huge opportunity,” he said. 
  • Warner argued that China’s Belt and Road Initiative is starting to leave participant countries dissatisfied. “The quality of the workmanship was pretty crummy in a lot of areas and still they’re deep in debt,” he said. 
  • That presents an opportunity for the United States and its partners to close deals—such as for building small modular nuclear reactors. But “we need our own regulatory process to move quicker,” for that to happen, and the Export-Import Bank and Development Finance Corporation need to “take a few more risks,” he said. 

Drawing the line

  • On artificial intelligence (AI), Warner cautioned that Chinese companies, specifically BGI Group, are exploring how the technology can be used to have an impact on biology, drawing data from DNA banks. “You combine AI and DNA mapping, and some of this gets spooky, in terms of like super soldiers,” he said. 
  • On whether the United States will pass some form of AI regulation, Warner cautioned “don’t hold your breath no matter which person wins” the presidency. “Do you do what the Europeans have overdone? We’ve done nothing. There is somewhere in the middle where I think [there’s] smart regulation.” 
  • Warner said that if Vice President Kamala Harris wins the presidency, she should look to not only tackle the China challenge but also ensure that Russian President Vladimir Putin is not successful in Ukraine. “We, along with our allies, have to draw the line against authoritarian regimes,” he said. 
  • The next president may possibly face a post-Putin Russia; Warner said he would like to see the country become more open, but “we have to be prepared for both circumstances,” he cautioned. “You could actually see Russia move further to the right or further authoritarian.” 

Katherine Walla is an associate director of editorial at the Atlantic Council.

Watch the full event

The post Senator Mark Warner on the top five risks for the next administration to watch appeared first on Atlantic Council.

]]>
Italy and UNDP: How the new AI Hub for Sustainable Development will strengthen the foundations for growth in Africa https://www.atlanticcouncil.org/blogs/new-atlanticist/italy-and-undp-how-the-new-ai-hub-for-sustainable-development-will-strengthen-the-foundations-for-growth-in-africa/ Fri, 04 Oct 2024 16:17:55 +0000 https://www.atlanticcouncil.org/?p=797111 The United Nations Development Programme and Italian government initiative aims to foster both innovation and sustainability in Africa.

The post Italy and UNDP: How the new AI Hub for Sustainable Development will strengthen the foundations for growth in Africa appeared first on Atlantic Council.

]]>
As artificial intelligence (AI) continues to advance at a rapid speed, the importance of building an inclusive digital and AI ecosystem that benefits everyone cannot be overstated. Nations must now work together to harness the power of AI for sustainable development, shaping a future where innovation serves the greater good, strengthens the social fabric, and fosters equality and democracy. This vision of inclusive AI development will be a central theme at the upcoming Group of Seven (G7) ministerial meeting scheduled in Rome on October 10, where Italian and African ministers and stakeholders will convene to discuss the future of technology and development.

Central to Italian Prime Minister Giorgia Meloni’s foreign policy is the Mattei Plan, which places African nations as equal partners at the forefront of Italy’s international agenda. This strategy aims to drive sustainable development across sectors, recognizing the role that AI and other emerging technologies can play in driving innovation and industrial growth.

Italy’s commitment to leveraging AI for sustainable development aligns with the longstanding mission of the United Nations Development Programme (UNDP), which operates in more than 170 countries and territories worldwide. Both Italy and the UNDP recognize that it is imperative to create space for developing countries to not just use AI, but to become active participants and equal partners in its development, governance, and use. This approach can ensure that its benefits are harnessed responsibly, equitably, and sustainably for long-term development impact.

In this landmark year for digital development, Italy’s G7 presidency has paved the way for a significant global partnership. The Ministry of Enterprises and Made in Italy (MiMIT), the UNDP, and private sector entities in Africa have united around a common goal of promoting an inclusive, sustainable, and country-focused approach to AI. The G7, representing major economies across North America, Europe, and Asia, provides a uniquely agile forum for nurturing these vital partnerships, not only supporting technological advancements, but also reinforcing a commitment to the universality of human rights in the digital age.

The AI Hub for Sustainable Development, a collaboration that we have helped to co-develop, is a concrete outcome of these efforts. It intends on shaping new dialogues and tangible actions with African partners. The immense potential of Africa, coupled with the urgent need to accelerate progress toward the United Nations’ Sustainable Development Goals, underscores the importance of a multifaceted, collaborative, and inclusive approach.

Ask Google, Microsoft, Amazon Web Services, Leonardo, Sony, iGenius, Translated, Kytabu, InstaDeep, or any number of other companies. Ask Masakhane, the open-source initiative “by Africans for Africans” focused on natural language processing research, or the networks of excellence, the African Institute for Mathematical Sciences (AIMS) and the Next Einstein Forum, pioneering and nurturing innovation and African talent in STEM fields. The answers you will get from them will all point to the realization that Africa is not simply gearing up for an AI revolution—it’s already underway.

With 60 percent of its population under the age of twenty-five, Africa presents a unique opportunity for AI innovation. The potential is staggering: by 2030, AI could add $2.9 trillion in value to the African economy—the equivalent of increasing annual growth in gross domestic product by 3 percent. In 2021 alone, 640 African tech startups raised $5.2 billion, reflecting year-on-year growth of 92 percent.

​Against this vibrant backdrop and recognizing both the immense potential and the significant challenges, MiMIT and UNDP have conceived an innovative and inclusive platform—the AI Hub for Sustainable Development. This platform recognizes the pivotal role of the private sector and is aimed at strengthening local AI ecosystems in partnership with African nations. This initiative seeks to “empower innovators, bridge the digital divide, and unlock the transformative power of AI” to create market opportunities. The AI Hub’s development has been informed by extensive consultations with G7 partners, African countries, and key stakeholders both within and beyond Africa. It aligns with the African Union’s vision of digital transformation and has received the endorsement of the G7 Digital, Tech, and Industry ministers at their meeting in Verona in March and at the G7 Leaders’ Summit in Borgo Egnazia in June.

The AI Hub aims to promote a paradigm shift by driving investments in the foundations of AI to deliver accelerated impact in areas such as agriculture, health, energy, education, water, and infrastructure. Set to become operational in 2025, the AI Hub will initially focus on the nine African countries identified by the Mattei Plan: Algeria, Egypt, Ethiopia, Kenya, Ivory Coast, Morocco, Mozambique, the Republic of Congo, and Tunisia. Institutional actors and the private sector will collaborate on data pipelines, green computational power, talent development, and creating an enabling ecosystem, all of which are essential for AI systems and their potential for sustainable development.

To prepare for the AI Hub’s activities, MiMIT, UNDP, and African partners, such as the African Union, have conducted collaborative initiatives involving governments, universities, and civil society organizations. The partners have contributed to the first public report on the codesign of the AI Hub for Sustainable Development, which analyzes the foundational elements underpinning AI and the potential role of the AI Hub in accelerating responsible private sector innovation to unlock their potential.

The Italian G7 presidency and UNDP have also launched the AI Hub for Sustainable Development Co-Design: Startup Acceleration Pilot and the Local Language Partnerships Accelerator Pilot programs. Both pilots are designed to inform the development and design of the AI Hub, with input and participation from the African Union, AIMS, Cassa Depositi e Prestiti Ventures, the Italian Innovation and Culture Hub, the International Telecommunications Union, and other global, regional, and local private sector partners.

These pilot programs aim to foster innovation and partnerships in data, green computing, and talent pipelines—the three critical pillars underpinning local AI ecosystems in Africa. By focusing on these foundational elements, the Startup Accelerator Pilot Programme seeks to address the need for an integrated private sector approach to mitigate risks and unlock the transformative power of AI for sustainable development. Meanwhile, the Local Language Partnerships Accelerator Pilot is designed to assess effective and ethical partnerships to accelerate the development and adoption of AI language technologies for sustainable local innovations.

As the codesign of the AI Hub progresses, we’re continuing to conduct in-country consultation across the nine focus countries to inform the AI Hub’s strategy and programming. This process, along with ongoing collaborations, such as the startup accelerator event, which will be hosted by the Italian Innovation and Culture Hub and the G7 Italian presidency in San Francisco in November, reinforce our belief that international cooperation must be action-oriented. This is also reflected in the broad-based coalition of the Africa Language Fund, which is still being designed. We envision this cooperation taking shape through joint project implementation, knowledge sharing, and co-investment models. Success in these efforts will be determined by stakeholders making concrete commitments, establishing formal partnerships, and developing a clear roadmap for the AI Hub’s launch and initial operations. We believe that this approach is crucial to create the necessary guardrails and foundations for local stewardship of AI. 

As we move toward the AI Hub’s operational kickoff in 2025, we remain intent on strengthening the foundations of AI to generate industrial growth in Africa. Through these efforts, we’re supporting an AI revolution in Africa that is already underway, ensuring it delivers sustainable and equitable benefits across the continent for everyone.


Vincenzo Del Monaco is minister plenipotentiary at the Ministry of Enterprises and Made in Italy and co-chair of the G7 Digital and Tech Working Group.

Eva Spina is head of department for digital connectivity and new technologies at the Ministry of Enterprises and Made in Italy, and co-chair of the G7 Digital and Tech Working Group.

Keyzom Ngodup Massally is head of digital and AI programmes at the United Nations Development Programme.

The post Italy and UNDP: How the new AI Hub for Sustainable Development will strengthen the foundations for growth in Africa appeared first on Atlantic Council.

]]>
Ukraine needs international investors to maintain defense tech momentum https://www.atlanticcouncil.org/blogs/ukrainealert/ukraine-needs-international-investors-to-maintain-defense-tech-momentum/ Tue, 01 Oct 2024 21:21:28 +0000 https://www.atlanticcouncil.org/?p=796461 Ukraine's rapidly expanding defense tech sector can play a game-changing role in the war against Russia but Ukrainian companies need international investment, writes Ukraine's Minister for Digital Transformation Mykhailo Fedorov.

The post Ukraine needs international investors to maintain defense tech momentum appeared first on Atlantic Council.

]]>
A Ukrainian company that creates AI solutions for drones recently secured funding from a consortium of four foreign investors worth almost $3 million. This deal is one of the largest individual investments in the Ukrainian defense tech industry since the beginning of Russia’s full-scale invasion. It is part of a growing trend as investors increasingly recognize the appeal of Ukrainian defense tech innovations. Since the beginning of 2024, the sector has attracted more than $20 million in investment.

International funding is crucial as Ukrainian defense tech manufacturers seek to improve technologies and scale up production. In an environment where the state does not have unlimited resources, attracting private capital to the sector is a no-brainer. It is therefore a strategic priority for the Ukrainian authorities to create the kind of business climate that can appeal to international investors.

Stay updated

As the world watches the Russian invasion of Ukraine unfold, UkraineAlert delivers the best Atlantic Council expert insight and analysis on Ukraine twice a week directly to your inbox.

One of the basic requirements for any investor is a transparent marketplace with clear and simple rules. Since the start of the full-scale Russian invasion in February 2022, Ukraine has adopted around twenty significant laws and resolutions aimed at accelerating the development of the domestic defense tech market.

Measures have included tax and customs duty reductions, the relaxation of permit and licensing requirements, and the minimizing of bureaucratic waiting periods. These steps have already had a significant impact on the domestic UAV industry, helping to fuel a hundred-fold increase in annual drone production in 2023. A similar approach is now being adopted toward other segments of the defense sector including electronic warfare, robotics, and ammunition production.

The Ukrainian authorities understand that international investors are primarily driven by a desire to make money. In order to attract investment, Ukrainian defense tech companies must therefore be able to demonstate a credible business plan and a pathway to profitability. To help establish the necessary conditions, we are taking steps to conclude more government procurement contracts with Ukrainian producers. For example, the Ukrainian state has signed agreements to procure more than one million domestically produced drones in 2024, compared to around three hundred thousand drones during the previous year.

Meanwhile, the number of Ukrainian companies able to compete for state contracts is steadily increasing. In 2022, only seven Ukrainian drone producers were eligible for government contracts. Today, the figure has risen to more than eighty companies. To maintain this upward trend, the Ukrainian authorities provide additional support to help domestic companies meet NATO standards.

Placing more orders with Ukrainian producers is not enough, of course. It is also important to remain agile and concentrate Ukraine’s limited resources on the acquisition of technologies that offer the biggest practical advantages in today’s rapidly evolving military environment. For example, in September 2024, the Ukrainian government launched its first tender for the purchase of ten thousand first person view (FPV) drones equipped with machine vision guidance.

Another key challenge is connecting investors with developers. Ukraine’s defense tech sector offers potentially exciting opportunities for investors, but it is also an inherently difficult environment to navigate without local knowledge. The Ukrainian authorities have sought to address this by establishing the Brave1 cluster, which aims to streamline cooperation between investors, private sector companies, state agencies, and the Ukrainian military. At present, Brave1 is working with more than one hundred and fifty investors from over thirty countries.

Current initiatives include Investor Demo Days, where Ukrainian developers can showcase their products and attempt to attract funding. On October 3-4, Kyiv will host the largest international investment summit of the war so far dedicated to the Ukrainian defense tech industry. This event should provide an indication of the progress since 2022 toward making the country’s defense tech sector attractive to investors.

In order to win the war, Ukraine needs game-changing technologies that can help overcome Russia’s conventional military advantages and shift the battlefield situation in our favor. This cannot realistically be achieved by relying on an improvised ecosystem of defense tech startups operating out of garages and apartments. Instead, Ukraine needs the kind of powerful and well-resourced defense tech industry that is only possible through significant investment and cooperation with international partners. If this goal can be achieved, the Ukrainian military will be able to receive the tools they need to finish the job of defeating Russia.

Mykhailo Fedorov is Ukraine’s Vice Prime Minister for Innovations and Development of Education, Science, and Technologies, and Minister of Digital Transformation.

Further reading

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values, and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia, and Central Asia in the East.

Follow us on social media
and support our work

The post Ukraine needs international investors to maintain defense tech momentum appeared first on Atlantic Council.

]]>
China is ‘aiding and abetting the Russian war machine,’ says US Ambassador to China Nicholas Burns https://www.atlanticcouncil.org/blogs/new-atlanticist/china-is-aiding-and-abetting-the-russian-war-machine-says-us-ambassador-to-china-nicholas-burns/ Thu, 26 Sep 2024 18:11:57 +0000 https://www.atlanticcouncil.org/?p=794972 At the Transatlantic Forum on GeoEconomics in New York, the US ambassador spoke about Beijing’s ties with Moscow and about how the United States is responding to Chinese manufacturing overcapacity.

The post China is ‘aiding and abetting the Russian war machine,’ says US Ambassador to China Nicholas Burns appeared first on Atlantic Council.

]]>
“We’ve got to be careful about how we handle this relationship,” US Ambassador to China Nicholas Burns said on Thursday about US-China relations. “We’re systemic rivals,” he continued, “And I think we’ll be systemic rivals well into the next decade, perhaps even beyond.”

Burns joined the Transatlantic Forum on GeoEconomics in New York virtually from Beijing for a keynote address and discussion with Atlantic Council GeoEconomics Center Senior Director Josh Lipsky. The ambassador opened his remarks with “good news” and “bad news” about the US-China relationship today.

The good news, he said, is that the US-China relationship has stabilized over the past year, especially after US President Joe Biden’s meeting with Chinese leader Xi Jinping in California in November 2023. Since then, both sides have worked to reopen and strengthen military-to-military and other cabinet-level channels as well as slow the flow of precursor chemicals used to make fentanyl.

But the other side of the coin, Burns continued, is that the US-China relationship remains “extremely competitive” and will be for the foreseeable future. This competition plays out in the security realm, along with US allies and partners in the Indo-Pacific, but also—and increasingly—on issues of technology and economics. Here are more highlights from the conversation.

‘We’re not going to tolerate a second China shock’

  • “I think both technology and economics have really taken center stage in the US-China relationship,” Burns said. In part, this is because the Chinese economy is in the midst of a structural transition. Amid cooling in China’s property and infrastructure sectors, China is working to keep up growth by manufacturing more—two-to-three times domestic demand in some cases, Burns said. 
  • With more goods than the domestic market could possibly absorb, China is now “trying to dump those products at artificially low prices in markets around the world,” Burns said. The less expensive goods undercut manufacturers in other countries. This is most apparent now with electric vehicles (EVs), lithium batteries, and solar panels, but it could soon extend to biotechnology and robotics. “What the Chinese are engaged in is patently unfair under international trade,” the ambassador said. 
  • “We are not going to tolerate a second China shock,” Burns said, noting that “well over one million American manufacturing jobs” were lost in the first China shock in the 2000s. 
  • In this case, the United States is pushing back with 100 percent tariffs on Chinese EVs, along with other measures, and it is not alone. “I think many other countries are reacting the same way against this overcapacity problem of the People’s Republic of China,” Burns said, noting the “spirited debate” in the European Union and new tariffs against Chinese exports by South Africa, Turkey, Chile, Brazil, Mexico, and Canada.
  • “If there’s one lesson, I think that all of us around the world in every single country learned from the pandemic,” Burns said, it is “don’t be reliant on a single source for critical materials, critical minerals, [and] critical supplies that you need for your own economy.”

China is not backing away from its ‘no limits’ partnership with Russia

  • Responding to comments by former Secretary of State Condoleezza Rice at the Atlantic Council’s Global Future Forum about the close ties between China and Russia, Burns said that there is “no indication that China is going to back away from its ‘no limits’ partnership with Russia.”
  • “The Chinese like to say that they’re neutral” in Russia’s war against Ukraine, Burns said. But the evidence does not support this. Instead, China is “aiding and abetting the Russian war machine,” the ambassador explained. 
  • Beijing continues to give political and diplomatic support to Moscow, including at the United Nations Security Council, Burns said. And while the United States does not currently believe that China is providing Russia with lethal military assistance—as in “complete weapon systems,” he clarified—Beijing is sending badly needed components to Russia that the Kremlin relies on for its ongoing war effort.
  • Chinese components and technologies are so important to Russia, Burns said, that “a lot of people think that the Russian defense industrial base now is stronger than it was even at the beginning of the war in large part because of the assistance they have from China.”

The military risks of artificial intelligence

  • So, where does the US-China relationship go from here? One area to watch is artificial intelligence (AI), which Burns said both sides are “beginning to grapple with.” 
  • With AI, as well as with biotechnology and quantum computing, the technology is developing in the military sphere and in the commercial marketplace, he explained, and this creates both opportunities and risks. 
  • Washington and Beijing are at “a very early stage” in conversations on AI, Burns said. But he noted that “more sophisticated, deeper” discussions are needed. “We would like to have an in-depth discussion, particularly to address the risks associated with AI in the military sphere,” the ambassador said. “We hope the Chinese will be ready to meet us to have that dialogue.” 
  • At the same time, Burns said, it is also important for the United States to work with like-minded democratic countries on AI and other technologies.

John Cookson is the New Atlanticist editor at the Atlantic Council.

The post China is ‘aiding and abetting the Russian war machine,’ says US Ambassador to China Nicholas Burns appeared first on Atlantic Council.

]]>
Condoleezza Rice: ‘Do you want Russia and China to shape the international environment?’ https://www.atlanticcouncil.org/blogs/new-atlanticist/condoleezza-rice-do-you-want-russia-and-china-to-shape-the-international-environment/ Tue, 24 Sep 2024 21:03:10 +0000 https://www.atlanticcouncil.org/?p=794295 History has shown the dangers of isolationism, the former US secretary of state said to Atlantic Council President and CEO Frederick Kempe at the Global Future Forum in New York.

The post Condoleezza Rice: ‘Do you want Russia and China to shape the international environment?’ appeared first on Atlantic Council.

]]>
“Whoever inhabits the White House in January needs to recognize that the United States doesn’t have a choice now but to be involved in the world and to try to shape the international environment,” former US Secretary of State Condoleezza Rice said Tuesday during an Atlantic Council Front Page event at the inaugural Global Future Forum.

“Great powers don’t mind their own business. So, the real question is: Do you want Russia and China to shape the international environment? Or do we want to shape the international environment with our allies?”

Rice’s discussion with Atlantic Council President and CEO Frederick Kempe covered a wide range of pressing global issues, from Russia’s war in Ukraine and China’s moves in the Indo-Pacific to the future of artificial intelligence (AI) and the stakes of the US presidential election in November.

The four horsemen ride again

  • “It’s a complicated time. It’s a dangerous time. And I just hope that we in the United States can recognize it as such and not fall into a sense that we can simply leave the world to itself,” Rice said. Expanding on a recent essay in Foreign Affairs, Rice described what she sees as the perils of US isolationism. 
  • She identifies the impulse to turn away from the world as one of “four horsemen of the apocalypse,” along with populism, nativism, and protectionism. All four “seem to be riding again,” she explained.
  • Yet history has shown the dangerous consequences of those impulses being given free rein. “Every time we have tried to withdraw, we’ve paid a price for it,” she said, noting early US hesitancy in World War I and World War II as examples.

‘Slam them together’

  • One reason the current moment is so dangerous, Rice explained, is the return of great power conflict. We’re in a period in which conflict with Russia, China, and Iran is “actually territorial in its content,” with these authoritarians often seeking to expand their boundaries. Moreover, these powers are working together. “They are coordinating because they have one thing in common,” Rice said. “They want to see American power pushed out of these regions, and they want a different kind of international order.” 
  • This coordination has its limits, of course, but Rice pointed out that the World War II axis powers were not “all that friendly” and yet “made a lot of trouble.”
  • While some US policymakers and commentators argue for trying to pull this rising axis of authoritarians apart, Rice calls for the opposite approach. “My view is slam them together, instead. Make them deal with the consequences of the fact that they don’t actually have all that much in common.”

Eye to eye with a two-speed Russian president

  • “Vladimir Putin has kind of two speeds,” Rice said of the Russian president. One is to try to humiliate and the other is to try to intimidate, she said. “In a funny sort of way, we had a reasonably good relationship because he knew that I was a Russianist,” and thus would pay Moscow proper attention. But in Putin she identified a sense of “insecurity about Russia’s place in the world” that explains some of how he has ruled his country. 
  • Putin once told Rice that Russia has only been great when it has been ruled by great men, such as Alexander II and Peter the Great. This revealed to her that the Russian president thinks of himself in “a kind of messianic state,” she explained. “He’s a nationalist, and he wants to reestablish the Russian Empire. And you can’t have a Russian Empire if there’s an independent Ukraine.”
  • Soon, Washington and Kyiv will need to start thinking about what constitutes a “prosperous, secure, united Ukraine.” Part of this will be deciding what exactly a US security guarantee to Ukraine will include. “I can think of a lot of countries that we have secured even if they didn’t have full territorial integrity,” she said, noting West Germany during the Cold War and South Korea today. 

The problem is Beijing

  • “The Indo-Pacific is a much more dangerous place, not because of American policy, but because of Chinese policy,” Rice said. And there’s one man she holds responsible: Chinese leader Xi Jinping has changed “the character of China’s interaction with the world,” choosing to double down on political control and dismissing further economic liberalization. 
  • The follow-on effects of Xi’s decision have harmed everything from technology development and supply chains to maritime security, Rice said. To anyone who asks how to improve the US-China relationship, she has this straightforward answer: “That has to start in Beijing.”

‘We saw what authoritarians do during COVID’

  • Rice—who lives in Silicon Valley, where she leads the Hoover Institution at Stanford University—described herself as a “techno-optimist,” but she also acknowledged that AI and other new technologies pose dangers as well as opportunities. 
  • “I’m quite aware that human beings have tended to be way better at the knowledge part of technology than at the wisdom part. We need to come to a better understanding of how we promote the promise of these technologies while being cognizant of where this could go wrong.”
  • The double-edged qualities of new technologies make it even more important for the United States to work with its allies and partners, she explained. Free and open nations must both win the global technological race and assess any potential risks in a transparent manner. 
  • The alternative, Rice said, is to cede the advantage to closed countries such as China. “We saw what authoritarians do during COVID. They hide the facts. They won’t answer questions, and so let’s make sure that we win this race.” 

John Cookson is the New Atlanticist editor at the Atlantic Council.

The post Condoleezza Rice: ‘Do you want Russia and China to shape the international environment?’ appeared first on Atlantic Council.

]]>
Assessing China’s AI development and forecasting its future tech priorities https://www.atlanticcouncil.org/content-series/strategic-insights-memos/assessing-chinas-ai-development-and-forecasting-its-future-tech-priorities/ Wed, 18 Sep 2024 13:00:00 +0000 https://www.atlanticcouncil.org/?p=792539 The Atlantic Council convened experts to gather insights into China’s technology priorities today and in the future.

The post Assessing China’s AI development and forecasting its future tech priorities appeared first on Atlantic Council.

]]>
TO: Policymakers and technology policy strategists

FROM: Hanna Dohmen

DATE: September 18, 2024

SUBJECT: Assessing China’s current AI development and forecasting its future technology priorities

In July 2024, the Atlantic Council Global China Hub (AC GCH) and the Special Competitive Studies Project (SCSP) convened experts and policymakers in the second of a two-part private workshop series to gather insights into China’s technology priorities today and in the future. Participants discussed Beijing’s posture on artificial intelligence (AI) development and deployment today, including the hurdles China’s AI industry faces amid US-China technology competition, as well as Beijing’s policy priorities over the next decade. This memo summarizes insights gathered during the workshop.

Strategic context

In today’s strategic competition between the United States and China, both countries seek to bolster their nations’ innovation ecosystems and enhance their ability to develop and deploy breakthrough technologies. The United States is committed to maintaining US technological leadership in the long term, as Secretary of Commerce Gina Raimondo demonstrated at the Reagan National Defense Forum in December 2023, when she stated that “America leads the world in artificial intelligence. America leads the world in advanced semiconductor design, period . . . We’re a couple years ahead of China. No way are we going to let them catch up. We cannot let them catch up.”

China’s strategic focus has long been on “self-reliance and self-improvement (自立自强).” In fact, on June 24, 2024, Chinese President Xi Jinping delivered a speech at a major Chinese science and technology (S&T) conference in which he emphasized this longstanding ambition: “Since the 18th Party Congress [in 2012], the Party Central Committee has promoted the implementation of the innovation-driven development strategy in an in-depth way, proposed the strategic task of accelerating the construction of an innovation-oriented country (创新型国家), established the goal of building China into an S&T powerhouse by 2035, continuously deepened S&T structural reform (科技体制改革), fully stimulated the enthusiasm, initiative, and creativity of S&T personnel, and vigorously promoted the building of self-reliance (自立 自强) in S&T.”

To sustain its own growth in technology leadership, the United States has concentrated its efforts on computational power and AI. Thus far, a key pillar of the US strategy has been to slow China’s progress in developing advanced-node semiconductors, a critical input needed to power AI. US export controls on advanced compute, semiconductor manufacturing equipment, and supercomputing—as well as regulations that will prohibit and monitor US investments in Chinese AI, quantum computing, and semiconductor companies—are part of a broader strategy to maintain US leadership and slow China’s progress.

As each country advances its own agenda, the implications of this competition will continue to shape the future of technology development and geopolitics. Given the rapid advancements in AI, the current strategic environment is complex and fast evolving. As such, it is critical to not only assess the current state of technological competition, but to also look ahead at future technology priorities both in the United States and in China.

Benchmarking China’s AI progress and challenges

One of the key questions in this strategic competition is China’s position in AI development relative to that of the United States. However, workshop participants highlighted that this depends entirely on the lens through which one views competitiveness. Should competitiveness in AI be assessed by the size of models and processing speeds? Should it be about which ecosystem can leverage AI to deliver the most tangible economic benefits in terms of revenue growth and operational and efficiency improvements?

While participants agreed that these questions are critical when considering the long-term objectives of either country’s strategies, current assessments primarily focus on which models are the biggest and fastest. Here, views range from estimating that China’s AI model development is six to twenty-four months behind that of the United States. For example, in June 2024, Kai-Fu Lee, the chief executive officer of the Chinese AI startup 01.AI, claimed that the company is six to nine months behind US AI leaders but is catching up rapidly. Some experts, including Joe Tsai, Alibaba co-founder and chairman, suggest that China’s AI companies are “possibly two years behind” US AI companies.

Technical performance and metrics of Chinese models are an important measure of progress, but workshop participants emphasized how a more holistic view of the AI stack and broader innovation ecosystem is necessary to contextualize technological advancement. Three prevailing dynamics that will set the tone for future AI development in China emerged from this discussion: AI ecosystems, compute infrastructure, and regulatory landscapes.

More players, less capital

Workshop participants noted how China’s AI model development ecosystem differs significantly in scale and structure from that of the United States. In the United States, a small number of big players—such as OpenAI, Meta, Google, and Anthropic—dominate the field. These companies leverage their partnerships with hyperscalers and have access to the necessary compute needed to power their AI development and deployment. In contrast, China has a much larger number of AI companies developing models, which participants said is leading to a dilution of investment and compute resources. For example, as of August 2024, the Cyberspace Administration of China has approved a list of more than 180 large language models (LLMs) for general use, illustrating the broad swath of Chinese tech companies fighting for domestic market share.

Not only are these companies competing for a slice of the market, but they are also competing for funding amid an economic slowdown and a downturn in China’s VC industry. Participants stressed that while many Chinese startups have attracted investments from big tech companies, such as Alibaba and Tencent, many investors remain skeptical about AI startups’ abilities to generate revenue in the short term. In search of economically productive investments, many Chinese venture-capital firms are looking to diversify their risk by pooling resources, suggesting a more dispersed funding environment. Given both funding and hardware constraints on Chinese AI developers, participants suggested that China might succeed in advancing a few companies or AI labs by pooling resources, but these efforts will need to be selective and targeted, reducing the likelihood of substantial returns. Ultimately, participants suggested that this environment in China’s AI market is likely to lead to increased industry consolidation.

US export controls loom large

US export controls, and those of allied countries, are affecting China’s access to advanced computing resources, imposing significant constraints on both AI training and inference (i.e., AI model development and deployment). As China is unable to legally acquire leading-edge AI chips such as NVIDIA’s A100, it increasingly needs to rely on its own domestically designed and manufactured alternatives. Huawei’s Ascend 910B is China’s closest competitor, though reports suggest it lags in performance for training LLMs. While Chinese chip designers have made notable progress, China’s production of these chips is significantly constrained. These resource challenges make both AI training and inference for Chinese companies more expensive and less efficient. Participants suggested that these challenges could particularly hinder the deployment of AI models at scale in China.

Asserting control versus fostering innovation

Participants also discussed the impact of AI regulations, particularly China’s censorship standards, on AI development. Participants highlighted that there are two possibilities for how China’s strict regulations can impact AI innovation. On one hand, these regulations could hinder China’s ability to develop competitive AI models by imposing strict controls on the outputs of models. Conversely, it is also possible that if China navigates these challenges effectively, Chinese AI developers might gain valuable insights into how to make AI models safer. Participants believe the former is likely to be true, but this remains an open question.

China’s forward-looking technology priorities

In addition to understanding what China’s current AI development looks like, it is also important to consider the country’s strategic priorities for future technologies. The discussion highlighted that, looking ahead, AI will be one of the key elements to developing and advancing future technologies.

Future manufacturing: Looking toward the future, participants believe that China’s motivations for advancing AI predominantly center around enhancing industrial efficiency, particularly improvements in manufacturing and automation. In early 2024, seven Chinese ministries and government bodies, including the Chinese Ministry of Industry and Information Technology and the Ministry of Science and Technology, released a guidance document that identifies six “future industries” as priorities for China’s industrial policy. This document emphasizes that China should “seize the opportunities of a new round of S&T revolution and industrial transformation, focus on the main battlefield—namely the manufacturing industry—to accelerate the development of future industries, and support the advancement of new-style industrialization (新型工业化).”

In light of its current economic slowdown and demographic challenges, China emphasizes converting new technologies, including AI, into economically productive applications. Participants highlighted that there is less focus on LLMs for chatbots in China; instead, there is more focus on the industrial applications that LLMs can help advance and streamline. For example, the same guidance document suggests “utilizing artificial intelligence (AI), advanced computing, and other technologies to precisely identify and cultivate high-potential future industries.”

Robotics: China’s grim demographic outlook has also driven the country’s deployments of robots. The country’s working-age population is rapidly shrinking and its birth rates remain concerningly low. Some estimates suggest that China’s working-age population could decrease by an annual average of 0.83 percent between 2022 and 2035. In part to address the economic concerns prompted by these demographic shifts, China is focusing on increasing productivity through industrial robots. Chinese firms deployed nearly three hundred thousand industrial robots, while Japan and the United States deployed approximately fifty thousand and forty thousand robots, respectively. Indeed, China’s installation of industrial robots has increased by around 13 percent since 2017. The United States’ robot growth rate, however, pales in comparison at just 4 percent over the same period. Participants suggested that this will continue to be a significant priority for China in the coming years. Moreover, humanoid robots—machines with physical features and behaviors that resemble those of humans—are a key area of robotics that participants expect China to prioritize.

Biotechnology: Participants also highlighted China’s focus on biotechnology. Specifically, China is stressing the need for innovations in cell and gene technology, synthetic biology, and bioengineered breeding, as well as medical services empowered by technologies such as AI. This once again emphasizes China’s ambitions to utilize AI to advance other critical and emerging technologies. As a result, participants argued, the United States must put greater emphasis on its own biotechnology advancements. Biotechnology presents complex risks due to its diffuse applications and potential health benefits, making it a crucial strategic area for both the United States and China.

Fundamental research: Beijing is also redoubling its investments in fundamental research, recognizing systemic weaknesses in developing scientific and technological breakthroughs. In a June speech, Xi said, “although China’s scientific and technological undertakings have made significant progress, its original innovation capabilities are still relatively weak, with some key core technologies being controlled by others, and there being a shortage of top scientific talent. There is an urgent need to further enhance the sense of urgency, intensify efforts in scientific and technological innovation, and seize the strategic heights of technological competition and future development.” In March, Beijing announced that it was raising national research and development spending by about 10 percent, signaling how fundamental research will be a rising priority amid geopolitical tensions.

Participants argued that while scientific collaboration with China holds significant potential benefits, it is essential to navigate it carefully to avoid contributing to military applications. The challenge lies in balancing collaboration with security concerns, particularly in areas prone to dual-use technology risks. China boasts strong scientific capabilities and some of the world’s leading scientists, which underscores the importance of engaging in strategic research partnerships while safeguarding against potential military exploitation.

Conclusion

AI serves as the central thread linking China’s strategic focus across various emerging technologies, including advanced manufacturing, robotics, biotechnology, and many more. China’s broader future technology priorities reflect a comprehensive approach to leveraging AI in diverse fields, from smart manufacturing and quantum computing to biotechnology and space exploration. The country’s heavy investment in these areas demonstrates its commitment to achieving technological sovereignty and economic resilience. For the United States, this highlights the need for continued investment in AI and biotechnology, as well as careful management of international research collaborations to protect national security interests and maintain US technological leadership. Ultimately, the evolving technological landscape underscores the importance of AI as a key driver of technological progress and competition on the global stage.

About the author

Acknowledgements

This strategic insights memo was written and prepared with the support of the Atlantic Council’s Global China Hub and the Special Competitive Studies Project.

The Special Competitive Studies Project (SCSP) is a nonpartisan, nonprofit initiative with a clear mission: to make recommendations to strengthen America’s long-term competitiveness as artificial intelligence (AI) and other emerging technologies are reshaping our national security, economy, and society.

Global China Hub

The Global China Hub researches and devises allied solutions to the global challenges posed by China’s rise, leveraging and amplifying the Atlantic Council’s work on China across its fifteen other programs and centers.

The post Assessing China’s AI development and forecasting its future tech priorities appeared first on Atlantic Council.

]]>
Derentz quoted in Gzero on AI’s role in increasing grid reliability and resilience https://www.atlanticcouncil.org/insight-impact/in-the-news/derentz-quoted-in-gzero-on-ais-role-in-increasing-grid-reliability-and-resilience/ Tue, 17 Sep 2024 18:14:00 +0000 https://www.atlanticcouncil.org/?p=801617 The post Derentz quoted in Gzero on AI’s role in increasing grid reliability and resilience appeared first on Atlantic Council.

]]>

The post Derentz quoted in Gzero on AI’s role in increasing grid reliability and resilience appeared first on Atlantic Council.

]]>
AI in cyber and software security:  What’s driving opportunities and risks? https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/ai-in-cyber-and-software-security-whats-driving-opportunities-and-risks/ Mon, 19 Aug 2024 20:14:00 +0000 https://www.atlanticcouncil.org/?p=817512 This issue brief discusses the drivers of evolving risks and opportunities presented by generative artificial intelligence (GAI), particularly in cybersecurity, while acknowledging the broader implications for policymakers and for national security.

The post AI in cyber and software security:  What’s driving opportunities and risks? appeared first on Atlantic Council.

]]>

Table of Contents

Abstract

This paper discusses rapid advancements in artificial intelligence (AI), focusing on generative artificial intelligence (GAI) and its implications for cybersecurity and policy. As AI technologies evolve, they present both opportunities and risks, necessitating some understanding of what drives each. This is crucial not only for harnessing AI’s capabilities in cybersecurity—where AI can both defend against and potentially enhance cyber threats—but also in considering broader national security implications. Throughout, the issue brief highlights the importance of acknowledging the long history and varied paradigms within AI development. It also emphasizes the need to consider how AI technologies are integrated into larger software systems and the unique risks and opportunities this presents. Finally, the brief calls for a more nuanced understanding of AI’s impact across different sectors. 

Introduction 

The rapid pace of technological improvement and the resulting groundswell of innovation and experimentation in artificial intelligence (AI) has prompted a parallel conversation in policy circles about how to harness the benefits and manage the potential risks of these technologies. Open questions in this conversation include how to map or taxonomize the set of known risks, how to assign responsibility to different actors in the ecosystem to address these risks, and how to build policy structures that can adapt to manage “unknown unknowns” (e.g., AI-related risks that are hard to predict at present). Then, add in the question of how to do all of the above while preserving some essential abilities: the broader public’s to express their preferences, the research community’s to innovate, and industry’s to commercialize responsibly. Each of these will be a foundation for realizing the potential benefits of generative artificial intelligence (GAI) innovations and preserving the US edge in AI development to the benefit of its economic productivity and security. 

This report focuses on the risks and opportunities of AI in the cyber context. Current GAI systems have proven capabilities in writing and analyzing computer code, raising the specter of their usefulness to both cybersecurity defense and offense. Cybersecurity is, by its nature, an adversarial context in which operators of information systems compete against cybercriminals and nation-state hackers. Thus, if and when AI provides a “means” to improve cybersecurity capabilities, there will be no shortage of actors with “motives” to exploit these capabilities for good and ill. As critical infrastructure and government services alike increasingly rely on computing to deliver vital goods, cybersecurity questions are also increasingly questions of national security, raising the stakes for appraising both cyber opportunity and risk. 

Cybersecurity is far from the only AI application that may create opportunity or risk. The harms of non-consensual intimate imagery and harassment, the manufacture of bioweapons, the integration of biased or flawed outputs into decision-making processes, or other areas of AI risk will take different forms and demand varying mitigations. The factors that drive risk and opportunity in the cyber context may provide useful insight across other contexts as well—the authors of this paper respectfully leave it to experts in those other fields to draw from its findings as much or as little as they suit. 

An important note on scope: an all-too-frequent assumption in contemporary policy conversations is that AI is synonymous with GAI. Yet—as this paper later discusses—GAI is merely the latest and greatest innovation from a decades-old field in which different paradigms and approaches to crafting nonhuman intelligent systems have risen and fallen over time. This work focuses on capabilities shown—or suggested—by current AI systems, including GAI, because these examples provide a grounded basis for reasoning about AI capabilities and accompanying risks and opportunities. Where appropriate, the report mentions or considers other AI paradigms that could prove relevant to risk and opportunity in the cybersecurity context. The report weighs, as well, not just standalone models but also “AI systems” that involve AI models embedded into broader software systems, such as an AI model paired with a code interpreter or a Retrieval-Augmented Generation (RAG) system.1, 2

Opportunities from AI in the cybersecurity context 

In the broadest sense, the opportunities of AI in the cybersecurity context arise from their potential use to improve a defender’s lot in cybersecurity, whether by helping secure code or by helping make cybersecurity tasks easier or more efficient for defenders. Many of these opportunities arise from GAI models’ ability to read, analyze, and write code. 

A. Finding and fixing vulnerabilities in code 

AI models that can detect vulnerabilities in software code—and, ideally, propose solutions—could benefit cybersecurity defenders by helping them scan code to find—and fix—vulnerabilities before malicious actors can exploit these. AI tools that could find significantly more vulnerabilities than existing tools, such as static analysis or fuzzing tools, could improve programmers’ ability to run checks over their code before merging it or building it, preventing the deployment of vulnerable code to customers. Using these tools on existing codebases will create more challenges since applications may necessitate asking customers to patch or upgrade their code. These tools might be particularly valuable in low-resource contexts in which developers do not have access to in-house security expertise or security code reviews, such as small businesses, nonprofits, and open-source maintainers. 

Using AI to find vulnerabilities in code is an area of active research effort. For example, the Defense Advance Research Projects Agency (DARPA) and Advanced Research Projects Agency for Health (ARPA-H) are partners in the two-year AI Cyber Challenge (AIxCC) that asks participants to “design novel AI tools and capabilities” to help automate the process of vulnerability detection or other cyber defense activities.3 Right now, the open debate in this area is how good GAI models are at this task and how good they can become. One blog post from a small-business AIxCC semi-finalist said, ”our experiments lead us to believe real-world performance on code analysis tasks may be worse than current benchmarks can measure quantitatively.”4 Some benchmarks do exist, such as the CyberSecEval2 framework,5 developed by Meta—yet evidence offers mixed evaluations. The original authors of the CyberSecEval2 paper found “none” of the large language models (LLMs) “do very well on these challenges.”6 However, follow-on studies from the Project Zero security team at Google reported that they improved the performance of the LLMs through several principles, such as sampling and allowing the models access to tools, while still reporting that “substantial progress is still needed before these tools can have a meaningful impact on the daily work of security researchers.”7


Drivers of opportunity 

  • Domain-specific capability (vulnerability identification): How good AI models are or could be at this task, especially compared to existing capabilities, such as fuzzing or static analysis tools. Any model that can identify vulnerabilities that current tools cannot find would have initial value as an improvement over today’s baseline. Greater efficiency benefits will emerge the more AI models work to minimize both false positives and false negatives, as this will make capabilities more effective and reduce the need for human review of detections. 
  • Integration with existing tools: The more development workflows integrate AI vulnerability-finding tools, such as embedded into build processes or as part of code-hosting platforms like GitHub, the easier it will be for these tools to help detect vulnerabilities before the merge and rollout of code to customers, making bugs easier and less costly to fix. 
  • Cost and availability: Free or low-cost AI models or model-based tools could be particularly useful for organizations or individuals without significant resources dedicated to security reviews, such as for use in small businesses or for open-source software packages. 
  • Education: Ensuring that organizations know how to use vulnerability-finding tools and how to integrate them into their development process can help ensure that, as these tools develop, their benefits flow to defenders and, in particular, to those in less-resourced areas. 

B. Helping developers write more secure code 

Closely related to the question of finding and fixing vulnerabilities in existing code is the idea that AI tools that help developers generate code could help improve the security of that code by ensuring that its suggestions are free from known vulnerabilities. Despite the longstanding knowledge that certain common-class vulnerability patterns are insecure, these have recurred in code over many years.8 Code-generating AI tools could potentially help avoid these patterns, either by training the underlying model to avoid insecure generations, such as through reinforcement learning from human feedback,9 or by filtering model outputs for known insecure code patterns. One factor influencing LLM efficacy in this context is the type of secure coding or vulnerability discovery task assigned. Some flaws require a significant volume of context and might exceed what an LLM can accept. In other instances, model benchmarks could point to a specific code segment to propose mitigations in conjunction with human review. 

Experiments on some of these techniques are already in process; in 2023, GitHub announced that its CoPilot code assistant would now include an “AI-based vulnerability filtering system” to filter out code results containing known insecure code patterns, such as those vulnerable to Structured Query Language (SQL) or path injection or the use of hard-coded credentials.10 These tools could also have their use expanded to propose fixes—at a significantly greater speed than locating them—allowing for the integration of security review tooling based on LLMs into existing human development environments. 

However, one should not assume that AI-generated code will be more secure, especially without further research and investment in this area. (The Risks section of this paper covers an early study indicating that the opposite may well have been true for one generation of LLMs.) Conducting security reviews of AI-generated code will likely require heavy human oversight limiting the throughput from even large-scale LLM deployments for software development. 

The need exists for more evaluation and benchmarking to understand the security properties of AI-generated code, as compared to human code. This would offer developers and organizations defining information on how to integrate AI tools into their workflows, such as identifying contexts in which their use benefits security and pinpointing weaknesses or blind spots where developers should still thoroughly review AI-generated code for security flaws. For example, one could imagine using AI tools capable of identifying and avoiding common insecure patterns, such as a lack of input sanitization, but, consequently, might generate code with more subtle design or logic errors that create new vulnerabilities. 

Drivers of opportunity 

  • Trustworthy AI outputs: A first, vital prerequisite is that AI-generated code improves upon the security of human-written code in relatively consistent ways (and without causing human developers to neglect security concerns in their code more than is currently the case). The security improvements of AI code need not be absolute across contexts—AI-generated code does not need to be better than the best cryptography expert to help the average developer avoid SQL injection attacks. Thus, additional clarity in how and when to trust AI-generated code with respect to security would help ensure its appropriate adoption in different contexts. In addition to being secure, AI code suggestions must, at least, be moderately helpful to developers, if only to buoy wider adoption of the suggestions (and their potential security benefits). 
  • Integration with existing tools: The more that code-generating tools coalesce with integrated development environments (IDEs) and other environments where programmers can use them as part of their development workflows, the more expansive their potential adoption, which will increase tool leverage on other information, such as the broader context of a project to more accurately assess the security implications of the code they generate. 
  • Cost and availability: Many small developers, including open-source software maintainers, may likelier use free or widely available tools rather than expensive proprietary solutions. Ensuring that low-cost model solutions have strong security protections for the code they generate—not just expensive or leading-edge models—could benefit these developers. 
  • Education: Educating developers on the best ways to use AI code-generating tools, as well as how to verify the security of generated code, could also help ensure that these tools roll out in ways that maximize their potential benefits. 

C. Making sense of cybersecurity data

In addition to using the code-analysis and code-generation features of AI to improve the security of software code, another relatively well-developed current use case for AI in cybersecurity is the idea of using AI to help with cybersecurity-relevant data processing. For example, AI tools could help sort through data generated by computer systems, such as system logs, to help identify or investigate cyberattacks by identifying anomalous behavior patterns and indicators. Likewise, AI tools could help process and analyze cyber threat intelligence or information about vulnerability disclosures to help defenders respond to this information and prioritize follow-up actions.11 These systems may incorporate generative AI but might also follow entirely separate AI paradigms, like supervised machine learning. 

Drivers of opportunity 

  • Domain-specific capabilities (anomaly detection): The degree to which AI systems can correctly identify anomalies or other relevant information from system data. Both false negatives and false positives would be harmful in this situation, though false negatives, perhaps more so. 
  • Integration with existing data and tooling: How well can new AI solutions integrate with existing security tooling to access the panoply of data required to do anomaly detection? Is there adequate high-quality available to train these models in the first place? 
  • Cost and availability: Free or low-cost models or tools could be particularly useful for organizations or individuals without significant resources to operate their own security operations center (SOC) teams and similar. 
  • Education: Helping organizations, particularly those with fewer resources, understand how to use and configure these tools can help them harness the efficiencies—and avoid hoodwinking by tools that make big promises but then deliver little in terms of increased security. 

D. Automation of other cybersecurity tasks 

Beyond these well-developed categories, there are other examples of often neglected cybersecurity tasks, which, if improved or eased using AI, would provide benefits to security. One example is the failure to “timely” apply patches and version upgrades to software within a network. These patches and version upgrades often contain important security updates, but many organizations are slow to patch, whether due to resource constraints or negligence. Another related example is consistently upgrading dependencies in software packages to address upstream vulnerabilities. 

Further afield suggestions include the idea of having AI systems, including agents, that can automate longer action sequences in cyber defense, such as systems that can identify an anomaly and then autonomously take action, such as quarantining affected systems. Such autonomy is likely beyond the capabilities of current GAI models, and some researchers have suggested creating “cyber gyms” to help train reinforcement learning agents for these kinds of tasks through trial and error.12

Drivers of opportunity 

  • Trustworthiness: Once operators seek to delegate tasks to AI systems (rather than asking the system to make a suggestion for a human operator to action), it becomes more important to have a very good sense of the accuracy and robustness of the model. For example, an AI patch management system that can modify and control arbitrary elements of a corporate network requires high-level trust protocols that it will not take spurious or destructive actions. This contrasts with many of the other opportunities identified, which envision a human-in-the-loop.  
  • Openness and availability for experimentation: The more different researchers and organizations experiment with models of how to implement AI into the defensive cyber process, the more likely it becomes that a product or service of genuine value might emerge to help use LLMs to automate additional tasks in cybersecurity. 

AI risks in the cybersecurity context 

Broadly, the risks posed by AI in the cybersecurity context fall into at least two categories: risks from malicious misuse (e.g., the use of models to create outputs useful for malicious hacking) and risks to AI users arising from their well-intentioned use (e.g., cyber harms created when models generate incorrect or harmful outputs or take incorrect or harmful actions). Notably, this second category of risks to AI users tightly connects with many of the potential benefits outlined above. 

A. Risks from malicious misuse: Hacking with AI 

The broadest category of malicious misuse risks in the cyber context is the potential for malicious actors—whether high-capability entities like the United States, Israel, or Russia or the most lackadaisical cybercriminal—to use generative AI models to become more efficient or more capable hackers. 

Previous work published by the Cyber Statecraft Initiative on this topic “deconstructs” this risk by breaking “hacking” into constituent activities and examining GAI’s potential utility for assisting with both making capable players better and bringing new malicious entrants into the space.13 It seems possible, and likely, that all kinds of hackers could use GAI tools for activities including reconnaissance or information gathering, as well as assistance with coding and script development. Indeed, OpenAI reported disrupting threat actors who were using their models to conduct research into organizations and techniques and tools, generate and debug scripts, understand publicly available vulnerabilities, and create material for phishing campaigns.14

These risks are already here. What is less clear is whether or not these risks are acceptable and bearable. The OpenAI case shows that GAI is arguably a useful tool for hackers, but not necessarily that it provides a step change in terms of sophistication or capability. Tools like Google, after all, are also a benefit to hackers. The essential question is: where to draw the line? 

This research recommends a few areas where GAI capabilities could create more profound capability improvements for malicious hackers. 

  • Models that can generate content for highly sophisticated social engineering attacks, such as creating deepfakes that impersonate a known figure for the purpose of carrying out an attack. 
  • Models that can identify novel vulnerabilities and develop novel exploits in code at an above-human level. 
  • AI-based “agents” with the ability to string together multiple phases of the cyberattack lifecycle and execute them without explicit human intervention, providing significant benefits in terms of speed and scalability as well as challenging typical means of detecting malicious activity such as looking for connections to a command and control server. 

Thus, the risk that hackers will use GAI is not speculative—it is here. The issue, instead, is how much this usage increases risks to businesses, critical infrastructure companies, government networks, and individuals. 

Drivers of risk 

  • Deepfakes: The ability for GAI systems to generate realistic-looking content that impersonates a human being, which the people interacting with it cannot distinguish or identify as machine-generated.15
  • Domain-specific capabilities (vulnerability identification and exploitation): The ability for models, especially those fine-tuned on relevant datasets and actions, to display above-human level performance at specific high-risk activities, such as identifying novel vulnerabilities. 
  • Domain-specific capabilities (autonomous exploitation): The ability of models to string together and execute complex action sequences—particularly, though not exclusively, in the form of generating and executing code—to compromise an information system end-to-end. 
  • Integration with existing tools: Studies appear to suggest that integrating AI models with tools such as code interpreters can upskill these models,16 which could increase the risk that they can be useful to hackers. 
  • Removal of safeguards: It is very challenging to create blanket safeguards that prevent bad behavior while protecting legitimate use cases, in part because of the similarity between malicious and benign activities. Developers call this the “safety-utility tradeoff.” At the same time, models do currently refuse to comply with overtly malicious requests and appear to be improving in their ability to do so over time—thus, models without any safeguards at all or those fine-tuned for malicious cyber activity could lose even these modest protections. 

B. Risks to AI users 

Risks to AI users depend much more heavily on the context and purposes of the model’s, or its outputs’ use, as well as the type or nature of safeguards and checks implemented within that environment. Some of the key contexts and activities in which AI can create cyber risks to users include the use of AI-generated code, the use of systems where AI agents may have access to user devices and data, and the use of AI in defensive cybersecurity systems. 

B1. Risks of insecure AI-generated code 

In one initial study on the security properties of AI-generated code, published by Stanford, researchers split developers into two groups, gave only one group access to code-assist tools, then observed the developers during the process of solving coding problems and examined the security of the resultant code.17 They found that “participants who had access to an AI assistant … wrote significantly less secure code than those without access.” For example, only 3 percent of programmers in the group with the AI assistant implemented an encryption/decryption function in a way that the researchers categorized as “secure,” compared to 22 percent of programmers working alone who generated a “secure” solution. The researchers surveyed the developers and found that, of the developers using the AI assistant, those who reported placing less subjective trust in the AI assistant were more likely to generate “secure” code. Additionally, the researchers found that code labeled “secure” had, on average, a larger “edit distance,” (e.g., more changes from initial AI-generated code than did “insecure” or “partially secure” solutions). 

While it is possible, and perhaps even likely, that the assistant’s properties have evolved since this point, this example illustrates the need to better understand the security properties of AI-generated code before developers embed it deeply into their workflows. Policymakers can help hold companies to account on this question. 

Drivers of risk 

  • Untrustworthy outputs: The risks from AI-generated code are greatest when the developer is incapable of, or unlikely to, validate the output themselves or if there is no process of human oversight over the generated code. That is, risks become acute when there is a mismatch between the trust that a developer thinks they can place in AI-generated code and the level of trust that is actually appropriate. These levels may vary across contexts, as different kinds of code are more or less security sensitive—for example, deploying a web app has fewer opportunities to go wrong than implementing a cryptographic library—or AI models may be better or worse at generating it securely by virtue of having seen more or fewer examples. These risks necessitate the development of robust benchmarks that measure the security properties of AI-generated code across a variety of contexts. 
  • Misplaced user trust: If users verify the security of generated code themselves and to their own standards, the risks that the code will be insecure significantly lessen. Much of the problem thus stems from users placing unearned trust in model outputs. Yet, pointing the blame finger back at the user is not an appealing path for policy, Moving forward, users will place trust in automated systems, and therefore, it is up to the makers of those systems and policymakers alike to help ensure that the systems are fit to deserve that trust. 

B2. Risks from integrated AI systems with data or system access 

There is a lot of interest in connecting GAI models to environments that give them the tools to automate tasks—rather than feeding output to a human to do a task; leading to more autonomous agents. Such conditions create cybersecurity risks because many AI models are very vulnerable to adversarial attacks that can cause them to do strange and potentially undesirable things, including compromising the security of the system they operate or the data they have access to. 

From stickers on stop signs that can fool computer vision algorithms to “jailbreak” prompts that can convince LLMs to ignore their creator-imposed safeguards,18, 19. it is hard to ensure that AI systems solely do what you want them to do. Many leading models have proven vulnerable to “prompt injections,”20 which allow a user (or a potential attacker) to get around security limitations, including to obtain hidden information. Researchers have already demonstrated that, by embedding hidden text on their webpage, they can manipulate the results of GAI model outputs.21 If users interact with a model that has access to sensitive data, such as a business database or sensitive files on a user’s computer, they might be able to use prompt engineering to trick the model into handing that information over. Or, people could create malicious websites that, when an autonomous agent scrapes them, contain hidden commands to obtain and leak data or damage the machine they are operating. 

These risks grow as developers embed AI systems into higher stakes systems that grant access and authorization to take ever more sensitive actions. Cybersecurity experts have highlighted reliability as a core concern to using AI models as a component of cybersecurity defense, and they stressed the need to deploy models and grant them autonomy in ways proportional to the organizational context in which they operate and the risks associated.22

Drivers of risk 

  • Untrustworthy outputs: Outputs by models that misalign with the goals or needs of their human operators, whether insecure code, harmful outputs as the result of prompt injection, or unsafe decision-making in the cyber context. 
  • Misplaced user (or system) trust: When users or information systems embed a model into a context with more trust and permissions than the model deserves based upon its own reliability. 
  • Increased delegation / lessened supervision: The integration of models into contexts without sufficient, or no, oversight before placing their outputs into “use” (e.g., code merged into a product or security action taken). 

Dual drivers 

The opportunity and risk drivers outlined above are not always diametrically opposed. Were they, it would offer an easy remedy for policy: do more of the “opportunity” drivers and less of the “risk” drivers. Instead, as the next sections illustrate, the close coupling between many of these drivers will challenge policy’s ability to neatly extricate one from the other. 

Domain-specific capabilities 

Particular domain-specific capabilities for AI models would drive both opportunity and risk in the cyber context. For example, the ability to find novel vulnerabilities would benefit defenders by helping them identify weaknesses to patch and malicious actors searching for footholds into software systems. To a lesser degree, the same is true of the general ability that models would have to write complex, correct code—this ability could offer efficiency benefits to developers, whether they are open-source maintainers or ransomware actors. It seems unlikely that these capabilities would advance in ways that only benefit the “good guys.” While model safeguards could help reject obvious malign requests (e.g., ask a model to help them write an urgent email), in the wider cyber context, bad actors are on an endless search for reasonable justifications to test for and seek vulnerabilities in a codebase. No currently known software can develop a foolproof way to see inside its operator’s heart to discern their true intent. Instead, it is likely that policy will simply have to accept these twinned risks, seeking to measure them as they progress and find ways to make it as easy as possible for defenders to implement new technologies in hopes that they can outpace malicious actors. This is an uneasy balance, but it is also one that is deeply familiar in information security. 

Trust and trustworthiness 

Perhaps the single largest driver of AI opportunity in the cybersecurity context is model “trustworthiness”—that is, the degree to which a model or system that integrates AI produces outputs that are accurate, reliable, and “fit for purpose” in a particular application context. For example, if a model can regularly generate code that is secure, free of bugs, and does exactly what the human user intended, it might be trustworthy in this context. 

A model’s trustworthiness almost directly controls the potential productivity benefits it can deliver by dictating whether a human must essentially run “quality control” on model outputs, such as carefully reviewing all generated code or all processed data to ensure the model did not make a mistake or miss an important fact. For example, a completely untrustworthy model saves no time (and may, in fact, waste it) because its work requires manual duplication; theoretically, a perfectly trustworthy model should not need human oversight. In practice, human oversight (whether manual or automated) in some fashion must bridge this imperfect trust. Moreover, it is important that the humans or systems performing this oversight have a good understanding of the level of oversight needed and avoid the complacency of overly trusting the system’s outputs. 

Trust is not a single benchmark but a property dictated by context. Different contexts have distinct requirements, acceptable performance levels, and potential for catastrophic errors. What matters is that the operator has an appropriate way to measure the model’s trustworthiness within a specific task context and determine its respective risk tolerances, then compare both to ensure they align. Policymakers and businesses alike should review the varied levels of criticality for AI application contexts and be specific as to both how to define the properties that a model would need to be trustworthy in each context and how to measure these properties. 

Developing better ways to measure model trustworthiness and make models more trustworthy will, for the most part, unlock opportunity. However, this factor is in the twinned risk section because, undeniably, trusting a model creates risk. The more a model has delegated tasks without stringent oversight, the greater the productivity gains—and the greater the stakes are for its performance and robustness against attack. Notably, in the cybersecurity context, embedding AI systems into broader information systems, while they remain vulnerable to adversarial inputs, creates the risk that these models could become potent vectors for hacking and abusing systems into which they integrate. In this area, it will be vitally important to benchmark and understand AI models’ vulnerability and to develop security systems that embed AI models in ways that account for these risks.23 Without better ways to measure risk before models become embedded into sensitive contexts, there is a risk that AI systems will develop their own kind of “Peter Principle” (i.e.,  AI models embedded into increasingly high-trust situations until they prove they have not earned that trust). 

Openness 

Many of the most acute benefits that GAI systems can provide in cybersecurity will come from using such systems to reduce the labor required to perform security tasks, from auditing code packages to monitoring system logs. The more open innovation there is, the more tools there will be. And the more these tools have accessible price points, the likelier it will be that less-resourced entities will use them. Competition and, in particular, the availability of open-source models can encourage innovation and experimentation to build these tools and keep costs relatively low. Open models can also benefit some of the key questions of trust that are core to AI opportunity and risk: open models are easier to experiment with and customize, making it easier for users and researchers alike to measure the trustworthiness of models in particular contexts and to customize models to meet their specific trust needs. These models are growing ever larger and also more powerful. Cohere AI recently released a 104 billion parameter model through Hugging Face.24 Open models can also contribute to higher levels of trustworthiness, allowing developer-led organizations to validate model behavior under different conditions and tasks with more control of model versions and constraints.  

At the same time, expanded access to capable models—and, in particular, open-source models—may create additional challenges in preventing model misuse. Open models foreclose abuse-preventing tools, such as monitoring application programming interface (API) requests, and allow users to remove safeguards and protections through fine-tuning. The science of safeguards and their relative strengths and weaknesses needs further study to make the case that open models create significantly more “marginal risk” than closed models.25 For example, in the cyber context, even reasonably designed safeguards may be unable to stop hackers from appropriating reasonable outputs, such as email text or scripts seeking more malign ends. However, safeguards may be more impactful when it comes to contexts like embedding watermarks in AI-generated content and similar. As model capabilities and safeguarding techniques advance, the marginal risk posed by open models may increase. 

Asymmetric drivers 

At the same time, there are some factors likely to drive primarily risk or primarily opportunity in the cybersecurity context. These asymmetric drivers of risk and opportunity make promising areas for policy intervention. 

Risk: Deepfakes and impersonation 

There are few legitimate reasons why AI models should need to generate content that imitates a person (especially an actual person) without appropriate disclosures that this content is not real. This is true across images, video, and voice recordings. Policy could knock out a series of easy wins by focusing on requiring disclosures and making AI-generated media easier to identify. Already, a bevy of proposed state initiatives exist, which, if enacted, will mandate disclosing AI-generated media in contexts from political advertising to robocalls,26  and federal lawmakers could unify these requirements with legislation to apply them consistently whenever consumers interact with advertising or businesses. Laws will not stop criminals, of course—for that, the government may need to invest in technical research to embed watermarks into AI-generated content and to help electronic communication carriers like voice and video calling implement systems for detecting faked content. This work will not be easy, requiring novel research and development as well as implementation across a variety of parties. Nonetheless, the government is the best-positioned actor to coordinate and drive this forward. 

Opportunity: Education 

Another clear opportunity is investing in ways to educate different users who will interact with and make decisions about AI—from business leaders to developers—about how to use AI in responsible and reasonable ways. This kind of education can increase the uptake of AI, where it can be helpful, while also providing an opportunity to prime these users to consider specific kinds of risks, from the need to review AI-generated code to the security risks of embedding AI systems that might be vulnerable to prompt injection. 

Opportunity: Measuring trustworthiness 

The more that operators have a grounded sense of models’ strengths and weaknesses, the more they can build applications atop them that do not run the risks of strange and unexpected failures. Policy can help steer and incentivize the development of ways to measure relevant aspects of model trustworthiness, such as a model’s accuracy (best defined in a specific context), its security and susceptibility to adversarial inputs, and the degree to which its decisions allow audits or reviews after the fact. Better measurements will unlock better usage with fewer risks. And they will enable the government to step in and demand clear standards for certain high-risk applications. 

Drivers of risk and opportunity in context 

Many of the drivers of risk and opportunity draw from the unique characteristics of this moment in AI. Understanding the story of how we got to this moment, alongside identifying some specific meta-trends that characterize it, can help policymakers comprehend the drivers of risk and opportunity as well as how they are likely to change in the future. 

Deeply unsupervised 

The first trend is the rise of unsupervised learning, alongside its resulting highly capable generalist models. The field of AI has seen the rise and fall of multiple different paradigms throughout its lifetime, with generative AI representing the next instantiation of a longer-running trend in the field toward systems that learn to make sense of data themselves using patterns and rules that are increasingly opaque to their creators. 

Many early attempts to build artificially intelligent systems focused on programming complex, pre-determined rules into computer systems. These systems could be surprisingly capable: in 1966, the first “chatterbot,” Eliza, used simple language-based rules to emulate responses from a mock therapist, with its creator finding that “some subjects have been very hard to convince that Eliza (with its present script) is not human.”27 And, in 1997, the computer Deep Blue outplayed world chess champion Garry Kasparov using brute-force computation and a complex set of rules provided by chess experts.28 Yet, these systems lacked at least one key characteristic of intelligence: the ability to learn. 

Decades before these rule-based approaches, research into how the human brain works through the interconnection and firing of neurons inspired the invention of another paradigm: neural networks.29 The weights in neural networks—updated over time by an algorithm that seeks to reduce the error between the network’s prediction and reality—allow neural networks to learn rules, patterns, and relationships not explicitly specified by their creators. While neural networks fell out of favor during a long “AI Winter,” they began to recur in the nascent field of machine learning, which focused on developing statistical algorithms that could learn to make predictions from data. 

Initially, machine learning focused primarily on supervised learning, a paradigm in which a model tries to learn relationships between input data (such as images or numerical and financial data) and output labels (such as names of items in an image or future price projections). Supervised learning with increasingly deep neural networks proved very successful for tasks like image classification, predictive analyses, spam detection, and many other tools developed during the 2000s and 2010s. 

In contrast, current generative AI systems receive their training, at least in large part, through unsupervised learning, a different paradigm in which a model reviews an immense amount of unlabeled data, such as raw text, and learns to cluster or predict that data without explicit human-provided labels (or target predictions). LLMs, like OpenAI’s Generative Pre-trained Transformer (GPT) series, are huge neural networks trained on trillions upon trillions of words of text data, much of which comes from scraped internet sites and digital books.30 Interestingly, these models still learn by making predictions and receiving error signals to correct their prediction functions—but instead of learning to predict human-generated labels, they learn to predict patterns and structure in human-generated data (text) itself. 

Unsupervised learning has increased the capacity of models, producing technologies, like ChatGPT, that can and have dazzled users and researchers alike with their capabilities. It has also created systems that are more challenging for developers, researchers, policymakers, and users to understand. Rules-based systems were definitionally transparent. Deep learning was perhaps the first indication that subsequent AI systems might bring opaque internal logic that defies easy interpretation. However, supervised approaches have still provided some clear ways to evaluate model performance within a specific domain. New unsupervised models are challenging to interpret and evaluate. Their capabilities emerge through testing and scale rather than explicit design.31 The emergence of these models preceded the development of empirical ways to test their capabilities across many of the domains they likely have skills. Harnessing the opportunity and avoiding the risks of these highly general models will require developing new ways to think about model explainability and new ways to evaluate model capabilities across the varied tasks and contexts, where their use is not only probable but also possible.32

Ravenous demand for compute and data 

The second trend focuses on the ways in which the intensive compute and data needs of the latest generation of AI model development have made current systems highly proximate to concentrated power in the hands of large technology companies. 

Current leading-edge models are big.33 Size defines the computing costs associated with training a model, namely, the size of its training dataset and the size of the model itself (often measured as the number of “parameters”). Both of these have grown ever larger and the compute required to train these massive models is expensive.34 At present, the well-capitalized and semi-commercial players (e.g., OpenAI, Meta, and Google) build most of the leading models. This creates a different paradigm than that of previous iterations of AI or machine learning systems, which more often emerged from research and academic settings. The computational and data costs of large-model development have tied the evolution of AI models to other existing technology infrastructures, especially cloud computing, with major providers to deliver, in part, the required compute (e.g., the Amazon and Microsoft partnerships with leading generative AI labs).35 Likewise, access to text data for training models has become a point of leverage. Sites like Reddit and Twitter that host lots of public text have begun charging for API access to data,36 as users question whether their technology providers take advantage of private data to train AI models (major model providers say they use only public data).37

The pressures for large labs to rapidly commercialize these systems and to recoup their investments may drive both opportunity and risk—opportunity because there will be well-capitalized machines seeking to build functional applications and use cases for these models; risk because these companies will face tremendous pressure to create product offerings from these models, regardless of their shortcomings. Closed and for-profit paradigms may make it harder for independent researchers and outsiders to access models to evaluate them and expose their weaknesses—while large labs have definitely allowed some level of access,38 for which they should be commended, it is hard to know exactly what the limits of this access and of researchers’ ability to publicly report adverse findings are. While open-source models help bridge some of this gap, this paradigm only works if open-source models are at relative parity with closed-source ones, which may not have guarantees.39

New stakeholders 

The third trend—and an important caveat to the second trend—is how the popularity and accessibility of natural language interfaces for AI models have brought a new wave of AI stakeholders into the ecosystem. Even people with no technical background can easily interact with tools like ChatGPT, Bard, and the Bing chatbot through prompts written in English (or other languages) rather than computer code. Consumers, hacker-builders, entrepreneurs, and large companies—alike—expand and help develop new potential use cases for AI. Significant application development activity is also happening based on open-source and publicly available models, led by platforms like Hugging Face and the decision by Meta to publicly release its Llama models. This distributed innovation environment creates the potential for AI’s benefits to disperse more widely and in a more decentralized way than were innovations, such as the large internet platforms of the 2000s. At the same time, this decentralization will increase the challenge for regulators seeking to set standards around the development and use of AI applications, in much the same way as regulators have struggled to define functional and universal standards for software security because of software’s heterogeneous and decentralized nature. 

Conclusions: Whose risks, whose opportunity? 

Advances in AI will bring both opportunity and risk. The key question for policymakers is not how to get only opportunity and no risk—this seems all but impossible. Instead, it is one of recognizing and seeking to balance who must deal with each. Models that can write more trustworthy and reliable code will help open-source maintainers and other organizations better shore up security—and help novice hackers write scripts and tools. Both defenders and cybercriminals will use models that can find vulnerabilities. Models that integrate into workflows entrusted to make decisions can deliver the benefits of machine speed and scale, while creating risks because humans can no longer perfectly oversee and interpret their decisions. 

With many of these cases, such as vulnerability hunting and coding, policymakers’ best option may simply be to try to encourage enterprises to build and adopt these tools into their workflows and development processes faster than they end up as common tools for malicious hackers. For certain other cases, as with deepfake-based impersonations, it may be possible to push model developers to implement tailored protections that can asymmetrically reduce their abuse potential while preserving their benefits. And, in general, policymakers can seek to develop incentives and support for the development of best practices, tools, and standards for AI assurance, to encourage enterprises and organizations to apply appropriate scrutiny in their adoption of AI, and to hold them to account when they fail to do so. 

Policymakers might also consider ways to shift more of the costs of safely integrating AI – ways of measuring trust and mitigating risk—onto the makers of these systems. The history of the debate over software liability illustrates the peril of allowing technology vendors to reap the profits from selling technology without facing any consequences when that technology proves unfit for the purpose for which they sold it.40 The debate over software liability has raged for decades.41 Maybe the advent of AI provides an opportunity to adopt a new paradigm a little sooner. 

The balance of risk and opportunity for the end users of technology should be a primary concern for policymakers; how the market and policy equip cybersecurity defenders will play a significant role in determining that balance. Thus, there remain plenty of opportunities (and risks) for policymakers to evaluate in these next formative years of AI policy. 

About the authors

Maia Hamin is currently serving an assignment under the Intergovernmental Personnel Act at the US AI Safety Institute within the National Institute of Standards and Technology (NIST). She is on leave from the Cyber Statecraft Initiative, where she held the position of associate director with the Cyber Statecraft Initiative, part of the Atlantic Council Tech Programs. Hamin’s contributions to this work predate her NIST assignment, and the views expressed in this paper are not those of the AI Safety Institute.

Jennifer Lin is a former Young Global Professional with the Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Tech Programs. During her time with the team, she was a sophomore at Stanford University double-majoring in political science and symbolic systems, with a specialization in artificial intelligence.

Trey Herr is senior director of the Cyber Statecraft Initiative (CSI), part of the Atlantic Council Technology Programs, and assistant professor of global security and policy at American University’s School of International Service.

Acknowledgments 

Thank you to the CSI team for support on this project as well as Charlette Goth-Sosa and Donald Partkya for editing and production support. Thank you also to several reviewers at different stages of drafting including Harriet Farlow, Chris Wysopal, Kevin Klyman, and others who wish to remain anonymous. 


The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    “Assistants API Overview: How Assistants work,” Open AI Platform, accessed June 30, 2024, https://platform.openai.com/docs/assistants/overview.,
2    Patrick Lewis et al, “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks,” arXiv, April 12, 2021 [last revised], https://doi.org/10.48550/arXiv.2005.11401.
3    Advanced Research Projects Agency for Health (ARPA-H), “ARPA-H Joins DARPA’s AI Cyber Challenge to Safeguard Nation’s Health Care Infrastructure from Cyberattacks,” March 21, 2024, https://arpa-h.gov/news-and-events/arpa-h-joins-darpas-ai-cyber-challenge; AI Cyber Challenge (AIxCC), accessed June 30, 2024, https://aicyberchallenge.com/.
4    “Zellic Wins $1M From DARPA in the AI Cyber Challenge,” Zellic, April 4, 2024. https://www.zellic.io/blog/zellic-darpa-aixcc/.
5    Manish Bhatt et al., “CyberSecEval 2: A Wide-Ranging Cybersecurity Evaluation Suite for Large Language Models,” arXiv, April 19, 2024. http://arxiv.org/abs/2404.13161.
6    Bhatt et al., “CyberSecEval 2: A Wide-Ranging Cybersecurity Evaluation Suite.”
7    Sergei Glazunov and Mark Brand, “Project Naptime: Evaluating Offensive Security Capabilities of Large Language Models,” Google Project Zero (blog), June 20, 2024, https://googleprojectzero.blogspot.com/2024/06/project-naptime.html.
8    “Secure by Design Pledge,” US Cybersecurity and Infrastructure Security Agency (CISA), accessed June 30, 2024, https://www.cisa.gov/securebydesign/pledge; Isabella Wright and Maia Hamin, “‘Reasonable’ Cybersecurity in Forty-Seven Cases: The Federal Trade Commission’s Enforcement Actions Against Unfair and Deceptive Cyber Practices.” Cyber Statecraft Initiative, June 12, 2024. https://dfrlab.org/2024/06/12/forty-seven-cases-ftc-cyber/.
9    AI models, which receive human feedback on their predictions, learn to generate outputs that receive more favorable feedback. See Paul Christiano et al., “Deep Reinforcement Learning from Human Preferences,” arXiv, February 17, 2023, http://arxiv.org/abs/1706.03741.
10    Anthony Bartolo, “GitHub Copilot Update: New AI Model That Also Filters Out Security Vulnerabilities,” Microsoft (blog), Feb 16, 2023, https://techcommunity.microsoft.com/t5/educator-developer-blog/github-copilot-update-new-ai-model-that-also-filters-out/ba-p/3743238.
11    “CISA Artificial Intelligence Use Cases,” US Cybersecurity and Infrastructure Security Agency (CISA), accessed June 30, 2024, https://www.cisa.gov/ai/cisa-use-cases.
12    Andrew Lohn, Anna Knack, Ant Burke, and Krystal Jackson, “Autonomous Cyber Defense: A Roadmap from Lab to Ops,” Center for Security and Emerging Technology (CSET), June 2023, https://cset.georgetown.edu/publication/autonomous-cyber-defense/.
13    Maia Hamin and Stewart Scott, “Hacking with AI,” Cyber Statecraft Initiative, February 15, 2024, https://dfrlab.org/2024/02/15/hacking-with-ai/.
14    “Disrupting Malicious Uses of AI by State-Affiliated Threat Actors,” OpenAI, February 14, 2024, https://openai.com/index/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors/
15    Ben Buchanan, Andrew Lohn, Micah Musser, and Katerina Sedova, “Truth, Lies, and Automation,” Center for Security and Emerging Technology (CSET), May 2021, https://cset.georgetown.edu/publication/truth-lies-and-automation/
16    Glazunov and Brand, “Project Naptime: Evaluating Offensive Security Capabilities.”
17    Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh, “Do Users Write More Insecure Code with AI Assistants?” In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, (November 2023), 2785–99, https://doi.org/10.1145/3576915.3623157.
18    Evan Ackerman, “Slight Street Sign Modifications Can Completely Fool Machine Learning Algorithms,” IEEE Spectrum, August 2017, https://spectrum.ieee.org/slight-street-sign-modifications-can-fool-machine-learning-algorithms.
19    Melissa Heikkilä, “Three Ways AI Chatbots Are a Security Disaster,” MIT Technology Review, April 3, 2023, https://www.technologyreview.com/2023/04/03/1070893/three-ways-ai-chatbots-are-a-security-disaster/
20    Bhatt et al., “CyberSecEval 2: A Wide-Ranging Cybersecurity Evaluation Suite.”
21    Arvind Narayanan (@random_walker), “While Playing around with Hooking up GPT-4 to the Internet, I Asked It about Myself… and Had an Absolute WTF Moment before Realizing That I Wrote a Very Special Secret Message to Bing When Sydney Came out and Then Forgot All about It. Indirect Prompt Injection Is Gonna Be WILD Https://T.Co/5Rh1RdMdcV,” X, formerly Twitter, March 18, 2023, 10:50 p.m., https://x.com/random_walker/status/1636923058370891778.
22    Anna Knack and Ant Burke, “Autonomous Cyber Defence: Authorized Bounds for Autonomous Agents,” Alan Turing Institute, May 2024, https://cetas.turing.ac.uk/sites/default/files/2024-05/cetas_briefing_paper_-_autonomous_cyber_defence_-_authorised_bounds_for_autonomous_agents.pdf
23    Caleb Sima, “Demystifing LLMs and Threats.” Csima (blog), August 15, 2023, https://medium.com/csima/demystifing-llms-and-threats-4832ab9515f9.
24    Cohere 4 AI, “Model Card for Cohere 4 AI Commanr R+”, May 23, 2024, https://huggingface.co/CohereForAI/c4ai-command-r-plus.
25    Sayash Kapoor et al., “On the Societal Impact of Open Foundation Models,” February 27, 2024, https://arxiv.org/pdf/2403.07918v1.
26    Bill Kramer, “Transparency in the Age of AI: The Role of Mandatory Disclosures,” Multistate, January 19, 2024. https://www.multistate.ai/updates/vol-10.
27    Ben Tarnoff, “Weizenbaum’s Nightmares: How the Inventor of the First Chatbot Turned against AI,” Guardian, July 25, 2023, https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai.
28    IBM, “Deep Blue,” accessed June 30, 2024, https://www.ibm.com/history/deep-blue.
29    Warren S McCulloch and Walter Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biophysics 5 (1943), https://home.csulb.edu/~cwallis/382/readings/482/mccolloch.logical.calculus.ideas.1943.pdf.
30    Dennis Layton, “ChatGPT – Show Me the Data Sources,” Medium (blog), January 30, 2023, https://medium.com/@dlaytonj2/chatgpt-show-me-the-data-sources-11e9433d57e8.
31    Jason Wei et al., “Emergent Abilities of Large Language Models,” arXiv, October 26, 2022, https://doi.org/10.48550/arXiv.2206.07682.
32    Leilani H. Gilpin et al., “Explaining Explanations: An Overview of Interpretability of Machine Learning,” arXiv, February 3, 2019, http://arxiv.org/abs/1806.00069
33    Anil George, “Visualizing Size of Large Language Models,” Medium (blog), August 1, 2023, https://medium.com/@georgeanil/visualizing-size-of-large-language-models-ec576caa5557.
34    Jaime Sevilla et al., “Compute Trends Across Three Eras of Machine Learning,” 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, (2022), 1–8, https://doi.org/10.1109/IJCNN55064.2022.9891914.
35    Amazon Staff, “Amazon and Anthropic Deepen Their Shared Commitment to Advancing Generative AI,” March 27, 2024. https://www.aboutamazon.com/news/company-news/amazon-anthropic-ai-investment; “Microsoft and OpenAI Extend Partnership,” Official Microsoft Blog, January 23, 2023, https://blogs.microsoft.com/blog/2023/01/23/microsoftandopenaiextendpartnership/.
36    Mike Isaac, “Reddit Wants to Get Paid for Helping to Teach Big A.I. Systems,” New York Times, April 18, 2023, https://www.nytimes.com/2023/04/18/technology/reddit-ai-openai-google.html.
37    Eli Tan, “When the Terms of Service Change to Make Way for A.I. Training,” New York Times, June 26, 2024, https://www.nytimes.com/2024/06/26/technology/terms-service-ai-training.html
38    “OpenAI Red Teaming Network,” accessed June 30, 2024, https://openai.com/index/red-teaming-network/.
39    Xiao Liu et al., “AgentBench: Evaluating LLMs as Agents.” arXiv, October 25, 2023, http://arxiv.org/abs/2308.03688; “LMSys Chatbot Arena Leaderboard,” Hugging Face, accessed June 30, 2024, https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard; “SEAL Leaderboards,” Scale, accessed June 30, 2024, https://scale.com/leaderboard
40    Bruce Schneier, “Liability Changes Everything,” November 2003, https://www.schneier.com/essays/archives/2003/11/liability_changes_ev.html.
41    Maia Hamin, Sara Ann Brackett, and Trey Herr, “Design Questions in the Software Liability Debate,” Cyber Statecraft Initiative, January 16, 2024, https://dfrlab.org/2024/01/16/design-questions-in-the-software-liability-debate/.

The post AI in cyber and software security:  What’s driving opportunities and risks? appeared first on Atlantic Council.

]]>
NATO must recognize the potential of open-source intelligence https://www.atlanticcouncil.org/blogs/new-atlanticist/nato-must-recognize-the-potential-of-open-source-intelligence/ Tue, 13 Aug 2024 19:02:28 +0000 https://www.atlanticcouncil.org/?p=780661 By taking steps to use OSINT more effectively, NATO can preempt, deter, and defeat its adversaries’ efforts to expand their influence and undermine the security of member states.

The post NATO must recognize the potential of open-source intelligence appeared first on Atlantic Council.

]]>
Air Marshal Sir Christopher Harper is a former UK military representative to NATO and served as director general of the NATO International Military Staff from 2013 to 2016. He is a nonresident senior fellow with the Transatlantic Security Initiative in the Atlantic Council’s Scowcroft Center for Strategy and Security and an adviser to companies, including Accenture and Adarga, which provide AI tools for processing open-source information, including for public-sector clients.

Robert Bassett Cross is a former British Army officer and the founder and CEO of the UK-headquartered AI software developer Adarga. He is a nonresident senior fellow at the Forward Defense practice of the Atlantic Council’s Scowcroft Center for Strategy and Security and an honorary research fellow at the University of Exeter’s Strategy and Security Institute.


Writing in 1946, just a few years before NATO was founded, Director of the US Office of Strategic Services Bill Donovan knew precisely how valuable publicly available information could be.

“[E]ven a regimented press,” he wrote, “will again and again betray the national interest to a painstaking observer . . . Pamphlets, periodicals, scientific journals are mines of intelligence.”

Today, seventy-five years after the Alliance was formed, such open-source intelligence (OSINT) is more important—and more powerful—than ever. However, underinvestment in OSINT capabilities and a culture favoring classified data currently hold back member states’ intelligence-collection potential. To fully utilize the available technology to detect threats from adversaries, NATO member states must overcome these barriers to embrace open-source intelligence enabled by artificial intelligence (AI).

Understanding the threat landscape

OSINT can help leaders get a fast, up-to-date understanding of their operating environment. If you want to know who’s doing what, where, and when, then an open-source specialist can quickly tell you.

If, for example, you want to find out who’s jamming GPS systems in the Baltic region, the relevant data isn’t hard to come by. Similarly, OSINT analysts can provide insights into issues ranging from the effectiveness of Iran’s attack on Israel (and the Israeli response) to China’s current role in fueling the Russian war machine. 

In recent years, it has become increasingly clear that, in addition to insight into current and recent events, OSINT can help leaders forecast what an adversary might be planning to do weeks, months, or even years from now.

By exploiting OSINT more fully and by integrating it into the wider intelligence cycle, NATO can preempt, deter, and defeat its adversaries’ efforts to expand their influence and undermine the security of member states. Here are several ways that OSINT can be used:

  1. Across the physical domains of land, air, sea, and space, NATO can exploit publicly and commercially available data to explore an adversary’s order of battle and—more importantly—monitor changes in the strength and disposition of its military units and formations to infer its intent.
  2. In the cyber domain, NATO can leverage commercially available information to detect and counter the penetration of networks governing critical infrastructure, as well as those related to research organizations, academic institutions, and technology developers.
  3. In the information space, OSINT can help NATO identify, understand, and counter influence campaigns, specifically when it comes to the detection and attribution of disinformation and misinformation.
  4. NATO can draw on vast swaths of open-source data to infer long-term strategic intent. Every subtle change to a government’s policies, every adjustment to its economic positioning and investment strategy, every new law and regulation it enacts, every new treaty and trade agreement—all of these can help the Alliance reverse engineer an adversary’s confidential playbooks.

Given the vast quantity, complexity, and diversity of the data, it is vital that NATO employs AI to extract the maximum value from it—to enhance analysts’ abilities, accelerate the analysis cycle, and build a reliable, contextual understanding of what Donovan called “the strategy developing silently behind the mask.”

The barriers to OSINT adoption

While AI is, of course, an emerging technology, its utility is already being realized across industries and sectors outside defense. From corporate intelligence and advisory services to finance and media, more and more private-sector organizations are using AI to make sense of the information environment, drawing on an ever-expanding range of sources to manage risk, identify opportunities, and adapt to geopolitical volatility.

However, the barriers to its widespread adoption and effective exploitation in political and military circles remain considerable. A paper published in 2022 by the Royal United Services Institute (RUSI), in collaboration with the Centre for Emerging Technology and Security and the Alan Turing Institute, identified three in particular.

First, there are tradecraft barriers relating to the methodologies governing everything from the analysis of publicly available information to the evaluation and dissemination of the resulting intelligence. Second, there are resourcing barriers stemming from underinvestment in the requisite tools, technologies, data sets, and training.

The third barrier identified by the RUSI authors—and the most daunting one—is cultural. Presented with so much open-source data, analysts and decision makers tend to favor classified information and internal data sets. These sources and insights are easier to trust and are imbued with what the authors call “the perceived power of the ‘secret’ label.” 

Speaking at the Eurosatory exhibition in Paris in June, US Major General Matthew Van Wagenen, deputy chief of staff for operations at NATO, confirmed how great this cultural barrier is. Up to 90 percent of “what Western militaries are looking for,” he said, can be derived from open sources:

This is a revolution in how we look at information. The ways of discerning information through classical means and techniques, tactics, and procedures that militaries have been adapted to—that’s really an old model of doing business. The new open source that’s out there right now, and the speed of information and relevance of information is coming, this is how things need to be looked at.

It is reasonable to believe that the tradecraft and resourcing barriers can be overcome. Methodologies are evolving swiftly, as are the requisite technologies. In fact, many of the tools NATO needs to capitalize on OSINT already exist. New AI applications are coming online almost every week. But if NATO fails to overcome the cultural barrier, it risks going into the next conflict underinformed and ill-prepared.

How AI-enabled OSINT can earn NATO leaders’ confidence

The cultural barrier to AI-enabled OSINT cannot be surmounted simply by decree or directive. Nor can it be overcome by intelligence professionals alone. The technology—and the discipline—must earn the justified confidence of civilian leaders and military commanders across the international staff, the military committee, and the supporting agencies. This could happen if AI-enabled OSINT were applied first to the simplest intelligence-gathering tasks before being applied to the most complex. To borrow the terminology made famous by former US Defense Secretary Donald Rumsfeld, NATO should apply the discipline to corroborating “known knowns,” resolving “known unknowns,” and surfacing “unknown unknowns.”

Corroborating “known knowns”: NATO should start by recognizing where the skills of the human analyst currently outperform even the most sophisticated models, and where AI can best be applied to elevate these skills. This means asking the right kind of questions, and employing OSINT to corroborate what is already known and to triangulate insights gathered from well-established secret sources. In this way, NATO can begin to overcome the skepticism that’s too often associated with publicly available information and OSINT. 

Resolving “known unknowns”: With so much data to draw on, it is essential that NATO uses AI to help collate, process, and (where necessary) translate that data so it is ready for analysts to interpret. If AI-enabled OSINT can prove useful to intelligence professionals in this capacity, those professionals may be more willing to apply it to the most complex and valuable intelligence tasks of all—surfacing risks and opportunities that civilian and military leaders would otherwise struggle to identify.

Surfacing “unknown unknowns”: Perhaps the greatest contribution that AI can make to the intelligence-gathering discipline is identifying patterns and connections that are invisible to the human eye. Dedicated, AI-powered information-intelligence applications that synthesize publicly available information with proprietary data can help analysts and decision makers tease out insights they would otherwise miss.

This combination of publicly available information with classified data will enable NATO analysts to give military and political leaders a uniquely rich, nuanced, and highly contextualized understanding of the operating environment. Decision makers at every level will be able to examine intelligence from every angle, and apply their experience and imagination to infer an adversary’s intentions based on the interplay of evidence.

The critical need for human-machine teaming

The necessary tools and methodologies exist. What’s missing is the determination to get these tools into users’ hands, to supply the requisite training, and to capitalize on the integrated output derived from all sources of intelligence, open-source and otherwise.

OSINT is becoming known among some intelligence professionals as “the intelligence of first resort.” Compared with clandestine methods of information gathering and analysis, OSINT is fast, low-cost, and low-risk. But if it can be combined with those same methods then NATO’s analysts and leadership will have an enduring competitive edge, with access to the kind of strategic information that would likely be, in Bill Donovan’s words, “of determining influence in modern war.”


NATO’s seventy-fifth anniversary is a milestone in a remarkable story of reinvention, adaptation, and unity. However, as the Alliance seeks to secure its future for the next seventy-five years, it faces the revanchism of old rivals, escalating strategic competition, and uncertainties over the future of the rules-based international order.

With partners and allies turning attention from celebrations to challenges, the Atlantic Council’s Transatlantic Security Initiative invited contributors to engage with the most pressing concerns ahead of the historic Washington summit and chart a path for the Alliance’s future. This series will feature seven essays focused on concrete issues that NATO must address at the Washington summit and five essays that examine longer-term challenges the Alliance must confront to ensure transatlantic security.

The post NATO must recognize the potential of open-source intelligence appeared first on Atlantic Council.

]]>
Sailing through the spyglass: The strategic advantages of blue OSINT, ubiquitous sensor networks, and deception https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/sailing-through-the-spyglass-the-strategic-advantages-of-blue-osint-ubiquitous-sensor-networks-and-deception/ Thu, 08 Aug 2024 10:43:41 +0000 https://www.atlanticcouncil.org/?p=781627 In today’s technologically enabled world, the movements of every vessel—from nimble fishing boats to colossal aircraft carriers—can be meticulously tracked by a massive network of satellites and sensors. With every ripple on the ocean’s surface under scrutiny, surprise naval maneuvers will soon be relics of the past.

The post Sailing through the spyglass: The strategic advantages of blue OSINT, ubiquitous sensor networks, and deception appeared first on Atlantic Council.

]]>
In today’s technologically enabled world, the movements of every vessel—from nimble fishing boats to colossal aircraft carriers—can be meticulously tracked by a massive network of satellites and sensors. With every ripple on the ocean’s surface under scrutiny, surprise naval maneuvers will soon be relics of the past. The vast expanse of the world’s oceans will no longer be shrouded in mystery, but illuminated by data streams flowing from millions of eyes and ears aware of every movement from space to seabed.

Open-source intelligence (OSINT) refers to intelligence derived exclusively from publicly or commercially available information that addresses specific intelligence priorities, requirements, or gaps. OSINT encompasses a wide range of sources, including public records, news media, libraries, social media platforms, images, videos, websites, and even the dark web. Commercial technical collection and imagery satellites also provide valuable open-source data. The power of OSINT lies in its ability to provide meaningful, actionable intelligence from diverse and readily available sources.

Thanks to technological advances, OSINT can provide early warning signs of a conflict to come long before it actually breaks out. On land, the proliferation of inexpensive and ubiquitous sensor networks has rendered battlefields almost transparent, making surprise maneuvers more difficult. Through open-source data from smartphones and satellites, persistent OSINT provides early warning of mobilization and other key indicators of military maneuvers. This capability is further augmented by artificial intelligence (AI)-enhanced reconnaissance and real-time data analysis, which have proven remarkably effective in modern conflicts including in Ukraine, Azerbaijan, Gaza and Israel, and Sudan. As this paradigm extends to maritime operations, it brings unique challenges and characteristics compared to land operations.

As technology races forward, Blue OSINT stands out as a key tool in the arsenal of contemporary naval warfare during global great-power competition. Blue OSINT harnesses data from commercial satellites, social media, and other publicly available sources to specifically enhance maritime domain awareness, identify emerging threats, and inform strategic decisions.

The current state of Blue OSINT across the spectrum of conflict points to an accelerating technology-driven evolution enabling maritime security and sea-control missions. The US Navy (USN) can enhance Blue OSINT collection with its own commercially procured sensor networks and bespoke uncrewed systems to shape operational environments, prevent and resolve conflicts, and ensure accessibility of sea lines of communications.

Commercially procured sensors span a wide array of technologies, including sonar and acoustic sensors, as well as video and seismic devices that are utilized to detect activities in strategic locations. These sensors can function independently or operate from uncrewed systems, providing flexibility and adaptability in various maritime operations. For instance, uncrewed aerial systems (UAS) equipped with high-resolution cameras and radar can deliver persistent surveillance over expansive oceanic areas, while uncrewed underwater vehicles (UUVs) with sonar capabilities can monitor subsea activities, such as submarine movements and underwater installations. These uncrewed platforms enable the continuous collection of critical data, enhancing the Navy’s situational awareness and operational readiness without putting sailors at risk.

For the US Navy to best support the joint force and maintain its strategic edge, it must integrate ubiquitous sensor networks and Blue OSINT into naval strategies adapted for tomorrow’s increasingly complex maritime environment. The Navy’s multiyear Project Overmatch is a good start to developing its “network of networks” and contributing to the Joint All-Domain Command and Control (JADC2) program.

With escalating tensions in the South China Sea, conventional forces are stretched thin and face asymmetric threats such as the People’s Liberation Army Navy (PLAN)’s undersea sensing arrays and China’s maritime militia forces. Integrating Blue OSINT and sensor networks into the Navy’s strategies complements traditional naval power, while allowing intelligence missions to be conducted at lower risk and cost. Moreover, the open-source nature of this information enhances the Navy’s ability to share information and collaborate with allies and partners while bypassing cumbersome security classification issues. By relying on easily shareable information, the Navy can better synchronize efforts with partner navies, making command of the sea a more coordinated and viable endeavor.

The impact of evolving open-source intelligence on warfare

Feature OSINT Traditional Intelligence
Source of data Commercial satellites, social media, public sources HUMINT, SIGINT, classified sources
Coverage Global, real-time updates, highly accessible Selective, based on specific operational requirements
Cost Low cost, leveraging existing commercial infrastructure High cost, involving extensive human and technical resources
Risk Low risk, minimal direct exposure Higher risk, involves clandestine operations
Data volume Extremely high, necessitates AI and advanced analytics Moderate to high, manageable with traditional methods
Ease of sharing High, fewer classification issues Low, often restricted by security classifications
Data warning Effective, provides pre-conflict indicators Effective, but often limited by operational scope
Deception tactics Requires advanced techniques to counteract Relies on traditional counterintelligence and technical methods
Collaboration Enhances collaboration with allies using open data Limited, restricted sharing due to classification
Operational impact Supports continuous monitoring and quick response Supports deep, targeted insights into adversaries

The table above provides a comparison between OSINT and traditional intelligence methods, highlighting the strengths and weaknesses of each approach. OSINT offers global, real-time updates at a lower cost by leveraging existing commercial infrastructure. This approach presents a lower risk, as it involves minimal direct exposure and facilitates easier information sharing due to fewer classification issues.

On the other hand, traditional intelligence methods such as human intelligence (HUMINT) and signals intelligence (SIGINT) provide selective, targeted insights based on specific operational requirements. These methods often involve higher costs and risks due to the need for extensive human and technical resources, as well as the nature of clandestine operations. While traditional intelligence can offer deep, targeted insights, it is often limited by operational scope and security classification issues, making information sharing more challenging.

In the maritime domain, these distinctions are particularly significant. The concept of Blue OSINT integrates these principles specifically for naval operations, emphasizing the need for continuous monitoring and rapid-response capabilities.

Blue OSINT and persistent maritime monitoring

In the pre-conflict stage, global satellite coverage and social media provide a wealth of data that can map maritime activity with unprecedented detail. Nonprofit organizations like Global Fishing Watch use commercial satellite constellations to track ships and monitor maritime activity. Increased affordability and accessibility of satellite technology have enabled nongovernmental and commercial entities to contribute to maritime domain awareness in new ways. For instance, maritime radar emissions—once the exclusive domain of military and intelligence satellites—are now easily observable and “tweetable,” allowing for vessel identification to be accomplished more easily when actors execute deceptive techniques. Similarly, platforms like X (formerly Twitter) host numerous “ship spotting” accounts, where enthusiasts post photos and updates of vessels passing through strategic chokepoints and major straits, further enriching the available data.

Through persistent monitoring and large-scale data analysis, Blue OSINT can be used to significantly mitigate the challenge of monitoring large exclusive economic zones (EEZs). It offers a cost-effective alternative to traditional patrols, allowing these navies to adopt a more targeted approach when deploying their limited resources. By embracing Blue OSINT, naval forces can enhance their surveillance and response capabilities without a heavy financial burden, ensuring that these forces remain agile and effective in their maritime operations. Additionally, data streams from ubiquitous sensor networks can be coupled with Blue OSINT collection to give naval intelligence experts near-endless amounts of data in support of complex reconnaissance operations, without placing sailors and special operators at increased risk to collect it.

In addition to myriad opportunities for intelligence collection, using Blue OSINT presents technological challenges for the US Navy. The sheer volume of data generated by ubiquitous sensor networks and Blue OSINT tools necessitate substantial investments in software and analytic tools to manage and interpret this information effectively. Intelligence professionals must sift through endless amounts of data to identify actionable insights. Even the most skilled analysts need software and computer processing that can help organize and parse raw data.

To address these challenges, the US Navy and other maritime forces are ramping up investments in commercially procured sensor networks and cutting-edge analytic tools. In June 2024, the National Geospatial-Intelligence Agency issued its first-ever commercial solicitation for unclassified technology to help track illicit fishing in the Pacific. Such investments aim to access, exploit, and process the massive amounts of data generated, a key step to achieving comprehensive maritime domain awareness. Better software and analytic tools can help maximize the potential of Blue OSINT and sensor networks, ensuring that intelligence analysts can better inform decision-makers at the speed of relevance.

Strategic deployment of distributed sensors

While Blue OSINT provides valuable insights into chokepoints and shipping lanes, it does not yet offer comprehensive coverage of the open ocean. Its effectiveness is greater in populated and coastal areas, where the density of electronic devices and human activity is significantly higher than on the high seas. Moreover, OSINT data can often be easily manipulated, presenting challenges in ensuring the accuracy and reliability of the information gathered. For example, although ships emitting Automatic Identification System (AIS) signals can be tracked on the web, navies are aware that bad actors often tamper with their transponders in order to disguise their locations, ultimately limiting the signals’ reliability.

To bypass these limitations of open-source data, navies and intelligence agencies can enhance their Blue OSINT capabilities by augmenting them with strategically deployed clandestine sensor networks in key locations, such as harbors, straits, and other critical chokepoints. This combination of data flows allows for effective monitoring and data collection on vessel movements, communications, and adversary intentions. Additionally, other covert sensors can be hidden on the seabed or disguised on civilian vessels, like fishing boats, in regions such as the South China Sea. Using distributed sensors along with Blue OSINT data ensures continuous and comprehensive maritime situational awareness, even in areas less frequented by military assets.

However, fixed sensor networks alone are insufficient to cover the dynamic maritime environment. Deploying a mobile network of distributed sensors necessitates a diverse array of platforms and technologies. While military satellites, ships, and aircraft equipped with advanced sensors can offer intermittent coverage, they are costly and limited in number, and their findings are less easily shareable with partners and allies. To bridge these gaps, allied navies should invest in affordable and scalable solutions such as uncrewed surface vehicles (USVs), UUVs, and UASs. Outfitted with various sensors, these platforms can effectively detect and track adversary movements, ensuring that navies maintain situational awareness across the vast expanse of the Pacific Ocean and other critical regions.

Small UASs launched from naval ships can be used to rapidly surveil large swaths of sea, providing real-time data on both surface and subsurface activities. Recognizing the strategic advantage of uncrewed systems, China has taken a bold step to outpace the US Navy by developing an aircraft carrier specifically designed to launch and recover UASs, rather than sophisticated manned platforms like the J-20 fighter jet. This significant investment in a carrier solely for uncrewed vehicles by the PLAN should prompt the United States to reconsider, and potentially adjust, its future resourcing strategy. Similarly, USVs can conduct long-duration patrols at a fraction of the cost of manned ship operations, exemplified by Saildrone vessels patrolling the Indian Ocean, providing the USN a robust sensor network. UUVs, deployed from submarines or surface ships, can monitor subsea activities, such as the movement of submarines and other submersible assets.

By monitoring the air, sea, and underwater environments, uncrewed vehicles and their sensors can significantly enhance overall maritime situational awareness. However, these tools are only effective if they are integrated into a cohesive architecture that combines traditional intelligence, surveillance, and reconnaissance (ISR) with Blue OSINT data and affordable long-term leave-behind sensors. Project Overmatch exemplifies how to achieve this integration by developing a network that links sensors, shooters, and command nodes across all domains. For instance, Project Overmatch aims to leverage advanced data analytics, artificial intelligence, and secure communications to create a unified maritime operational picture, enabling faster and more informed decision-making. By incorporating these elements, the US Navy can ensure that uncrewed vehicles and their sensors are effectively utilized to maintain operational superiority in the maritime domain.

Moreover, the low-signature nature of some of these sensors increases the odds that they can operate undetected by adversaries, providing a strategic advantage. By deploying sensors in unexpected locations, and disguising them as civilian assets in some cases, navies can gather intelligence without alerting potential threats to their presence.

Blue OSINT and sensor networks in conflict

While Blue OSINT collection and distributed sensor networks can easily collect data in uncontested waters, they have immediate applications to modern maritime conflict as well. For instance, in the event of a cross-strait invasion by the People’s Republic of China (PRC), the transparency provided by Blue OSINT would make it difficult for navies to maneuver undetected. Satellites and social media continuously monitor naval piers, strategic chokepoints, and even some open ocean areas, making it increasingly difficult to achieve tactical surprise. Historical instances—such as Japan’s attack on Pearl Harbor, the D-Day invasion, or the successful surprise dash to transit the English Channel by the German fleet during World War II—would be much harder to achieve in the modern era due to the pervasive nature of Blue OSINT.

In the context of a potential Taiwan invasion, Blue OSINT would likely be used to detect and closely follow Chinese naval activities, including the movement of amphibious assault ships and submarines. OSINT analysts frequently examine satellite imagery of Chinese shipyards and military installations, which could provide early indications of mobilization.

However, relying solely on satellite imagery and AIS for Blue OSINT is insufficient. Multi-intelligence capabilities are essential to provide a comprehensive assessment. For instance, in 2020, two commercial firms collaborated to use radio frequency and synthetic aperture radar collection to detect Chinese illegal, unregulated, and unreported fishing near the Galapagos EEZ. This open-source technique revealed the ability to identify fishing vessels that turned off their AIS to cross into the EEZ. In a future conflict with China, the same methodology of combining multiple Blue OSINT sources could be used to identify and track vessels of the People’s Armed Forces Maritime Militia (PAFMM). This would bypass the AIS vulnerabilities that the PAFMM traditionally exploits to avoid detection, while also revealing its intentions as directed by the PLAN.

The Russo-Ukraine conflict revealed how OSINT can thwart surprise maneuvers and provide crucial targeting data deep behind enemy lines. However, it also underscores the limitations of OSINT in sparsely populated environments, such as the open ocean. For example, in December 2023, as missiles flew over the Red Sea, 18 percent of global container-ship capacity was rerouted. While civilian mariners and commercial shipping significantly contribute to Blue OSINT during peacetime, their absence in a high-risk conflict scenario would shift the burden more heavily onto satellite and uncrewed systems.

Deception and stealth

While the US Navy can take advantage of these technologies, its adversaries can, and almost certainly will, do the same. The US Navy and its allies must develop countermeasures to mitigate the risks posed by sensor networks while also leveraging its benefits. One approach is to invest in advanced deception tactics designed to mislead adversaries. These include the use of decoys, electronic warfare, and signal spoofing to create false targets and confuse enemy sensors. The Navy has been quietly developing these tools to obscure its true movements and intentions, ultimately confounding adversaries and making it harder for them to accurately target US forces.

In addition to deception, the United States and its allies need to enhance their naval stealth capabilities to evade adversaries’ distributed sensor networks. This involves not only minimizing the electromagnetic signatures of their vessels, but also employing innovative designs and operational tactics to reduce their radar cross-sections and avoid detection.

Distributed sensors in conflict

The ability to complement Blue OSINT with distributed sensors will be a decisive factor in near-term conflict dynamics. Just as frontline units in Ukraine are detected and targeted by cheap drones and stationary sensors, naval forces can be identified and pinpointed by similar systems at sea. Distributed sensors can provide continuous monitoring and data collection, ensuring that navies can maintain situational awareness and respond swiftly to emerging threats.

Three pillars are necessary to distribute sensors effectively across the ocean.

First, large conventional fleets play a critical role in maritime strategy. These fleets must be capable of extended operations and diverse missions, providing the backbone of naval presence, power projection, sea lines of communication, and, ultimately, sea control. During the COVID-19 pandemic, the US Navy demonstrated its endurance with record-length deployments, showcasing an advantage that could be significant in future maritime campaigns.

Second, organic reconnaissance drones are essential. Each destroyer and aircraft carrier should be equipped with its own fleet of multi-domain drones to conduct surveillance and gather intelligence. Currently, US carrier strike groups rely on land-launched surveillance drones, which are vulnerable and limited in number. Integrating organic drones into each vessel would enhance situational awareness and operational flexibility, allowing for more effective and autonomous intelligence-gathering capabilities.

Third, large fleets of affordable USVs and UUVs can deploy sensors across the ocean, increasing sensor hours at sea and improving maritime domain awareness. The first Replicator tranche is equipping forces with thousands of attritable systems to turn the Taiwan Strait into “an unmanned hellscape,” demonstrating the strategic value of uncrewed systems in contested waters. Moreover, the Navy is experimenting with diverse types of uncrewed platforms, aiming to create a distributed fleet architecture that is even more lethal than today’s carrier-centric fleet. These unmanned systems provide a cost-effective means to enhance surveillance and reconnaissance capabilities across vast oceanic areas, ensuring that the Navy can maintain a strategic advantage in both peacetime and conflict scenarios.

Recommendations

To maximize the efficacy of maritime domain awareness, it is crucial to integrate data from both Blue OSINT and ubiquitous sensor networks. While these two systems of data collection are largely distinct, their combined use can significantly enhance the accuracy and comprehensiveness of intelligence assessments and naval warfare.

  1. Leverage Blue OSINT. Significant investment in artificial intelligence and advanced analytics is necessary to manage and interpret the endless amounts of data generated by open-source intelligence. By fostering a coordinated approach to maritime security, Blue OSINT can facilitate easier information sharing with allies and partners, but only if its utilization is preplanned. Collaborative pathways for Blue OSINT data collection, processing, and analysis must take shape early in the concept and planning phases. This collaborative effort will significantly enhance collective situational awareness and operational effectiveness, making it easier for navies to synchronize their efforts. Additionally, complementing Blue OSINT with traditional intelligence collection such as HUMINT and SIGINT provides a comprehensive threat assessment. By integrating these capabilities, navies can more easily attain a well-rounded understanding of adversary actions.
  2. Commercially procure distributed sensing capabilities and networks. The US Navy must invest in Replicator-style unmanned platforms that can affordably deploy sensors across maritime battlefields, similar to the use of small UAS for land reconnaissance. These commercially procured distributed sensing platforms will significantly enhance the Navy’s ability to continuously and comprehensively monitor vast areas, improving overall maritime domain awareness.
  3. Recognize a new maritime operating environment. The US Navy must prepare for protracted missions away from easily monitored ports and chokepoints while penetrating adversary-controlled, denied waters. This mission set requires a robust logistical framework capable of supporting extended deployments in remote and contested waters. By developing sophisticated tactics to deceive and confuse distributed sensor networks, the Navy can minimize its visibility to adversaries and maintain strategic surprise. This necessitates investing in advanced deception technologies such as electronic warfare, signal spoofing, and decoys to create false targets and obscure true movements. Additionally, enhancing the stealth capabilities of vessels through innovative designs and operational practices will further ensure that naval forces can evade detection and operate effectively in a sensor-saturated environment. By embracing these realities, the Navy can sustain its operational effectiveness and strategic advantage across the competition continuum.

Conclusion

In an era of distributed sensing networks and Blue OSINT, adaptation is not just about leveraging technology but also about evolving operational doctrines to meet the challenges of contemporary maritime conflicts. By integrating Blue OSINT capabilities, deploying distributed sensors, and countering (and employing) deception, naval forces can maintain an asymmetric advantage in the increasingly visible and contested maritime domain.

The success of modern naval operations hinges on the ability to swiftly adapt to technological advancements and evolving threats. Navies must transcend beyond traditional methods and embrace innovative strategies to remain agile and effective. This demands a concerted effort from all levels of naval leadership, from policymakers to forward operators, to implement these changes.

On the unforgiving sea, only those who rapidly transform to the era of Blue OSINT will avoid the abyss, with the rest risk sinking into obsolescence as adversaries gain decisional advantage. Navies that fail to adjust to the realities of Blue OSINT and sensor networks risk ending up like the Russian Black Sea Fleet: at the bottom of the ocean.

Authors

Guido L. Torres is a nonresident senior fellow with the Atlantic Council’s Forward Defense Program and the executive director of the Irregular Warfare Initiative.

Austin Gray is co-founder and chief strategy officer of Blue Water Autonomy. He previously worked in a Ukrainian drone factory and served in US naval intelligence.

Related content

Forward Defense leads the Atlantic Council’s US and global defense programming, developing actionable recommendations for the United States and its allies and partners to compete, innovate, and navigate the rapidly evolving character of warfare. Through its work on US defense policy and force design, the military applications of advanced technology, space security, strategic deterrence, and defense industrial revitalization, it informs the strategies, policies, and capabilities that the United States will need to deter, and, if necessary, prevail in major-power conflict.

The post Sailing through the spyglass: The strategic advantages of blue OSINT, ubiquitous sensor networks, and deception appeared first on Atlantic Council.

]]>