The Download: The startup that says it can stop lightning, and inside OpenAI’s Pentagon deal
Breaking News · Mar 8, 2026 · 3 min read

OpenAI Strikes Deal Allowing Pentagon Use of Its AI in Classified Settings

CAMBRIDGE, Mass. — OpenAI has reached an agreement with the U.S. Department of Defense that will permit the Pentagon to use the company’s technologies in classified environments, marking a significant shift in the AI leader’s previously cautious stance on military applications. The deal was negotiated after the Pentagon publicly reprimanded competitor Anthropic, according to reporting in MIT Technology Review’s daily newsletter “The Download.”

The pact comes amid growing demand from the U.S. military for advanced AI tools as geopolitical tensions rise. OpenAI CEO Sam Altman confirmed that negotiations began only after the Pentagon’s public criticism of Anthropic, a signal of the pragmatic shift in the San Francisco-based company’s approach to national-security partnerships.

Skyward Wildfire Claims Ability to Stop Lightning and Prevent Catastrophic Wildfires

In a separate development, Canadian startup Skyward Wildfire says it has developed a method to prevent the lightning strikes that ignite many of the world’s most destructive wildfires. The company proposes using cloud-seeding techniques involving metallic chaff to suppress lightning activity in vulnerable areas.

While the underlying theory is considered scientifically sound, results from Skyward Wildfire’s efforts have so far been mixed, according to MIT Technology Review. The startup has not yet released comprehensive public data on its technology or field trials, leaving questions about its real-world effectiveness as wildfire seasons grow longer and more intense due to climate change.

OpenAI’s Evolving Military Relationship

The OpenAI-Pentagon arrangement represents a notable departure from the company’s early positioning as an organization focused primarily on safe, civilian-oriented AI development. The agreement allows classified use of OpenAI models, though specific technical details—including which models are involved, any performance benchmarks, or exact contract value—were not disclosed in the announcement.

The timing appears directly linked to the Pentagon’s recent public rebuke of Anthropic, which reportedly created an opening for OpenAI to engage more deeply with defense officials. Altman’s acknowledgment that talks accelerated only after that reprimand suggests competitive dynamics among leading AI labs are influencing national-security strategy.

Implications for AI and Defense Integration

For the defense sector, access to OpenAI’s frontier models in classified settings could accelerate the military’s ability to deploy large language models and multimodal AI for intelligence analysis, logistics, and decision support. However, the move is likely to reignite debates about the ethical boundaries of commercial AI companies partnering with military organizations.

Developers and enterprise users may see indirect benefits as OpenAI gains additional resources and real-world testing opportunities through government contracts. At the same time, some researchers and civil-society groups have expressed concern that deeper military integration could shift the company’s priorities away from its original mission of building artificial general intelligence that benefits all of humanity.

What’s Next

The full scope and timeline of OpenAI’s Pentagon deployment remain unclear, as both organizations have released limited details. Further announcements regarding specific use cases, model versions approved for classified work, or implementation milestones are expected in the coming months.

Skyward Wildfire, meanwhile, will likely face pressure to publish independent verification of its lightning-suppression claims as wildfire-prone regions seek effective technological interventions. Independent testing and peer-reviewed data will be critical for the startup to move beyond conceptual promise toward operational deployment.

The developments underscore the rapid convergence of commercial AI innovation and national-security priorities, as well as growing experimentation with high-risk geoengineering-style approaches to climate-driven disasters. Both stories highlight how the AI industry and its adjacent fields continue to push technical boundaries while navigating complex questions of safety, governance, and public accountability.

This article is based on reporting from MIT Technology Review’s “The Download” newsletter published March 3, 2026.
