Google Gemini becomes the brain of the Pentagon’s new GenAI.mil platform for a “deadlier” military

The era in which tech giants shyly turned down military contracts under pressure from employees is officially over. In early December 2025, the US Department of Defense (DoD) presented a new platform called GenAI.mil, a centralized system that puts the most advanced AI models directly into the hands of military personnel. The first and key partner in this venture is Google, whose Gemini model will serve as the basis for data processing within the Pentagon.

The news, originally reported by TechSpot, marks a dramatic shift in the relationship between Silicon Valley and the military, blurring the lines that once separated commercial software from wartime technology.

What is GenAI.mil?

The new platform is not conceived as an autonomous killer robot from science fiction movies, but as a high-level analytical tool. According to Pentagon officials, GenAI.mil will use Google Cloud infrastructure and Gemini models for tasks that have so far bogged down the military bureaucracy: searching internal databases (“Google-quality enterprise search”), summarizing hundreds of pages of operational manuals, generating security checklists, and assessing risks for mission planning.


Although Google in its announcement insists on “less-lethal” applications focused on logistics and administration, the rhetoric from the Pentagon itself is significantly more aggressive. Defense Secretary Pete Hegseth didn’t mince words, saying the technology aims to make America’s combat forces “more lethal than ever before.”

The end of ethical resistance?

For those who have followed the technology sector, this partnership is an ironic echo of the past. Back in 2018, Google found itself at the center of a scandal over “Project Maven,” an initiative that used artificial intelligence to analyze drone footage. More than 3,000 employees signed a petition against the project, and under that pressure the company decided not to renew the contract. The well-known slogan “Don’t be evil” still carried weight back then.

Today, seven years later, geopolitical realities and the AI arms race have changed the narrative. In February of this year, Google quietly removed key parts of its AI principles that barred work on potentially harmful applications. Faced with competitors such as OpenAI (which now cooperates with the defense firm Anduril) and xAI, as well as the accelerating development of the technology in China, Google can no longer afford the luxury of moral superiority at the cost of losing the most lucrative government contracts.


The implementation of the Gemini model in military structures, even for “unclassified” tasks, fundamentally changes the speed of decision-making. In modern warfare, the ability to process information quickly is often more important than firepower. If AI can digest intelligence in 30 seconds instead of the three hours it takes a human, it creates a huge tactical advantage.

However, this also opens a Pandora’s box. As AI models become more deeply integrated into the military apparatus, the line between “administrative assistance” and the “kill chain” will become increasingly blurred. Palmer Luckey, the founder of Anduril and one of the most vocal proponents of the use of AI in war, recently stated that there is no “moral high ground” in using inferior technology on the battlefield, TechSpot reports.


Google’s return to the Pentagon is not just business news; it’s a signal that Silicon Valley has embraced its role as the new defense industrial base of the 21st century. The question is no longer whether commercial AI will be used for warfare, but how and against whom.
