Google Changes AI Rules, Allows National Security Applications

Google has revised its AI principles, removing a previous commitment to avoid developing AI for weapons and surveillance, and emphasizing collaboration with democratic governments on AI that supports national security.

Alphabet, the parent company of Google, has updated its artificial intelligence (AI) policy. It no longer promises to avoid AI applications in military areas like weapons development and surveillance.

Company Removes AI Harm Restrictions

Previously, Alphabet’s AI principles ruled out uses “likely to cause harm.” However, the company has now removed this restriction.

In a blog post, Google’s Senior Vice President James Manyika and Google DeepMind’s head Demis Hassabis defended the change. They stated that AI should be developed in partnership with businesses and democratic governments to “support national security.”

Debate Over AI’s Role in Defense and Surveillance

AI experts continue to debate its ethical use. Some argue that commercial interests should not drive AI’s development. Others worry about the risks AI poses to humanity. Meanwhile, AI’s role in military operations and surveillance remains a major controversy.

Google explained in the blog post that its AI principles, first published in 2018, needed an update.

“Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications,” the blog stated.

According to Google, AI is no longer a niche research topic. Instead, it has become as common as mobile phones and the internet. Therefore, the company is now creating new AI guidelines to guide its future direction.

Geopolitical Challenges and AI Leadership

Hassabis and Manyika highlighted increasing global tensions. They argued that democratic nations should lead AI development while maintaining core values like freedom, equality, and human rights.

“We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights,” the blog post said.

They also urged governments, businesses, and organizations with similar values to work together. Their goal is to develop AI that protects people, supports economic growth, and strengthens national security.

Financial Report and AI Investment

Alphabet published its blog post just before its year-end financial report. The report showed weaker-than-expected results, causing a drop in its stock price.

Despite this, digital advertising revenue increased by 10%, partly due to U.S. election-related spending.

Alphabet also announced a massive AI investment. The company plans to spend $75 billion on AI projects in the coming year—29% more than Wall Street analysts had expected. This money will go toward AI infrastructure, research, and applications, such as AI-powered search.

AI Expands Across Google Products

Google is integrating AI into its products at a rapid pace. Its AI platform, Gemini, now appears at the top of Google search results. It provides AI-generated summaries and enhances search experiences. Gemini is also available on Google Pixel phones.

Google’s Ethical Shift Over the Years

Google’s approach to AI ethics has changed over time. Initially, company founders Sergey Brin and Larry Page introduced the motto “Don’t be evil.” However, after Alphabet was formed in 2015, the slogan changed to “Do the right thing.”

Despite this shift, some Google employees have pushed back against certain AI projects. In 2018, thousands of workers signed a petition against “Project Maven,” a Pentagon AI contract. They feared it was a step toward using AI for lethal military purposes. Due to this backlash, Google chose not to renew the contract.

Now, with its updated AI policy, Alphabet is signaling a major shift in how it approaches AI ethics and national security.