Global Technology Leaders Back Regulation Of Artificial Intelligence

by Shireen Moti - February 13, 2022, 3:48 pm

Technology giants Sundar Pichai, Elon Musk, and Brad Smith have backed the regulation of artificial intelligence. Sundar Pichai, the CEO of Google, the largest artificial intelligence company in the world, warned in the Financial Times against the dangers of leaving artificial intelligence unregulated. Pichai said issues such as “deep fakes” and the “nefarious use of facial recognition technology” show the possible negative impact of artificial intelligence on public safety. He argued that artificial intelligence needs to be regulated to protect privacy, to ensure public safety, and to prevent bias from influencing the technology.

At the U.S. National Governors Association summer meeting in Providence, R.I., Tesla CEO Elon Musk said that “artificial intelligence is the biggest risk to human civilization.” Musk made his stance clear by urging governors in the United States to proactively regulate artificial intelligence, to avoid the danger of industries becoming completely autonomous in the future and posing a serious threat to national security. Brad Smith, the President of Microsoft, speaking at the World Economic Forum in Davos, Switzerland, also stressed the importance of being proactive in regulating artificial intelligence. Smith said that now is the right time to regulate artificial intelligence: the world should start putting in place the necessary ethics, principles, and even rules to govern the technology, rather than waiting for it to mature.

Artificial intelligence has changed the world and our daily lives, and experts predict meteoric growth in the technology in the coming years. Given the even greater role artificial intelligence is likely to play in our lives in the future, it is important for us to deliberate on some important questions. This is in the best interest of governments, society, and individuals. Several unanswered questions arise when it comes to regulating artificial intelligence. To begin with, should artificial intelligence be regulated at all? If yes, who should regulate it? Should industries using artificial intelligence be allowed to regulate themselves, or should governments devise regulatory frameworks? What should those regulations look like? These are challenging questions, especially for a technology still in its development stages. We have two choices: either wait for the technology to mature further, or act proactively.

SHOULD ARTIFICIAL INTELLIGENCE BE REGULATED?

Some technology experts, such as Azamat Abdoullaev, suggest that artificial intelligence should not be regulated because it is a fundamental technology: regulating such technologies at the initial stages of development might hamper their growth. Furthermore, even if we intended to regulate artificial intelligence, nobody knows how to do it at this point. Even more worrisome is the possibility that we might allow people who lack sufficient insight into the technology to regulate it, which could have disastrous consequences, to say the least. On this view, rather than regulating artificial intelligence itself, it is its applications, such as cybersecurity, autonomous driving, and military use, that need regulation.

Contrary to the above view, some experts, including Stephen Hawking, Bill Gates, and Elon Musk, have taken a more cautious and proactive stance on regulating artificial intelligence. They believe artificial intelligence should be regulated before it is too late, because the unchecked development of artificial intelligence by companies racing to outpace one another could pose an existential threat to mankind. It could destroy humanity if we are unable to avoid the risks of unchecked growth, such as powerful autonomous weapons. These experts believe there is enough cause for concern about the potential harms of artificial intelligence, and that regulatory measures are a must.

HEAVY-HANDED STATE REGULATION VERSUS SELF-REGULATION

We have two choices when it comes to regulation: either we allow governments to regulate artificial intelligence, or we allow market participants to regulate themselves. On the one hand, immediate and heavy-handed state regulation seems like a plausible solution to our problems. However, this route may have the unintended consequence of stifling innovation and hindering the growth of artificial intelligence. Every country in the world has a vested interest in becoming a world leader in artificial intelligence. Without a global consensus on imposing regulations, some countries would be left far behind in the race to be at the forefront of the next revolution and to reap its benefits.

On the other hand, we could take a laissez-faire, hands-off approach and allow market participants to regulate themselves. The problem with self-regulation is that some companies might devise and practice ethical standards and develop “safe and sustainable artificial intelligence”, while others might simply not bother with ethical principles in their desire to be the first to develop cutting-edge artificial intelligence and become a market leader. A complete hands-off approach is therefore undesirable. At the least, we require a common minimum set of ethical standards that every company working with artificial intelligence would be compelled to follow.

WHERE DOES THE INDIAN GOVERNMENT STAND ON THE REGULATION OF ARTIFICIAL INTELLIGENCE?

The central government’s think tank NITI Aayog released a policy paper, ‘National Strategy for Artificial Intelligence’, in June 2018, which, among other things, discussed the benefits of artificial intelligence. The policy paper also examined the weaknesses of self-regulation of the technology. More recently, in its November 2020 draft ‘Working Document: Enforcement Mechanisms for Responsible #AIforAll’, NITI Aayog proposed an oversight body to manage artificial intelligence policy.

The oversight body is expected to be instrumental in devising guidelines for responsible behavior and in framing sectoral regulatory guidelines. It is proposed that the body would include experts from several fields, including law, the humanities, and the social sciences, and that it would adopt a ‘flexible risk-based approach’ to artificial intelligence. Furthermore, the oversight body is expected to play an enabling role in addressing the research, technical, legal, and societal issues emerging from artificial intelligence.

Prof. G.S. Bajpai, a criminal law professor and legal scholar, notes in his June 2019 article on “artificial intelligence, the law and the future” that despite rapid technological advancement, the Indian Parliament has not formulated comprehensive legislation to regulate the growing industry. Tuhin Patra, a Delhi-based TMT lawyer, says in his December 2020 article “India: Self-Regulation in Artificial Intelligence: An Indian Perspective” that there is a lacuna in the legal and regulatory framework governing companies working with artificial intelligence in India. According to Patra, self-audit and record-keeping by companies are a must for the orderly and structured growth of the industry.

CONCLUSION

To sum up, there is a growing consensus that artificial intelligence is growing at an accelerated pace and will have a substantial impact on our everyday lives and the world. Rather than deliberating upon the impact of regulating artificial intelligence, we should take a step back and lay down the foundational principles on which regulations could be built in the future. Moreover, we need to make the work of our policy-makers easier by creating awareness about the fallout of artificial intelligence. World leaders and their governments have to work collectively towards building consensus and developing a comprehensive set of global principles on artificial intelligence. Regulation of artificial intelligence is inevitable; it is just a matter of when artificial intelligence will be regulated, who will regulate it, and what the regulations will look like. For now, however, the time is not yet ripe.