
ARTIFICIAL INTELLIGENCE, ITS SECURITY AND REGULATION


23rd October, 2019 was a red letter day in human history when American Special Operations Forces carried out a daring raid codenamed “Operation Kayla Mueller” that killed the “crying, whimpering, screaming” self-proclaimed Caliph of ISIS, Abu Bakr al-Baghdadi, in his secret hideaway on the outskirts of Barisha in Northwest Syria. Startlingly, an explosive ordnance disposal military robot had participated in the mission! On 31st July, 2022, the Americans eliminated the dreaded Al-Qaeda Chief Ayman al-Zawahiri deep in the heart of Kabul with the aid of an MQ-9B drone that launched two Hellfire R9X missiles with pinpoint Artificial Intelligence (AI) precision. And this very year, around 110 incidents of AI-guided drones from Pakistan, the epicentre of international terrorism, clandestinely violating Indian airspace to para-drop arms, explosives and drugs to terrorists and separatists firmly embedded on Indian soil have alarmingly come to light.
AI has come to stay and is predicted to contribute a staggering 15.7 trillion US Dollars to the global economy by the year 2030! It has wormed its way into every conceivable sphere of human activity, and the law is no exception! The prodigious 17th century German polymath Gottfried Leibniz, widely recognised as the grandfather of AI and himself a distinguished lawyer, aptly remarked, “It is unworthy of excellent men to lose hours like slaves in the labour of calculation which could safely be relegated to anyone else if machines were used.” The legal profession, historically tradition-bound and labour-intensive, is on the cusp of an unimaginable transformation in which AI has the potential to alter the manner and mode in which the legal world functions. Much as e-mail drastically changed the way we do business, AI will become omnipresent – an indispensable tool for lawyers! The legal sector was one of the first to adopt AI, with some leading law firms using AI platforms in one form or another since 2005. A cover story published in the ABA Journal Magazine, the flagship publication of the American Bar Association, elucidated, “Artificial intelligence is changing the way lawyers think, the way they do business and the way they interact with clients. Artificial intelligence is more than legal technology. It is the next great hope that will revolutionize the legal profession.” Instead of wading through piles of paper, lawyers can now deal with terabytes of data and hundreds of thousands of documents. The eminent American law professor Daniel Martin Katz has effectively utilized legal analytics and machine learning to create a highly accurate predictive model for the outcome of American Supreme Court decisions, along the lines sketched below. Sometimes billed as the first robot lawyer, ROSS is an advanced online research tool using natural language processing powered by IBM Watson that provides legal research and analysis and can reportedly read and process a phenomenal million legal pages per minute.
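For readers curious how such outcome-prediction models are generally built, the following is a minimal sketch in Python: a classifier is trained on numerically encoded features of past cases and evaluated on cases it has never seen. The file name case_features.csv, the column names and the choice of a random forest are assumptions made purely for illustration; this is not Professor Katz's actual model or pipeline.

```python
# Minimal illustrative sketch of predicting case outcomes from historical data.
# Assumes case_features.csv holds one row per case with numeric feature columns
# and an "affirmed" column (0/1) recording the outcome to be predicted.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

cases = pd.read_csv("case_features.csv")
X = cases.drop(columns=["affirmed"])   # case features
y = cases["affirmed"]                  # outcome label

# Hold out a test set so the reported accuracy reflects unseen cases.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=500, random_state=42)
model.fit(X_train, y_train)

print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The essential point is that the model learns patterns from how past cases were decided and is judged only on cases withheld from training, which is what allows claims of predictive accuracy to be tested rather than merely asserted.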
In February, 2018, a group of leading academics and researchers published a report, raising alarm bells about the increasing possibility that rogue states, criminals, terrorists and other malefactors could exploit AI capabilities to cause widespread, irreparable damage. Back in 2017, the legendary physicist Stephen William Hawking cautioned that the emergence of AI could be the “worst event in the history of our civilization”. To date, no industry standards exist to guide the secure development and maintenance of AI systems. On 3rd February, 2022, U.S. Senator Ron Wyden, along with Senator Cory Booker and Representative Yvette Clarke, introduced the Algorithmic Accountability Act of 2022, a landmark bill (H.R. 6580 in the U.S. House of Representatives) to bring new transparency and oversight to software, algorithms and other automated systems. Wyden explained, “Our bill will pull back the curtain on the secret algorithms that can decide whether Americans get to see a doctor, rent a house or get into a school. Transparency and accountability are essential to give consumers choice and provide policymakers with the information needed to set the rules of the road for critical decision systems.” Sen. Booker further explained, “As algorithms and other automated decision systems take on increasingly prominent roles in our lives, we have a responsibility to ensure that they are adequately assessed for biases that may disadvantage minority or marginalized communities.” And Rep. Clarke struck an optimistic note, “With our renewed Algorithmic Accountability Act, large companies will no longer be able to turn a blind eye towards the deleterious impact of their automated systems, intended or not. We must ensure that our 21st Century technologies become tools of empowerment, rather than marginalization and seclusion.”
India currently has no laws or government-issued guidelines regulating AI. Instead, the government developed a number of national strategies and road maps related to AI in 2018. On 1st February, 2018, the Union Finance Minister and my dear friend and classmate from my Law Faculty days Arun Jaitley stated that the apex public policy think tank NITI Aayog “would lead the national programme on AI”. Thereafter, the Committee of Secretaries held a meeting on 8th February, 2018, and tasked NITI Aayog with formulating a National Strategy Plan for AI “in consultation with Ministries and Departments concerned, academia and private sector.” On 4th June, 2018, NITI Aayog published a discussion paper on a National Strategy on Artificial Intelligence. On 27th July, 2018, the Government of India’s Committee of Experts released a Draft Protection of Personal Data Bill along with an accompanying report entitled “A Free and Fair Digital Economy: Protecting Privacy, Empowering Indians”. The Bill was first introduced in the Lok Sabha on 11th December, 2019. It was then referred to a Joint Parliamentary Committee, which tabled its report in the Lok Sabha on 6th December, 2021. On 3rd August, 2022, the Government unilaterally withdrew the Bill. In a note circulated to MPs, the Union IT Minister Ashwini Vaishnaw explained the raison d’être for the withdrawal: “The Personal Data Protection Bill, 2019 was deliberated in great detail by the Joint Committee of Parliament…on considering the report of the JCP, a comprehensive legal framework is being worked upon.” Thereafter, the Minister of State for IT Rajeev Chandrashekhar tweeted, “This will soon be replaced by a comprehensive framework of global standard laws, including digital privacy laws, for contemporary and future challenges and catalyse PM Narendra Modi’s vision of India Techade”.
Cyber-threat actors are becoming increasingly agile and inventive, spurred by a burgeoning base of financial resources and the absence of viable regulation, and unencumbered by the constraints that often stifle innovation for legitimate enterprises. This threat transcends the boundaries of any single enterprise or nation in what Pandit Jawaharlal Nehru described as “this one world that can no longer be split into isolated fragments.” There is an imperative need for transparent, incisive and thoughtful collaboration between academics, professional associations, the private sector, regulators and world governing bodies. Strategic collaboration will be far more impactful than unilateral responses in addressing the ethics and regulation of AI. Finally, I am emboldened to sound a note of caution by turning to the foreboding words of the renowned American AI researcher, blogger and exponent of human rationality Eliezer Shlomo Yudkowsky, “By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”
