European Union and regulation of artificial intelligence: A step in the right direction and its impact on India

In today’s world, there is little that technology cannot do, especially in high-impact sectors. However, regulation has not kept pace with technological advancement. Forward-looking legislation is therefore the need of the hour, so that the rapidly evolving domain of Artificial Intelligence (‘AI’) can be harnessed while simultaneously accounting for the risks such advancement poses to humankind. One such step is the recent proposal by the European Union (‘EU’) to formulate a holistic set of regulations for AI (‘the proposed regulation’). This proposal follows a series of earlier instruments, including the EU’s Recommendations to the Commission on Civil Law Rules on Robotics (2017); the Commission Report on the safety and liability implications of AI, the Internet of Things and Robotics (2020); the Resolution on a framework of ethical aspects of artificial intelligence, robotics and related technologies (2020); and the White Paper on Artificial Intelligence: a European approach to excellence and trust (2020).

The White Paper focused on the twin goals of promoting AI and addressing the risks that its use poses. The proposed regulation seeks to address both by taking a balanced approach. Its basic aim is to make the regulation of AI human-centric, so that no right of an individual is compromised owing to the (mis)use of AI. The proposed regulation applies to AI systems placed on the market, put into service, or used in the EU. Perhaps its most distinctive feature is that AI systems are categorised as high-risk or low-risk, with obligations imposed accordingly.

High-risk systems are subjected to multiple checks and obligations: the establishment and implementation of risk management systems, the training of data models on validated and tested data, the drawing up of technical documentation in advance, the continuous maintenance of logs, and transparency coupled with accuracy and robustness of the system. The proposal also imposes obligations on the providers of high-risk systems, such as putting in place a quality management system to ensure compliance with the established standards, and carrying out conformity assessments against those standards. Providers are given the opportunity to take corrective action where the AI does not meet the established standards, and must inform the competent authorities where the risk is imminent. In addition to providers, obligations are imposed on importers, manufacturers, distributors, and users themselves. Member States are also mandated to appoint notifying authorities and bodies to carry out the functions of assessing and checking conformity, free of any conflict of interest. The conformity assessment is based on internal control methods and is subject to revision whenever a high-risk AI system changes. Providers are tasked with drawing up the EU declaration of conformity, which must be preserved for 10 years after the system is placed on the market.

The proposal does not only impose obligations; it also aims to create an innovation-friendly environment before AI is placed on the market, through AI sandboxes. It further provides for the establishment of competent authorities and an advisory body, the European Artificial Intelligence Board. The national supervisory authority is entrusted with conducting market surveillance and reporting to the Commission. All bodies handling such information are required to keep it confidential. Member States are given the right to fix penalties that are proportionate and effective. Barring a few exceptions, administrative fines of up to EUR 20,000,000 or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year, whichever is higher, can be imposed. Separate fines apply for supplying misleading information and for non-compliance with Articles 5 and 10 (which deal, respectively, with AI practices that are completely prohibited and with AI systems that must follow certain requirements). Low-risk AI systems must follow the transparency obligations under the regulation, while AI systems that pose little or no risk are not covered by the proposed regulation at all.

While the proposed regulation is certainly a step in the right direction, it is not without its fair share of drawbacks. The biggest is the blanket exemption granted under Article 2(3) to the use of AI for military purposes. This leaves the entire regulation prone to abuse, as AI’s involvement in the military is increasing dramatically by the day. With wars and espionage now also playing out in the digital domain, the marriage of AI and the military raises numerous security, ethical and legal concerns. One can look at the $10 billion Joint Enterprise Defense Infrastructure (JEDI) contract awarded by the US Department of Defense for the upgrading of military technology to gauge how lucrative, and how risky, AI in the armed forces can be. The contract drew bids from some of the biggest technology companies in the world, such as Amazon, IBM, and Oracle, before being awarded to Microsoft. This leaves civilian data, which these same companies also handle, open to exploitation by the military. Another grim instance is the Pentagon’s engagement of the software company Palantir in Project Maven to build AI for unmanned drones used in bombings and intelligence. Technology has been repeatedly abused by militaries in the past, as the NSA snooping scandal of 2013 best exemplifies. This serves as an unpleasant reminder that limits must be placed on the military’s ability to leverage technology, especially a tool as powerful as AI. The proposed regulation should therefore prescribe the necessary sanctions.

Moreover, Article 5(1)(d) allows data to be collected, stored, and utilised in connection with specific criminal activities. What it does not answer is how long that data may be stored. If a case remains unsolved, or is tied up in courtroom proceedings for years, would the data then be retained indefinitely? Additionally, although Article 9 prescribes a risk management system for AI, it does not clarify whether that system will involve a human oversight mechanism, which, according to the authors, is essential for avoiding errors. Article 14 does prescribe certain oversight mechanisms, but it is unclear whether these extend to the risk management system as well.

Another important issue that the proposed regulation does not touch upon is the sentience of AI. In 2016, Microsoft’s experimental chatbot ‘Tay’ went rogue on Twitter, swearing and making racist and inflammatory remarks. This was never intended, yet the AI went on this tirade, after analysing anonymised data, barely 24 hours after it was made functional. There were also reports of Uber’s self-driving AI running red lights, without authorisation, during demonstration tests, and Facebook had to shut down some experimental bots after they developed their own language. Such scenarios raise a whole plethora of questions regarding liability, containment, and the rectification of AI’s actions. By not dealing with them, the legislation leaves a lot to be desired.

Another issue that hits close to home is the influence the proposed regulation will have on Indian legislation in this domain. Indian policymakers have demonstrated a propensity for adopting foreign technology legislation, especially that of the EU, almost verbatim. Our Personal Data Protection Bill, 2018 borrowed extensively from the EU’s GDPR of 2016. After the EU came out with its recommendations on the flow of non-personal data in May 2019, India followed suit a few months later by appointing an expert committee to provide recommendations. The Non-Personal Data Framework of India, released in 2020, again relies heavily on its EU counterpart. This trend of slightly modifying and appropriating EU regulations without analysing India’s own commercial, social and political considerations is unfortunate. Going by this trend, however, there is a high probability that the proposed regulation will also be copied in India, and so we must keep a close watch on the developments concerning it.

Everything considered, the proposed regulation is revolutionary, as it is the first major attempt to regulate AI.

The EU’s leadership and tenacity in tackling technology issues and passing the relevant legislation well before the rest of the world is commendable. With some modifications, the proposed regulation can serve as an excellent legal blueprint for countries around the world looking to tackle AI-related issues. Its efficacy and utility, however, will only be revealed with the passage of time. And that time is ticking down rapidly.