
Legal personhood, liability and future of artificial intelligence: Thinking it through

When we look at artificially intelligent devices, or independently functioning machines with deep learning capabilities, can they be classified as persons? Can it be argued that, now that these devices can replicate human emotions and gestures and appear to possess an independent intellect, they are human-like?


The Background

The understanding of ‘personhood’ is fundamental to any understanding of law. So, who is a person? If one wants to enter into a contract or hold property, one needs to be a ‘legal person’. Some High Courts in India have held, in notable judgments, that animals too have a personality and hence enjoy certain rights. In New Zealand, the Whanganui River, sacred to the Māori, was declared a legal person. Black’s Law Dictionary defines a legal person as an entity “given certain legal rights and duties of a human being; a being, real or imaginary, who for the purpose of legal reasoning is treated more or less as a human being.”

This definition reflects a modern trend to associate legal personhood with humanity; however, any entity capable of bearing rights and duties should, in principle, be capable of being a legal person. To understand the importance of legal personhood, we must divorce it from humanity. Jurists have often observed that ‘legal person’ lacks a statutory definition and rests instead on popular usage and on historical, political, philosophical and theological considerations.

I was first introduced to the concept of ‘corporate personality’, the principle of the independent corporate existence of a company, while reading the famous UK company law case of Salomon v. Salomon & Co. This case laid down the bedrock principle of ‘corporate personality’ across different jurisdictions, including India. While India grants personhood to a body corporate, it is equally settled that a company cannot be a citizen under the constitutional law of India or the Citizenship Act, 1955: citizenship is available only to individuals or natural persons, not to juristic persons. While most countries continue to debate granting personhood to artificially intelligent machines, Saudi Arabia in 2017 granted citizenship to a humanoid robot, Sophia. Soon after, the concept of an ‘Electronic Person’ was floated in the European Union, but the proposal was quickly shelved by the European Commission.

It can thus reasonably be concluded that the status of personhood need not rest on the presence or absence of a human genome. So, when we look at artificially intelligent devices, or independently functioning machines with deep learning capabilities, can they be classified as persons? Can it be argued that, now that these devices can replicate human emotions and gestures and appear to possess an independent intellect, they are human-like?

Consider a computer system that runs on algorithms and is controlled by Artificial Intelligence, whose activities cannot be predicted by humans. In such a situation, with whom should liability rest if the Artificial Intelligence causes harm to a human or damage to property? With the Artificial Intelligence that directed the actions of the machine but lacks personhood, or with the human being who neither knows how the machine performed nor whether the machine was actually trying to solve a particular problem?

Another argument relates to ‘consciousness’, which is as fascinating as it is important. Some even define consciousness as a form of intelligence, and hence as something belonging only to humans. On that view, a robot may be called intelligent if it can perform a task without supervision and can constantly improve and learn new things. After all, having consciousness means that one is aware of one’s own existence, can think, and can adapt to one’s surroundings. But can we then infer that such a robot has become conscious? A more reasonable conclusion is that intelligence cannot be equated with consciousness: animals and plants may not be intelligent as we define it, yet we can agree that they are indeed conscious.

Ethics and Trust

The developers of an artificially intelligent machine must be mindful of the ethical issues inherent in AI. These break down into two parts: first, developers must know their systems and their capabilities; second, they must be aware of the different types of bias potentially present in their systems, as well as those that may creep in later. Machine learning depends on built-in assumptions (its inductive bias) and can also absorb biases present in its training data. With both points given due consideration, developers can avoid creating systems whose impact is negative rather than positive. Some even argue that these machines will become sentient in the near future.
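As a concrete illustration of the kind of bias check the paragraph above describes, a developer might measure whether outcomes in the training data skew across a sensitive attribute before the model ever learns from it. This is a minimal, hypothetical sketch (the data, field names and threshold are invented for illustration), not a method drawn from this article:

```python
from collections import defaultdict

def positive_rate_by_group(records, group_key, label_key):
    """Return the fraction of positive labels for each group.

    A large gap between groups is one simple signal of dataset
    bias worth investigating before training a model on the data.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for rec in records:
        g = rec[group_key]
        counts[g][0] += 1 if rec[label_key] else 0
        counts[g][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Toy, invented loan-approval records: approvals are far more
# frequent for group "A" than for group "B".
data = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

rates = positive_rate_by_group(data, "group", "approved")
# rates["A"] is about 0.67 and rates["B"] about 0.33 -- a gap
# this size would flag the dataset for closer review.
```

A check like this addresses only one narrow kind of bias (label skew in the data); biases introduced by the model itself, or creeping in after deployment, need separate monitoring.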

With each wave of innovation we have witnessed a series of technological advancements, and however good the intent behind using them, they can be misused or misconstrued for harm. This is true of most things around us.

Imposing Liability and Granting Intellectual Property Rights

If we say that artificially intelligent machines can be held liable for their actions, we also implicitly acknowledge that these machines have their own intellect. Quite recently, the United States Patent and Trademark Office (USPTO) refused to recognise an AI known as DABUS as an inventor, reasoning that no law allows a machine to own property and that only natural persons can be named as inventors.

A similar contention arose from events of July 2011, when British photographer David Slater travelled to a national park in North Sulawesi, Indonesia, to photograph the local wildlife. After Slater left his camera equipment out for a group of wild macaques to explore, the monkeys took a series of photos, including selfies. Once the photos were posted publicly, legal disputes arose over who should own the copyright: the human photographer who engineered the situation, or the macaques who snapped the photos. The court settled the dispute on the rationale that, since animals are not natural persons, they lack statutory standing under the Copyright Act.

In most countries, intellectual property law focuses on ‘creators’ and ‘inventors’, meaning ‘people’ who create and invent. The notion of the human as inventor is thus embedded within the intellectual property application process, which makes clear that the laws are framed in terms of human creations.

The need to amend our current laws will become pressing as artificial intelligence grows more creative and autonomous from humans. Typically, the employer of a creator or an inventor becomes the owner of the intellectual property by virtue of the work-for-hire doctrine or an employment agreement. Perhaps artificial intelligence could be regulated under the work-for-hire doctrine, with the ‘employer’ owning the rights in the work.

Similarly, who would own the intellectual property in a work created by the use of artificial intelligence? The question may seem very difficult, but on reflection the answer may be fairly simple. Most AI systems rest on machine learning, which is, at bottom, just another algorithm. So, when such a system generates a work, the rights should vest with the person who owns the algorithm and/or whoever owns the rights to use it.

Another very interesting scenario is the advent of self-driving cars. While most of us are familiar with the concept of ‘no-fault liability’ under the Motor Vehicles (Amendment) Act, 2019, could we really punish the AI if it knocks someone down on the street? Should the liability rest with the person who put the car in automation mode? Or with the company that manufactured the vehicle?

Suppose for a moment that a self-driving car has two options: injure a person on the road, or injure the person inside the car. What should it choose? This is a very difficult decision to make, and an even more difficult one on which to base blame and liability. It would not be wrong to say that we have faced such issues throughout history. There was a time when humans drove horse carriages. Imagine that someone on the road made a strange noise, or did something the horse didn’t like, and the horse injured a pedestrian. Whom would you blame? The carriage driver? He had no control over the horse. Would you punish the horse? There is no point in that. These issues are genuinely difficult to work around.

Hence, liability may be imposed on an artificially intelligent machine using the work-for-hire doctrine or the concept of the next friend. Further, the capabilities and ‘power’ of an AI could help us impose liability on the owner under the doctrine of strict liability propounded in the famous English case of Rylands v. Fletcher in 1868. In the case of machines whose algorithms are capable of ‘self-improvement’, imposing liability in this way could be easier, and may help in seeking compensation from the owners of such machines for the damage caused.

Using Artificial Intelligence in Law

‘Virtual Courts’ are now a reality, but they have their own limitations. Using technology and using technology effectively are two very different things. Online courts are being set up manually, which restricts the nature and number of cases taken up for online hearing. Kevin Kelly, a famous author, has given a very subtle formula for AI that applies to every work setting: take any x and add AI to it. It portrays the notion that AI, in some shape or form, is going to permeate every aspect of human endeavour.

China recently welcomed the ‘brave new world of justice’ with courtrooms presided over by an artificially intelligent judge and built on blockchain and cloud computing technologies. While blockchains can help create and store a clearer record of legal processes, the artificially intelligent judge eases the burden on human justices, who monitor the proceedings and make the major rulings in each case.

The use of AI has drastically changed the way legal research was done only a few years ago, making it more effective and time-saving. Artificial intelligence algorithms that learn by example, coupled with natural language processing and natural language generation capabilities, have enabled machines and humans to understand and interact with each other.

AI-powered advances in speech-to-text technology have also made real-time transcription a reality. I won’t be surprised if we see chatbots powered by natural language processing questioning clients and offering solutions like lawyers; such chatbots are already used in the healthcare sector to run basic diagnoses like real doctors. But I feel that, for now, beyond AI assistance in legal research, it is hard to completely replace lawyers or judges with a machine, because law works not only on precedents but also on interpretation. As we say, it ‘depends on the facts and circumstances’ of each case.

The Future

We seldom regret the inventions of the air conditioner, the refrigerator, the wheel or the car. These inventions have transformed our lives in ways we could never have imagined. Similarly, AI has begun to transform lives in meaningful ways. Jobs involving ‘repeatable’ tasks may get automated. The World Economic Forum projects that 75 million jobs will be displaced by AI, but the same report also indicates that AI will create around 133 million new jobs: 58 million more than will be lost.

Data, in every form, is food for machine learning algorithms and needs to be regulated. Every sector is being swamped by a tsunami of unstructured data. In the Indian context, the current scope of the Information Technology Act, 2000 needs to be broadened, or new legislation put in place, before AI is deployed across the majority of sectors. A rock-solid framework such as the Personal Data Protection Bill, 2019, which has long been pending, needs to be enacted. The recent committee report on Non-Personal Data by the Ministry of Electronics and Information Technology (MeitY) is a welcome move and pays attention to managing the data landscape of the future.

Needless to say, it has become essential for all of us to keep up-skilling and re-skilling ourselves to embrace the presence of AI in the future.

Adv. Nikhil Naren practices law focusing on areas of Intellectual Property, Technology and Artificial Intelligence.
