
Artificial Intelligence and its criminal liability in the Indian context


Agents are artificially intelligent entities that can act autonomously, with little to no human intervention. Information technology has reached a point where we must accept the existence of such agents. With the advancement of artificial intelligence, modern technologies are beginning to supplement or replace human occupations: self-driving cars, machine translation software, robots, and medical diagnosis software are among the examples, and healthcare and transportation are only two of the many industries affected by this trend.

These advances typically involve capacities such as interpretation, judgment, and decision-making that had never before been exercised by anything other than a human mind. At the same time, artificial intelligence-based entities have the capacity to violate individual or collective legal rights. The question of who is responsible for crimes committed by artificial intelligence has grown more complicated since the technology’s inception, primarily because AI operates autonomously and is only partially under human control. It is time to assess the legal liability of businesses that use artificial intelligence.

RESEARCHERS’ VIEWPOINT
Some AI researchers argue that only systems that replicate human cognition qualify as “artificially intelligent,” while others apply the label to any system that performs tasks ordinarily requiring human intelligence. Information network researchers classify many “artificially intelligent” algorithms as merely complex informational networks; true artificial intelligence, by contrast, is designed for “wisdom,” or high-level strategic reasoning. In recent years, artificial intelligence (AI) has become more ubiquitous and has achieved significant advances. Almost every industry is investing heavily to capitalize on the opportunities AI presents, and the technology can significantly improve an organization’s overall creativity and productivity. However, the more people use the technology, the more problems it causes. A typical knowledge gap among programmers is understanding how artificial intelligences learn, adapt to unexpected conditions, and make decisions.

AI AND LEGAL LIABILITY
The law outlines our legal rights and responsibilities: to fulfil one’s legal responsibilities and be eligible for particular benefits, one must obey the law. Artificial intelligence legal theory accordingly tackles the question of whether AI should have legal rights and obligations. However progressive such a step may appear, a study of artificial intelligence’s legal personality is necessary, because legal personality is what would make AI accountable for its actions.

CRIMINAL LIABILITY
If criminal culpability for AI is established, artificial intelligence will need to gain legal personhood, because its liability would be akin to the corporate criminal liability recognized by some legal systems. Corporate criminal liability is widely regarded as a legal fiction: it holds a company liable for the actions of its employees and other agents. Unlike a corporation, however, an AI would be held accountable only for its own acts, not those of others.

WHY DO CRIMINAL LAWS TAKE A DIFFERENT APPROACH TO AI?
A person or group of persons who commit a crime against another person will face prosecution under the criminal laws of the country where the crime was committed. Unlawful behaviour committed by AI against a human, however, cannot be treated as a traditional crime, even where it was carried out through software or a robot, because machines with AI are not regarded as persons. Before we can decide whether crimes committed by artificial intelligence are punishable, we must first determine whether AI is a legal entity in its own right. It is also vital to define and recognize the phrases “actus reus” and “mens rea,” which refer respectively to the guilty act and the mental (intentional) element of a crime.

DEFINING THE ‘BLACK BOX’ PROBLEM
Smartphone and computer users rely on complicated algorithms to solve problems and perform even the most basic actions efficiently. It is critical that these algorithms function correctly and without error, and that we provide the necessary information, as this aids the creation of future algorithms. Yet explaining what is going on inside them is difficult: we continually hit hurdles when trying to comprehend how an AI works. Although this “black box” problem chiefly affects large-scale deep learning systems, whose opacity invites comparison with the human brain, it remains a significant problem. Because artificial intelligence is composed of complex algorithms and data sets generated by software rather than written by humans, these neural systems divide a problem into an enormous number of small computations and methodically evaluate each one to produce the most accurate conclusion possible.
It is critical to remember that the primary question in any investigation into civil or criminal liability is whether the relevant defendant’s actions or omissions were unlawful, given the judgments and recommendations of the relevant AI framework: which of their actions or omissions resulted in the breach of contract, negligence, or criminal charges they face? The defendant, crucially, will be a human, not an AI system. For that reason, the court will not need to know why the relevant AI mechanism made the choice that led to the defendant’s allegedly unlawful act or omission in order to answer these questions.
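To make the “black box” problem concrete, the following is a minimal sketch in Python; the network, its weights, and the decide function are purely hypothetical, invented for illustration and not drawn from any real system. Even in this toy neural network, the decision emerges from a cascade of arithmetic over learned numbers, and no single weight supplies a human-readable reason for the outcome.

    import numpy as np

    # A toy two-layer neural network with arbitrary, made-up weights.
    # A real deep learning system contains millions of such numbers,
    # learned from data rather than written down by a programmer.
    W1 = np.array([[0.8, -1.2, 0.3],
                   [0.5,  0.9, -0.7]])  # input-to-hidden weights (hypothetical)
    W2 = np.array([1.1, -0.4, 0.6])     # hidden-to-output weights (hypothetical)

    def decide(x):
        hidden = np.maximum(0.0, x @ W1)  # ReLU activation over the hidden layer
        score = hidden @ W2               # single score that drives the "decision"
        return "act" if score > 0 else "refrain"

    # The output is the product of the whole cascade of computations;
    # inspecting any one weight explains nothing about *why* it chose "act".
    print(decide(np.array([1.0, 2.0])))   # -> act

Scaled up to millions of weights, this is why neither a court nor the system’s own developer can point to a single internal step as the “reason” for an AI’s recommendation.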

AI’S LEGAL STATUS
Some laws, unlike others, recognize and refer to an unborn child. Even that recognition is an inherently problematic legal gray area, because it is ambiguous about the safeguards provided to the unborn and the responsibilities owed to specific fetuses. AI technologies, still in the early stages of development, are not yet recognized by Indian law at all, and the prospect of AI systems having no rights, responsibilities, or liabilities is a concerning one. Because legal status is inextricably linked to a person’s or organization’s independence, that status can also be granted to cooperatives, enterprises, and other groups. No legal system has yet recognized AI as a legal entity, with the sole exception of Saudi Arabia, which has recognized the humanoid robot Sophia as a citizen with rights comparable, within limits, to those of its human residents. The complexity of developing legal frameworks for AI turns on whether it is possible to grant it the same rights and duties as a human being.

RELEVANCE OF SECTION 111 OF INDIAN PENAL CODE
Section 111 in Chapter V of the Indian Penal Code (IPC) establishes the doctrine of probable consequence. Under this doctrine, doing an act oneself and abetting another’s act have quite different consequences, and this fundamental premise underlies Indian criminal law on abetment. Where one act is abetted and a different act is done, the abettor is held accountable for the act actually done, in the same way as if he had directly abetted it, provided that act was a probable consequence of the abetment. It is widely understood that, to be punished for abetment, an offence must actually have been committed. In some scenarios there is insufficient evidence to convict the principal offender but sufficient evidence against the abettor; the principal may then be acquitted while the abettor is convicted on the facts and circumstances. By analogy, if the creators and operators of AI platforms understood that the offending behaviour was a probable or natural result of using their AI system, they might be held accountable for it. Putting this theory into practice requires distinguishing between AI systems designed with the intent that they be used for illegal operations and those designed for legitimate purposes other than criminal behaviour, and determining whether those behind the AI system were aware of any improper intent. Abetment liability applies to the first group; for the second, strict liability could still attach even though, for want of knowledge, those responsible could not be punished for abetting the crime.

CONCLUSION
Although a future with powerful AI-driven robots and technology may initially appear frightening, some believe that our future is bright and that the possibilities are far greater than we can now imagine. Experts have recently been discussing the dangers of artificial intelligence and have envisioned a grim future reminiscent of the film Terminator, concentrating on the potential drawbacks of AI rather than investigating its potential benefits and the ways we might use it to improve ourselves, build an ideal society, or even explore other worlds. This gloomy view must not be allowed to impede progress in AI; it is unproductive and should be resisted. At present, no national or international law recognizes artificial intelligence as a separate legal entity subject to its own rules, which suggests that AI is not accountable for any harm it may cause. Article 12 of the United Nations Convention on the Use of Electronic Communications in International Contracts sets out the principle that the person on whose behalf an automated system was programmed or instructed is ultimately responsible for any action taken or message generated by that system; responsibility for AI’s acts may therefore ultimately fall on that person.
Adv. Prachi Gupta, LL.M., is a Research Scholar.
