INTRODUCTION
Artificial Intelligence (hereinafter referred to as “AI”) and related technologies are crucial to the business ecosystem and are permeating every sector. As AI gains more control over everyday products and services, its behaviour is bound to become unpredictable at times and to cause harm. While research on AI is being conducted all over the world, its use raises several potential legal questions, ranging from criminal liability to data privacy concerns. This article addresses some of the key legal issues that arise with respect to AI across different sectors: with an exciting new generation of AI solutions being developed, it is essential that they be regulated by a legal framework which allows AI to make the best possible impact on the economy.
As there is no legal definition of Artificial Intelligence, for the purposes of this article Artificial Intelligence is defined as “a constellation of technologies that enable machines to act with higher levels of intelligence and emulate the human capabilities of sense, comprehend and act.”
Before one attempts to pinpoint the issues that arise with respect to an AI technology and attribute any liability to it, it is essential to determine the nature of the AI’s existence. This is important because the attribution of liability depends on the status granted to AI in the country concerned. To ensure their accountability under the law, AI entities could be treated as legal persons, like corporations. The liability of individuals behind a corporation was limited precisely to motivate people to engage in commercial activities through corporations, and the same principle could be extended to AI entities. This approach offers the existing legal system a way to tackle the challenges posed by AI without drastic changes, and to resolve AI-related problems effectively; this matters because AI developers are largely concerned about the liability arising from their creations’ actions.
KEY LEGAL CHALLENGES
As mentioned above, assuming that an AI technology is given the status of a legal person, principles of tort law may be applied in India in case of a default by the AI technology. Where AI software is defective, or its use injures the party using it, legal proceedings may be brought under the tort principle of negligence. In the case of AI, the software developer or programmer owes a duty of care to the customer or user. It is, of course, difficult to settle the standard of care owed to the customer or user; the kind of software being implemented might assist in deciding the standard attributable to the developer or programmer. For instance, if the system involved amounts to an “expert system”, the befitting standard of care would be that of an expert or a professional.

Similarly, one could reason that if a person can be held liable for the wrongdoing of a human helper, the recipient of such support could be equally liable for outsourcing their duties to a non-human helper instead, given that such delegation is equally advantageous to them. The policy contention is quite compelling: using the assistance of a self-learning, autonomous machine should not be treated any differently from employing a human auxiliary if such assistance leads to the harm of a third party. However, to hold the principal liable for the wrongdoing of another, it may be challenging to determine the standard against which the operations of non-human helpers are to be assessed in order to gauge the degree of misconduct, as is done for human auxiliaries. Any such standard should take into consideration that, in many areas of application, non-human auxiliaries are safer and less likely to cause damage to others than human beings, and the law should at least not discourage their use.
Again, assuming that an AI technology is granted the status of a legal person, it can be held liable under the criminal law system. For criminal liability to be established, both mens rea (the mental element) and actus reus (the physical act) must be present. The pertinent questions that arise are: how does an AI technology fulfil these two essential elements of criminal liability, and how can an AI technology be directly liable for the commission of an offence?
Assuming the AI is an innocent agent, the obvious question that arises is: who shall be held liable for the crime committed? Here there are two candidates at play, i.e. the programmer of the AI software and the user of the AI software. A programmer of AI software might design a program in order to commit offences through the AI entity. Neither the programmer nor the user performs any physical act in the commission of the crime, so neither personally satisfies the actus reus requirement of the offence; however, where an offence is committed through an innocent agent, the agent’s act is attributed to the person who instrumentalized it. The legal result is that the programmer or the user is criminally liable, as the mens rea, or mala fide intention, for the specific offence committed is attributed to them, while the AI entity bears no criminal liability whatsoever.
In another scenario, where the programmers or users are extensively involved in the day-to-day activities of the AI entity but have no intention of committing an offence through it, negligence or recklessness should be considered the standard of mens rea.
Yet another viewpoint suggests that an AI algorithm may have many characteristics and capabilities that exceed those of an average human being, but such qualities are not essential to the imposition of criminal liability. A human or a corporation attracts criminal liability only if both the mental and physical elements are fulfilled. Similarly, if an AI technology is capable of fulfilling both mens rea and actus reus, criminal liability can be imposed on the AI as well. So long as an AI technology controls a mechanical or other mechanism that moves its parts, any act of that mechanism may be considered as performed by the AI technology itself, thereby fulfilling the requirement of the physical component, actus reus. As far as the mental element, mens rea, is concerned, the essential requirements under the general ambit of criminal law are knowledge, intent, negligence, and the like. Knowledge is defined as the sensory reception of factual data and the understanding of that data, and most AI technologies are well equipped for such reception. The process of analysis in AI systems parallels that of human understanding: the cognitive ability of the human brain understands the data received through senses such as the eyes, ears and hands by analyzing that data, and advanced AI algorithms attempt to emulate these human cognitive patterns. Therefore, if a human being can be held criminally liable for an offence upon fulfilling the two criteria of intention and physical act, why should an AI be exempt from the same?
Another potential legal issue is that of the AI itself being defective, which attracts product liability. Under the concept of product liability, the manufacturer or the seller of a product is held liable for any defect in it. However, in equating an AI technology with a product, the question that arises is whether it is fair to hold the creator liable for any injury or harm caused by the AI, as this inevitably draws an analogy with the principle of strict liability. It is essential that limits be placed on the ability of all AI technologies to cause harm, and it could be argued that no one is better placed than the creator to prevent such harm and to compensate for any financial losses resulting from it.
With business increasingly shifting towards digital setups, and demand for software products rising, another area of concern is Intellectual Property Rights, particularly patent law. As far as the protection of AI innovation is concerned, the Patents Act, 1970 currently provides protection only to the true and first inventor, which implies a legal person, i.e. either a natural person or an artificial person such as a corporation. Section 3(k) of the Patents Act, 1970 clearly states that a “mathematical or business method or computer programme per se or algorithms” are excluded from patentability. However, a recent order by a quasi-judicial body in Ferid Allani v. Union of India has held that computer-related inventions that meet the criterion of a “technical effect” are patentable under the law. This order opens the doors for an enormous corpus of innovation to become protectable and more valuable, and such patent protection is essential to foster innovation in India.
Any discussion on AI is incomplete without addressing the issue of data protection. The functioning of AI is based on the datasets used to train its actions; it is therefore essential that such data be utilized safely. Since a wide range of data is collected from individuals for such use, the problem lies in ensuring its safe usage. While the Personal Data Protection Bill, 2019 (hereinafter referred to as the “PDP Bill”) is pending before Parliament, the Information Technology Act, 2000, alongside the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, provides a framework for the protection of sensitive personal information held by body corporates. Apart from this, the Ministry of Electronics and Information Technology (hereinafter referred to as “MeitY”) has acknowledged the imbalance created by a few companies dominating the market and has recommended mandatory data sharing to open up competition in the sectors concerned, to enable startups, or for other community or public interest purposes. This is intended to ensure that startups and small and medium enterprises are given opportunities equal to those of big corporate players and that no corporate giant holds a monopoly.
CONCLUSION
AI is a growing industry, and India has a tremendous corpus of AI innovators. With the development of an imaginative legal framework to govern it, AI innovation can be unlocked and fostered in a fashion that is both safe and dynamic.
DISCLAIMER
The views expressed in this article are those of the author alone and do not reflect the views of any organization.