
Artificial intelligence ecosystem: Policy and legal analysis

Formulating a legal ecosystem in the age of artificial intelligence.

Technology is shaping and strengthening the structure of society for the future. The first industrial revolution transformed manufacturing through water and steam power, the second transfigured production and assembly lines using electricity, and the third integrated computers to digitise manufacturing. While the third industrial revolution was disruptive in transforming the entire tech community, the fourth industrial revolution aims to enhance the use of computers through automation, the internet of things, big data, the internet of systems, additive manufacturing, 3D printing and machine learning.

The fourth industrial revolution can be understood as an extension of the earlier revolutions. Today, Industry 4.0. technology is an inescapable by-product of the socio-technological revolution. It is becoming ubiquitous, and its relevance grows more indispensable with each passing day. There is a clarion call to develop a roadmap for public policy-makers and to mainstream national policy conversations to ensure coherence between technology and rights.

Artificial intelligence is the driver of this revolution. The classic definition of AI dates back to 1955, when the term “artificial intelligence” was first coined by John McCarthy and others in their paper, ‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence’: “the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving”.

In the current geo-economic climate, artificial intelligence is a legally untamed and unregulated innovation. Unregulated and derailed technological forces will never bring welfare and would certainly lead to chaos. Most countries are yet to regulate the use of AI-based technologies. As concerns relating to privacy, rights, liability and responsibility are collateral consequences of AI, there is a need to reaffirm the importance of a legal framework to modulate AI. Issues like big data’s disparate impact, fixing liability for automated vehicles, algorithmic harms and defining justice in automated law enforcement call for an interdisciplinary policy approach.

There is a need to evaluate the discourse of Industry 4.0. through the prism of ‘rights’ and ‘law’. AI is building a radical ecosystem in which machine intelligence is replacing human cognitive skills and human labour. By 2050, it is estimated that half of all current workplace tasks will be automated by machines. Alongside the productivity benefits that come with AI, improper economic planning can also lead to mass unemployment or even harmful discrimination. Given its immense potential to transform the economic ecosystem, the social governance of AI demands a collaborative effort and a robust regulatory framework that streamlines innovation and averts perils.

Issues of privacy, precision education and individual autonomy are compounded by ambiguities about the impact of AI-enabled technology on cognitive development. A collaborative effort is also required to conduct more research into the potential human rights harms of AI systems. Systemic investments should be made in creating structures to respond to these risks. In a theoretical sense, machines have no moral capability. However, when technology permeates human cognitive tasks, a scenario without moral responsibility would be an unethical experiment. The dilemma is how to incorporate basic moral elements into the functioning of the system.

The fundamental issue to keep in mind is that, to govern AI, nations may need to use AI. Existing legal frameworks, political institutions, financial bodies and government standards largely lack the predictive-technology competence to detect the manipulative capacity of AI. This paradigm shift in the foundational structures of society and economy demands a new governance model based on ‘human rights’, ‘law’ and ‘ethics’. Inequality can also be sensed in geopolitics, as some countries are ushering in this change through transformative policy documents, while many others are struggling to ride the tide. Weaponisation, privacy, national security, discrimination, bias and cross-border data flows are vital concerns that demand a more comprehensive approach grounded in principles of justice and rights.

Recent studies of social media platforms like Facebook and Twitter have shown how big data and algorithms also have the potential to interfere with democracy, freedom of expression and social mobilisation. While ethical considerations have largely been the norm in building digital infrastructure, problems like exclusion and inequality cannot be addressed unless we bring human rights into the mainstream. In the era of AI, deficient preparedness is unpreparedness. Socio-legal governance remains incomplete without institutionalisation. Addressing the plethora of challenges that have emerged requires governments to acknowledge their human rights obligations.

Industry 4.0. requires an effective legal system and safeguards in order to protect and incentivise innovative developers. There is a need to distinguish between AI-assisted human creations and AI-generated creations. One legal position holds that AI should not have legal personality and that ownership of IP rights should be granted only to humans. Proponents of this view claim that where AI is used only as a mechanism to assist an “author” in the process of creating intellectual capital, the present intellectual property framework can remain applicable. As AI systems evolve to produce “cultural artefacts, ranging from audio to text to images”, intellectual property issues will be at the forefront of the new technology.

Considering AI’s interaction with intangible legal elements, a rights-and-law framework for ascertaining liability can be used to assess damages caused by technology. Fundamental questions, such as whether a fully automated machine or “un-natural” person can be an author or an inventor, need to be revisited. The realms of “authorship”, “invention” and “ownership” require rethinking and widening to incorporate non-human actors. There is a need to harmonise laws with the contemporary challenges of Industry 4.0.

Article 12 of the United Nations Convention on the Use of Electronic Communications in International Contracts embodies the principle that a person or an entity on whose behalf a programme was created must, ultimately, be liable for any action generated by the machine. The dichotomy between digital creation and the creator’s liability needs to find its convergence in the legal framework. For example, liability caused by AI could be categorised as product liability or as negligence/malpractice within the contours of law.

Multilateralisation of legal assessments, international rights monitoring and human dignity approaches are the global values to be assimilated into policies to operationalise legal rights. An AI-enabled future is no longer a sci-fi dream; it is a living reality we are entering. Optimism has to be guided by logic, and fear should be guided by facts.

Countries are coming forward to develop legal conditions and procedural prerequisites to regulate the development of technologies that use AI. For instance, South Korea adopted the “Intelligent Robots Development and Distribution Promotion Act” in 2008. The act was adopted to improve the quality of life and to develop the economy through the creation and promotion of a strategy for the sustainable development of the smart robot industry. In 2018, France presented its national AI strategy, under which it plans to invest 1.5 billion euros over five years to promote research and innovation in AI-related technologies.

With technologies like AI, blockchain, nanotechnology, the Internet of Things and autonomous vehicles fuelling the revolution, the global community has to ensure collective responsibility to maintain a balance between human rights, sustainable development and technology. To protect legal rights, the big data that AI analyses must be treated the same as any other personal data. Similarly, the handling of data and information by private entities should comply with government-mandated regulations.

The future of human rights in Industry 4.0. pivots on governments’ public policy approaches to anticipating human rights concerns and mitigating threats through policy. The most essential duty cast upon governments is to use high-standard technologies procured through an open and transparent process.

The birth of new concerns requires the authorities to develop a framework for “human rights assessments”.

The question also arises regarding the application of ethics and moral relativism. The underlying theme of this school of thought is that artificial intelligence without moral functionality is not intellect but merely a mechanical response. Consciousness and responsibility form the edifice of ethical machine intelligence.

The development and deployment of AI are imperative to ensure sustainability, but the real task is to ensure their ethical and responsible use. There is a need to focus more on building research and development in AI through a multi-stakeholder model that engages the public sector, the private sector and academia together to elucidate crucial legal and ethical challenges. Digital information technology is redefining how people interact with one another, how consumers interact with producers and how innovators interact with technology.

There is a need to develop a legal ecosystem that can accommodate the changing character of these interactions in the fourth industrial revolution.

(Sajid Sheikh is Assistant Professor of Law at Maharashtra National Law University, Mumbai & Adithya Anil Variath is an LL.M. Scholar at Dharmashastra National Law University, Jabalpur.)