
Responsibility of states for artificial intelligence under ARSIWA: An emerging challenge for international law

Author: DR. S. SUKANYA IYER
Last Updated: March 11, 2026 02:35:49 IST

The rapid development and deployment of Artificial Intelligence (AI) across military, security, economic, and governance sectors have created unprecedented opportunities and risks for states. Governments around the world increasingly rely on AI systems for border surveillance, cyber operations, autonomous weapons, and public decision-making. While these technologies promise efficiency and strategic advantage, they also raise serious concerns about accountability when AI systems cause harm. In this context, the framework of the Articles on the Responsibility of States for Internationally Wrongful Acts (ARSIWA) becomes highly relevant. Although ARSIWA was drafted before the emergence of modern AI technologies, its principles remain applicable in determining how states should be held responsible for wrongful acts involving AI systems.

At the core of ARSIWA is the principle that a state is responsible for any internationally wrongful act that is attributable to it and constitutes a breach of an international obligation. The technological method through which the act is committed does not alter this responsibility. Whether the act is carried out by a human agent, an automated system, or a sophisticated AI algorithm, the state cannot avoid accountability simply by pointing to the technology involved. This fundamental rule ensures that the development of new technologies does not weaken the existing structure of international responsibility.

Articles 4 and 5 of ARSIWA are particularly important in this discussion. Article 4 provides that the conduct of any state organ shall be considered an act of the state under international law. If a government ministry, military unit, intelligence agency, or border authority deploys an AI system and that system commits an act that violates international law, the conduct is attributable to the state. The fact that an algorithm or autonomous system executed the action does not break the chain of attribution. Similarly, Article 5 extends responsibility to entities that are empowered by domestic law to exercise elements of governmental authority. In many countries, private technology companies develop and operate AI systems on behalf of the government. If such entities perform public functions such as surveillance, data analysis, or security operations, their actions may still be attributed to the state under ARSIWA.

One of the most significant legal principles relevant to AI governance is the concept of due diligence. Under customary international law, states have an obligation to ensure that activities within their territory or under their jurisdiction do not cause harm to other states. This obligation requires states to regulate, supervise, and control actors operating within their jurisdiction. In the context of AI, due diligence implies that states must adopt regulatory frameworks, monitoring mechanisms, and safety protocols to prevent harmful uses of AI technologies.

The importance of this principle was highlighted in the Corfu Channel case decided by the International Court of Justice. The Court emphasized that states must not allow their territory to be used in ways that violate the rights of other states. Translating this principle into the AI context, a state cannot ignore the activities of companies, research institutions, or individuals developing AI systems capable of causing cross-border harm. For example, if an AI system developed within a state’s territory is used to conduct large-scale cyber attacks against another country, the host state may bear responsibility if it failed to exercise reasonable control or oversight.

Another important dimension of state responsibility involves indirect participation in wrongful acts. International law recognizes that responsibility may arise not only from direct actions but also from support, assistance, or encouragement of unlawful conduct. This principle was articulated in the case concerning Military and Paramilitary Activities in and against Nicaragua, where the International Court of Justice examined the extent of responsibility for indirect involvement in armed activities. In the context of AI, such indirect responsibility may arise when states provide technological support, training, data, or infrastructure that enables harmful AI-driven operations.

The military use of AI presents some of the most complex legal challenges. Autonomous weapons systems, which can select and engage targets without direct human control, raise serious questions regarding compliance with international humanitarian law. The principles of distinction, proportionality, and precaution—central to the Geneva Conventions—require that parties to an armed conflict differentiate between civilians and combatants and avoid excessive harm to civilian populations. If an autonomous weapon system fails to make such distinctions and causes unlawful civilian casualties, determining responsibility becomes difficult but not impossible. Under ARSIWA, the state deploying the system remains responsible because it made the decision to develop, deploy, and rely on that technology.

Beyond warfare, AI is also increasingly used in border control, surveillance, migration management, and predictive policing. These systems can significantly affect human rights, including the rights to privacy, equality, and due process. When AI-driven decisions lead to discriminatory outcomes, wrongful detention, or unlawful surveillance, states may be held responsible for breaching their international human rights obligations. The reliance on automated systems does not absolve governments from ensuring that their policies comply with international law.

A further complication arises from the global and collaborative nature of AI development. Modern AI systems are often built using data sets, algorithms, and infrastructure sourced from multiple countries and private actors. Cloud computing services, multinational technology companies, and cross-border research collaborations make it difficult to identify a single actor responsible for a harmful outcome. In such cases, questions of shared or distributed responsibility may arise. While ARSIWA primarily focuses on the responsibility of individual states, its principles can still be applied to situations involving multiple actors by examining attribution, contribution, and control.

In addition, the increasing involvement of private corporations in AI development highlights the importance of effective domestic regulation. States cannot simply claim that harmful actions were committed by private companies beyond their control. International law expects states to establish legal frameworks that regulate corporate conduct, especially when these corporations operate in sectors with potential international consequences. Failure to regulate private actors may itself constitute a breach of the due diligence obligation.

Despite the challenges posed by emerging technologies, ARSIWA remains a flexible and resilient framework. Its principles of attribution, breach, and responsibility are broad enough to accommodate new forms of conduct, including those involving AI systems. Rather than requiring entirely new legal doctrines, international law can adapt existing rules to address technological developments.

However, this does not mean that the current framework is sufficient on its own. The complexity and autonomy of modern AI systems raise questions about foreseeability, control, and accountability that were not fully anticipated when ARSIWA was drafted. As a result, states and international organizations are increasingly discussing new regulatory approaches, including international guidelines, ethical standards, and possibly future treaties on AI governance.

In conclusion, the rise of artificial intelligence does not eliminate the responsibility of states under international law. On the contrary, it reinforces the importance of established legal principles such as attribution, due diligence, and accountability. Under ARSIWA, states remain responsible for wrongful acts committed through AI systems that they develop, deploy, or fail to regulate adequately. As AI continues to transform global politics and security, international law must evolve to ensure that technological innovation does not come at the expense of responsibility, stability, and the rule of law.

Prof. Abhinav Mehrotra is an Associate Professor and Deputy Director at O.P. Jindal Global University. His research interests include international law, human rights law, UN studies, refugee law, child rights, and transitional justice.

Dr. Biswanath Gupta is an Associate Professor at O.P. Jindal Global University. His research interests include international law and space law.
