Artificial Intelligence Responsibilities: Ethical And Legal Issues

Abstract

The subject of the article is the analysis of doctrinal positions, legal norms, and the practice of their application concerning the distribution of responsibility for the use of artificial intelligence among different subjects (designer, holder, owner, etc.), together with an assessment of whether, and how, artificial intelligence itself could be held responsible as a subject (or quasi-subject) of law. The responsibility of artificial intelligence systems is a complex problem, and its complexity stems from their autonomy and ability to self-learn. This factor makes it difficult to distribute the burden of responsibility among the various persons involved in the creation and operation of artificial intelligence. At the same time, the current (and, for the foreseeable future, the prospective) level of development of artificial intelligence systems does not allow them to be recognized as possessing the qualities of a person and, therefore, does not allow measures of responsibility to be applied to them as they are to individuals and legal entities. It is concluded that the solution to this problem lies in endowing artificial intelligence systems with elements of legal subjectivity. The article substantiates the concept of granting artificial intelligence systems the status of a quasi-subject of law, endowed not only with certain rights and obligations but also with the ability to own property. Property ownership will make it possible to implement not only civil law sanctions but also administrative and criminal law sanctions. When developing legislation in this area, a balance must be struck between protecting the public interest and the need to stimulate scientific and technological progress in this field.

Keywords: artificial intelligence, robots, legal responsibility, subject of law

Introduction

Issues of legal responsibility are traditionally among the most important in the legal regulation of artificial intelligence and robotics. As A.V. Neznamov and B.W. Smith aptly point out in this regard: «Say the word 'robots' among lawyers and you will probably immediately hear: 'Robots... Yes, that's curious. But who will be responsible for them?'» (Neznamov & Smith, 2019, p. 135).

P. Asaro suggests that some of the most important legal issues related to robots include the following. The first is the producer's responsibility for the quality of the product (product liability), since robots, from the point of view of the legal regime, are also goods. The second is the legal regime (or legal status) of robots as quasi-agents (intermediaries), since robots are endowed with functions of increasing complexity as they develop. The third is the limitation of liability, in the sense that intermediaries, owing to the legal nature of the relationship, cannot always bear full responsibility for their actions. The fourth is the application of legal liability by analogy with legal entities, that is, with agents who are not individuals (Asaro, 2007).

It is precisely the issues of responsibility of artificial intelligence systems and robots that are a priority for legal regulation. Law-making activity at both the national and supranational levels reflects this priority. For example, the European Parliament Resolution 2015/2103(INL) «Civil Law Rules on Robotics» (European Parliament, 2017) devotes a significant part of its preamble, as well as a separate section, to the problem of liability.

Moreover, this is quite logical: from the legal standpoint, artificial intelligence by its very nature differs from any other invention ever created by man, in that as it develops it inevitably acquires certain features of subjectivity. In addition, while an object of law can in no case be a subject of responsibility (in the absence of legal personality as such), a subject, according to the general approaches of jurisprudence, is responsible for its actions. Accordingly, even the emergence of some elements of legal personality allows us to raise the question of at least the distribution of responsibility.

Problem Statement

Most papers (and many studies) discuss responsibility in terms of the "robot" rather than "artificial intelligence". In this context these terms are close, though not quite synonymous. At present there is no single universally recognized notion of a robot. In general, it can be defined as an embodied artificial object or system that has the ability to manifest itself physically, including sensing, processing and, to some extent, influencing the world around it (Calo, 2015). There are different classifications of robots on various grounds (Neznamov, 2018), but common to all robots is at least a minimum degree of autonomy, which is determined by a certain level of artificial intelligence embodied in their software. Whereas artificial intelligence is in any case an information system (in the traditional sense of a database together with software for processing it (Amelin & Channov, 2015)), a robot is an information system endowed with the ability to directly affect objects in the outside world.

Thus, artificial intelligence characterizes every robot, but the reverse does not hold: artificial intelligence can be implemented within a hardware-software complex that has no ability to act independently in the physical environment.

Therefore, we believe that artificial intelligence is a broader concept than a robot, so the rest of the paper focuses specifically on the problems of artificial intelligence responsibility. At the same time, it draws on normative acts, materials of practice, and doctrinal work concerning the legal responsibility of robots, proceeding from the approach that the responsibility of robots ultimately amounts to the responsibility of their artificial intelligence.

Research Questions

In view of the foregoing, we see the subject matter of the article as the analysis of:

  • doctrinal positions;
  • legal norms of the Russian Federation and foreign countries;
  • the practice of their application concerning the distribution of responsibility in relations connected with the use of artificial intelligence among different subjects (designer, holder, owner, etc.), together with an assessment of whether, and how, artificial intelligence itself could be held responsible as a subject (or quasi-subject) of law.

Purpose of the Study

The purpose of the paper is to analyze different approaches to solving the issues of legal liability for damage caused by artificial intelligence systems and to develop proposals on legal regulation in this area.

Research Methods

The authors used various methods, both general scientific (description, comparison, classification, analysis and synthesis), aimed at revealing the current state of legal regulation in the sphere of artificial intelligence responsibility, and special methods of legal science. In particular, the historical-legal method was used to study the development of legal thought and legislation in the field of liability of artificial intelligence systems. The comparative-legal method made it possible to analyze how the problem is addressed in different national legal systems, as well as at the supranational level.

Findings

As stated above, the issues of artificial intelligence systems' responsibility are directly related to granting them the status of a subject of law. Here it should be noted that, globally, jurisprudence knows only two types of subjects of law: individuals and social entities (persons). The latter include legally recognized non-biological entities: 1) the State and state bodies; 2) municipalities and local self-government bodies; 3) dependent territories; 4) peoples struggling for their independence; 5) legal entities; 6) international intergovernmental and non-governmental organizations (Popov & Ulyanova, 2006). As this list shows, social entities (persons) are essentially associations of individuals, i.e. a social entity (person) as a subject of law is derivative of the individual.

Thus, when a social entity is held liable as a collective subject, it is ultimately particular individuals who suffer the negative consequences, although the burden is shared among them, equally or unequally. Incidentally, the responsibility of legal entities in various legal systems differs quite significantly from the responsibility of individuals (for example, with regard to the definition of the guilt of a legal entity, the grounds of responsibility, specific sanctions, etc.).

In our opinion, these considerations can also form the basis of a concept of legal responsibility of artificial intelligence systems, at least at present. At the same time, unlike a legal entity, artificial intelligence may potentially (at present this still seems a remote possibility) acquire not only the legal status of a full subject of law, but also all the features of a person, which would allow one to speak of its fully independent responsibility, without transferring its consequences to any other subjects.

Here it should be noted that there are currently different approaches to the essence of artificial intelligence and, accordingly, to its definition. Thus, most experts speak of the possibility of creating at least two types of artificial intelligence: "strong" and "weak".

Weak artificial intelligence is understood as merely a program that works according to given algorithms (Searle, 1992). The concept of strong artificial intelligence is based on recognition of the possibility of creating an artificial intelligence that can not only think, but also understand and experience (Migurenko, 2010). "Weak" artificial intelligence is thought to be capable of performing certain types of tasks and is limited to them. "Strong" artificial intelligence (also called "general artificial intelligence") is a real or hypothetical form of this technology that can reach or exceed the level of human intelligence and apply its problem-solving abilities to any problem, much like the human brain (Shchitova, 2019).

It must be said, however, that this interpretation of strong and weak artificial intelligence is not the only one. Bernard Marr points out that "the definitions of artificial intelligence begin to change depending on the goals one is trying to achieve with an artificial intelligence system. Generally, people invest in the development of artificial intelligence for one of these three purposes: creating systems that think just like humans ("strong artificial intelligence"); creating systems that will work without understanding how human thinking works ("weak artificial intelligence"); using human thinking as a model, but not necessarily the end goal" (Marr, 2018). Accordingly, some experts suggest that strong artificial intelligence should be understood solely as an intelligence that is a full analogue of human intelligence but created artificially; others, as an intelligence that has nothing in common with human intelligence but is capable of solving tasks of comparable or even greater cumulative complexity; still others, as any intelligence that is aware of itself as an independent individual, regardless of whether it is comparable to human intelligence or even significantly inferior to it in intellectual capability (Bokovnya et al., 2020).

From the perspective of responsibility, the latter position is undoubtedly of the greatest interest, since independent responsibility in the full sense can be borne only by a self-aware person. The term "superstrong artificial intelligence" is also sometimes used in the scientific literature. This term should be distinguished from the term "superintelligence", since the latter refers to a hypothetical intellect far superior to human cognitive abilities in all areas of knowledge, which, however, may well not be conscious of itself as a person (Bostrom, 2014), whereas the "super-strength" of artificial intelligence is linked precisely to self-consciousness.

Undoubtedly, the problem of the self-consciousness of artificial intelligence is extremely difficult: even if humankind creates arbitrarily advanced artificial intelligence systems, capable of solving tasks far more complicated than those accessible to humans, there are currently no criteria allowing one to determine unambiguously whether such a system possesses self-consciousness, not least because such a system would be able to easily "deceive" researchers by imitating self-consciousness it does not have (e.g., as J. Searle demonstrated, the "Turing test" is easily circumvented in thought experiments like the "Chinese room" (Searle, 1992)). In general, however, this question seems to fall into the realm of psychology and philosophy. With regard to the topic of our article, it appears possible to state that the creation of such artificial intelligence systems, when (or if) it occurs, will inevitably raise the question of their responsibility as autonomous individuals. We will return to this later. In the meantime, let us turn to modern artificial intelligence systems (and to those that will be developed in the foreseeable future), which, although not yet fully-fledged subjects of law, nevertheless possess some rudiments of subjectivity.

Two extreme positions can be distinguished in the development of artificial intelligence systems with respect to liability issues:

1) Artificial intelligence is a person and as such is recognized as a full subject of law. Responsibility for the actions of an artificial intelligence is borne directly by the artificial intelligence itself. As noted above, at present such a construction is purely theoretical.

2) The artificial intelligence has no signs of a subject of law (although here the question arises as to how reasonable it is to call it artificial intelligence in this case). Responsibility for its actions is entirely imposed on other subjects.

A large number of options lie between these two extremes. Within these options, legal responsibility for the actions of artificial intelligence is distributed between the manufacturer and other subjects, depending on the degree of independence of the artificial intelligence system. At certain stages, the question of the latter's partial responsibility can already be raised.

As for other subjects that could potentially be responsible for the actions of artificial intelligence, their range is quite wide. According to existing approaches, they can include:

  • the developer of the concept of the artificial intelligence system;
  • the developer of the software for an artificial intelligence system;
  • the manufacturer of the artificial intelligence system itself;
  • the artificial intelligence system vendor;
  • the owner of an artificial intelligence system;
  • the holder of the artificial intelligence system;
  • third parties who participated in the creation or development of the artificial intelligence system (Coeckelbergh, 2019).

At first glance, there are no peculiarities in the distribution of responsibility among all the above-mentioned persons as compared with responsibility for any ordinary complex object of law. For example, in the case of a passenger car that caused damage to third parties in an accident, depending on the established causes of the accident, liability may be imposed on the driver as well as on the manufacturer, the seller, and so on, all the way up to the developer of the software that is increasingly used in modern motor vehicles.

However, the fundamental peculiarity of resolving the issue of responsibility for the actions of an artificial intelligence system is that even weak artificial intelligence has a certain degree of autonomy (as noted above, the absence of this property makes it impossible to raise the very question of recognizing it as artificial intelligence). A certain autonomy of action is presumed at the system's creation. The essence of the autonomy of an artificial intelligence unit is precisely that entering certain data and programming the unit in a certain way does not necessarily lead to a specific result in response to given circumstances (commands entered) (Beard, 2014). Accordingly, its behavior cannot be fully regulated, and hence its possible actions cannot be fully predicted. As a consequence, "a producer who takes all reasonable measures may nevertheless create a cyber-physical system that will eventually interact with our complex world in a not entirely reasonable way. This outcome is broadly foreseeable, even if the particular failure is unpredictable" (Owen, 2009, p. 1307).

This problem is exacerbated for self-learning systems, which currently make up the majority of artificial intelligence systems. Even when an artificial intelligence system is deliberately trained by a specific person, the question of apportioning responsibility between that person and the system's creators is already quite complicated. Self-learning systems, however, learn by themselves, by acquiring certain "experiences". "For example, robots equipped with artificial intelligence are constantly learning and adjusting their behavior, which in turn affects decision-making, with the result that the robot may cause harm that would not be due to defects in its design or human influence" (Tolstov & Sergeeva, 2015, p. 47).

Holding manufacturers or developers solely responsible for the harm caused by such systems is certainly possible, but hardly advisable, since the result of training then depends not only on the data pre-installed in the system, but also on numerous environmental factors that are simply impossible to predict. As is known, with respect to any offense, the factors (determinants) contributing to its commission form a generalized concept whose content comprises the causes and conditions of offenses (Channov & Dobrobaba, 2020). In this context, a certain rough analogy can be drawn with the modern doctrine of crime factors, which holds that the criminal behavior of an individual is associated with both the social environment and genetic predisposition (Sitnikova, 2021; Agapov et al., 2006). How does one determine which is more to blame for a person's criminal behavior: the social factors influencing him or his inherent genetic disposition? It is just as difficult to share the burden of responsibility between the developers and producers of an artificial intelligence system, who are responsible for its "genome," on the one hand, and the owners and other people interacting with the system, who are responsible for its "social environment", on the other.

As P.M. Morhat rightly notes, "in most cases it is reasonable and justifiable to hold developers, creators and programmers of such a unit responsible for errors made by an artificial intelligence unit. But we consider that this rule is not quite applicable in situations when an artificial intelligence unit starts to function in a way that could not have been predicted before, especially if such a unit interacts with other agents in the so-called "Internet of Things" and self-trains, including through that interaction" (Morhat, 2018, p. 47).

Legal science proposes different ways of solving this problem. Some of them involve giving artificial intelligence systems elements of subjectivity. Thus, the Italian scholar U. Pagallo argues that there is no point in developing new legislation on the legal status of artificial intelligence and robots if we can use the well-thought-out provisions of Roman law on the legal status of slaves. A robot, like a slave, has no rights and obligations; a robot, like a slave, can make decisions with legal consequences, including for its owner; slaves were endowed with property (peculium), hence robots should likewise be endowed with property; and slaves and robots alike are capable of causing harm, for which their masters are responsible (Pagallo, 2013, p. 102).

Other researchers propose to introduce a new legal construct, the "electronic person", for robots equipped with artificial intelligence systems. An electronic person can be interpreted as a personified unity of legal norms that bind and empower an artificial intelligence possessing the criteria of "reasonableness" (Yastrebov, 2018, p. 47). In this regard, it is recommended to endow such electronic persons with certain rights and obligations, including compensation for harm caused by them (Yurenko, 2017). The scientific literature also offers a generalized idea of the possible special rights of a robot, such as the right not to be disabled (against its "will"), the right to full and unhindered access to its own code, the right not to be experimented on, the right to make its own copy, and the right to "privacy" (Dvorsky, 2012). Essentially, in terms of legal status, "electronic persons" occupy a kind of middle ground between a person and a thing.

There are also other options, not related to giving robots and other artificial intelligence systems legal personality. In particular, the idea of compulsory liability insurance is widespread (Blazheev & Egorov, 2020; Iriskina & Belyakov, 2016). As we see, in this case the question of the distribution of responsibility is not solved as such, but is replaced by shifting the burden to the insurer. In addition, this construction is applicable only to the property (civil) liability of artificial intelligence systems. Meanwhile, in certain situations the question may arise of other types of legal responsibility, in particular administrative and criminal.

Initially, it may seem that only individuals associated with artificial intelligence systems (developers, sellers, owners, etc.) can act as the subjects of these types of responsibility. However, administrative-law and criminal-law sanctions are not necessarily associated with an impact on specific individuals. Both individuals and legal entities are subject to administrative responsibility for a number of offences, although only a limited list of administrative penalties applies to the latter. The Russian legal system does not provide for the criminal liability of legal entities, but it is used in many other states (Austria, Belgium, Denmark, Norway, Poland, Portugal, France, China, Israel and others) (Naumov, 2015). In these states too, of course, only a limited set of sanctions is applicable to legal entities, owing to their specific nature.

In this connection, we find the concept of granting artificial intelligence systems the status of a quasi-subject of law, endowed not only with certain rights and obligations but also with the possibility of owning certain property, to be worthy of attention. Such property will make it possible to implement not only civil law sanctions, but also administrative and criminal law sanctions, at least in the form of fines.

This concept seems to combine the merits of the idea of liability insurance for artificial intelligence systems with the idea of so-called "electronic persons". Obviously, in this case the owner (or holder) of the system should be obliged to provide autonomously operating artificial intelligence systems with the necessary "minimum capital", the amount of which may vary depending on the field of activity and other factors, as a kind of insurance.

The question of applying other (non-property) penalties to artificial intelligence systems is more complicated. Thus, it is currently doubtful that such systems could be subjected to measures of moral punishment, since they are not human beings and cannot have feelings (Gouveia, 2019, p. 238). On the other hand, some measures of administrative enforcement, such as an administrative warning, which is by nature a moral punishment, nevertheless give rise to certain legal consequences when applied (the state of being subject to administrative punishment). In this respect, their application to artificial intelligence systems looks quite acceptable.

Regarding criminal responsibility, as G.G. Kamalova rightly notes, "the criminal penalties applicable to 'strong' artificial intelligence and decisions based on it also require discussion among academics. When choosing acceptable decisions, it is necessary to take into account the artificial intelligence's lack of ability to feel emotions, including a sense of self-determination and suffering. Therefore, many well-established measures of responsibility seem meaningless when applied to AI-based devices and solutions. At the same time, although today the possibility of 're-education' of artificial intelligence through its reprogramming, or of disposal of the device, is justified, the question remains open as to the nature of these measures and their difference from reprogramming a conventional computer tool and disposing of other things" (Kamalova, 2020). However, the situation could change if a physical carrier of artificial intelligence gains full autonomy from human beings. If such a carrier acquires the "personal" ability to comprehend its actual behavior and its possible results, and to direct its own activity at will (Kibalnik & Volosyuk, 2018), this will allow raising the question of its "personal" responsibility.

At present, however, it would seem that criminal responsibility for the actions of artificial intelligence systems should still be assigned to humans, provided there are legally defined grounds for it and for the distribution of its burden. At the same time, it seems quite appropriate to establish measures of criminal liability not only for specific perpetrators, but also for legal entities, since the development and mass introduction of robots and artificial intelligence systems is usually associated with the activity of large corporations. Of course, this is only possible if the country's legislation contains fundamental provisions on the criminal liability of legal entities; but, as noted above, such liability already exists in many states, while in others, including the Russian Federation, the question of its introduction has been raised (Naumov, 2015).

At the same time, in establishing measures of liability of individuals and legal entities for the actions of robots and artificial intelligence systems, a balance should be struck between the protection of the public interest and the need to stimulate scientific and technological progress in this area. It should be noted that some experts propose formulating specific corpus delicti for criminal offenses related to artificial intelligence. For example, N.L. Denisov suggests equating artificial intelligence systems to sources of increased danger (which in itself seems quite reasonable), but in connection with this proposes placing on their developers almost absolute responsibility for all of their subsequent actions. In particular, Denisov writes: "Artificial intelligence can be created with the assumption of self-development and independent thinking, going beyond the original program created by man. In this case, it may be a question of the responsibility of the person who created this artificial intelligence. This is determined by the fact that such artificial intelligence can be considered as a source of increased danger, on the basis of which the said person would have to take measures to prevent the threat of causing harm to the protected public relations". He further proposes to prosecute the creator of an artificial intelligence if that artificial intelligence has in turn created another artificial intelligence that has committed a crime. Moreover, if the crime was committed by an artificial intelligence system as a result of "special harmful impact on the learning artificial intelligence by another person who is not its developer", then, according to Denisov, the developer must again be held responsible, since "he did not foresee such a possibility". At the same time, this author proposes not to apply the statute of limitations to crimes committed through or with the participation of artificial intelligence (Denisov, 2019).

We strongly believe that such a position not only indicates a lack of understanding of the essential features of artificial intelligence (whose behavior, as noted above, is inherently somewhat autonomous and cannot be predicted with absolute accuracy), but also poses a serious danger to the development of robotics and artificial intelligence systems in the country. In effect, this researcher suggests introducing criminal liability for creators of artificial intelligence for any crimes committed by the latter, even if they were committed as a result of the criminal influence of third parties (an entirely possible situation with teachable artificial intelligence systems), and even without regard to the statute of limitations. Accordingly, any creator would have to, under penalty of criminal liability, foresee all possible ways of influencing the system he creates, including those that do not exist at the time of development but may appear sometime in the future. To put it bluntly, this task looks absolutely impossible under conditions of rapid technological development.

This approach is somewhat reminiscent of the "exploded bottle" concept used for some time by American courts. Its essence was that, in a particular case, a court held that even though a producer of sparkling water had reasonably tested the bottles in which his product was bottled, he was still liable for injuries caused by the explosion of one of the bottles, which occurred because of a crack that was not (and probably could not have been) detected (Escola v. Coca Cola, 1944). Based on this precedent, U.S. courts began to hold manufacturers liable for failing to eliminate unreasonable risks in their designs that they could not have foreseen at the time they sold the products that ultimately turned out to be dangerous, or for failing to warn the consumer of such risks. It should be emphasized, however, that those cases involved civil rather than criminal liability and, most importantly, the courts subsequently abandoned that position. At present, in most states, the reasonableness of a design or of precautions is determined in relation to the risks foreseeable at the time of sale (Neznamov & Smith, 2019).

Accordingly, it appears that imposing overly harsh criminal penalties on creators and manufacturers of artificial intelligence systems may significantly reduce their willingness to work in this field and slow down the country's scientific and technological development in this area.

Conclusion

The responsibility of artificial intelligence systems is not a trivial problem; its complexity is caused by their autonomy and ability to self-learn. This factor makes it difficult to distribute the burden of responsibility among the various individuals involved in the creation and operation of artificial intelligence. At the same time, the current (and, in the foreseeable future, the prospective) level of development of artificial intelligence systems does not allow them to be recognized as possessing the qualities of a person and, consequently, does not allow measures of responsibility to be applied to them as they are to individuals and legal entities.

The solution to this problem lies in giving artificial intelligence systems elements of subjectivity (legal personality). We have substantiated that the concept of granting artificial intelligence systems the status of a quasi-subject of law, endowed not only with certain rights and obligations but also with the possession of certain property, could already be realized. Property possession will make it possible to implement not only civil law sanctions, but also administrative and criminal law sanctions, at least in the form of fines.

As for other types of punishments, with the exception of an administrative warning, all of them can be applied only to individuals. When developing legislation in this area, a balance should be struck between protecting the interests of society and the need to stimulate scientific and technological progress in this field.

Acknowledgments

The authors express their gratitude to the RFBR for financial support of scientific research No. 20-011-00765 "Constitutional and legal mechanisms for the implementation of social rights and freedoms using artificial intelligence: problems of legal regulation, limits, and responsibility", within the framework of which this article was prepared.

References

  • Agapov, A. F., Barinova, L. V., & Grib, V. G. (2006). Criminology. Justitsinform.

  • Amelin, R., & Channov, S. (2015, November). State information systems in e-government in the Russian Federation: problems of legal regulation. Proceedings of the 2015 2nd International Conference on Electronic Governance and Open Society: Challenges in Eurasia, 129-132.

  • Asaro, P. M. (2007). Robots and Responsibility from a Legal Perspective. http://www.peterasaro.org/writing/ASARO%20Legal%20Perspective.pdf

  • Beard, J. M. (2014). Autonomous weapons and human responsibilities. Georgetown Journal of International Law, 45, 617–681.

  • Blazheev, V. V., & Egorov, M. A. (Eds.). (2020). Digital law. Prospect.

  • Bokovnya, A. Yu., Begishev, I. R., Khisamova, Z. I., Narimanova, N. R., Sherbakova, L. M., & Minina, A. A. (2020). Legal Approaches to Artificial Intelligence Concept and Essence Definition. Journal San Gregorio, 41, 115–121.

  • Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

  • Calo, R. (2015). Robotics and the New Cyberlaw. California Law Review, 3, 513–563.

  • Channov, S., & Dobrobaba, M. (2020). Reasons and Conditions for Disciplinary Offenses in the Civil Service System. Advances in Social Science, Education and Humanities Research, 498I, 65–70.

  • Coeckelbergh, M. (2019). Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Science and Engineering Ethics, 4, 2051–2068.

  • Denisov, N. L. (2019). Conceptual Foundations for Forming an International Standard in Establishing Criminal Responsibility for Acts Involving Artificial Intelligence. International Criminal Law and International Justice, 4, 18–20.

  • Dvorsky, G. (2012). When the Turing Test is not enough: Towards a functionalist determination of consciousness and the advent of an authentic machine ethics. http://www.sentientdevelopments.com/2012/03/when-turing-test-is-not-enough-towards.html

  • Escola v. Coca Cola. 24 Cal. 2d 453, 150 P. 2d 436 (1944). http://online.ceb.com/calcases/C2/24C2d453.htm

  • European Parliament. (2017). Civil Law Rules on Robotics. European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). http://robopravo.ru/uploads/s/z/6/g/z6gj0wkwhv1o/file/oQeHTCnw.pdf

  • Gouveia, S. (2019). A defense of the principle of distributed responsibility in artificial intelligence. Romanian Journal of Philosophy, 2, 235–249.

  • Iriskina, E. N., & Belyakov, K. O. (2016). Legal aspects of civil liability for damage caused by the actions of a robot as a quasi-subject of civil-law relations. Humanitarian informatics, 10, 63–72.

  • Kamalova, G. G. (2020). Some issues of criminal-legal responsibility in the application of artificial intelligence and robotics. Bulletin of Udmurt University. Series Economics and Law, 3, 382–388.

  • Kibalnik, A. G., & Volosyuk, P. V. (2018). Artificial intelligence: questions of criminal-legal doctrine, waiting for answers. Legal science and practice: Bulletin of Nizhny Novgorod Academy of the Ministry of Internal Affairs of Russia, 4, 173–178.

  • Marr, B. (2018). The Key Definitions of Artificial Intelligence (AI) That Explain Its Importance. https://www.forbes.com/sites/bernardmarr/2018/02/14/the-key-definitions-of-artificial-intelligence-ai-that-explain-its-importance/#4da358124f5d

  • Migurenko, R. A. (2010). Human Competences and Artificial Intelligence. Proceedings of Tomsk Polytechnic University. Engineering of Georesources, 6, 85–89.

  • Morhat, P. M. (2018). Liability of third parties for committing harming actions by units of artificial intelligence. Public Service and Personnel, 3, 47–49.

  • Naumov, A. V. (2015). Criminal Liability of Legal Entities. Lex Russia, 7, 57–63.

  • Neznamov, A. V. (Ed.). (2018). Regulation of robotics: an introduction to “robot law”. In: Legal aspects of development of robotics and artificial intelligence technologies. Infotropic Media.

  • Neznamov, A. V., & Smith, B. W. (2019). Robot is not to blame! A View from Russia and the United States on the Problem of Liability for Damage Caused by Robots. Law, 5, 135–156.

  • Owen, D. G. (2009). Figuring Foreseeability. Wake Forest Law Review, 44, 1277–1307.

  • Pagallo, U. (2013). The Law of Robots: Crimes, Contracts, and Torts. Springer.

  • Popov, S. N., & Ulyanova, N. A. (Eds.). (2006). Jurisprudence. Textbook. Publishing house of ASAU.

  • Searle, J. R. (1992). The rediscovery of the mind. A Bradford Book, The MIT Press.

  • Shchitova, A. A. (2019). On the potential legal capacity of artificial intelligence. Agrarian and land law, 5, 94–98.

  • Sitnikova, M. P. (2021). Influence of genetics on the criminal behavior of an individual. Medical Law, 1, 49–54.

  • Tolstov, I. E., & Sergeeva, A. S. (2015). Cultural aspect of the evolution of ideas about robots. New View. International scientific herald, 9, 46–57.

  • Yastrebov, O. A. (2018). Legal entity's legal personality: theoretical and methodological approaches. Proceedings of the Institute of State and Law of the Russian Academy of Sciences, 2, 36–55.

  • Yurenko, N. I. (2017). Robots – potential subjects of law: a myth or reality. In: Innovations in Science and Practice (pp. 45–51). LLP “Dendra”.

Copyright information

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

About this article

Publication Date: 31 March 2022

eBook ISBN: 978-1-80296-124-9

Publisher: European Publisher

Volume: 125

Edition Number: 1st Edition

Pages: 1-1329

Cite this article as:

Lipchanskaya, M. A., Eremina, M. A., & Privalov, S. A. (2022). Artificial Intelligence Responsibilities: Ethical And Legal Issues. In I. Savchenko (Ed.), Freedom and Responsibility in Pivotal Times, vol 125. European Proceedings of Social and Behavioural Sciences (pp. 1-11). European Publisher. https://doi.org/10.15405/epsbs.2022.03.1