The scientific paper presents the author's understanding of artificial intelligence. This so-called "product", relevant to the individual, society, state power, peace, and the security of mankind, can objectively become socially dangerous. This can occur at the stages of developing an artificial intelligence model, its production, and its operation, which creates the need for legal regulation and, possibly, criminal law protection of public and environmental safety. Legal regulation is necessary at all stages, whereas criminal law protection can be relevant only at the stages of production and operation. Artificial intelligence can be the object of criminal law relations, but not their subject. The presented research does not propose draft elements of crimes for protecting the relations of AI production and operation, as there is as yet no clarity about the types and nature of socially dangerous infringements in these areas; this precludes specifying the content of the corpus delicti necessary for such criminal law protection. Specific criminal law protection becomes possible only once socially dangerous violations of these relations occur in practice. At the same time, criminalizing infringements on AI production and operation relations in accordance with the principles of criminal law will make it possible to deter individuals from acts that create serious social risks in relation to artificial intelligence, and to prevent such acts. This, in turn, will give the regulatory branches of law greater opportunities for the development of artificial intelligence at the level of model formation, production, and operation.
Keywords: Artificial intelligence, natural intelligence, object of criminal law relations, subject of criminal law relations
Artificial intelligence (AI) is increasingly present in real life and may soon become an integral part of human activity in various fields: social, economic, medical, biological, environmental, and others. The development of artificial intelligence is aimed at enhancing the human being and the relationships in which he or she participates. It is assumed that it will give greater creative capacity to the individual, society, state power, and the world as a whole. At the same time, artificial intelligence can be socially dangerous to humans, the environment, and the world. Already at this stage it is necessary to exert legal influence on the processes of artificial intelligence development, production, and subsequent operation. It is extremely important to determine methods of criminal law prevention that avert the possible life-threatening social risks of artificial intelligence.
It is necessary to develop a theoretical understanding of artificial intelligence within the system of criminal law relations, in particular:
the concept and nature of artificial intelligence;
the artificial intelligence position in the system of criminal law relations.
Purpose of the Study
The purpose of the study is to establish the status of artificial intelligence in the system of criminal law relations.
The research employed the methods of induction, deduction, system analysis, and hypothesis formation and testing. The author also applied the methods and principles of determinism. The comparative legal method was used to establish the specific features of artificial intelligence. The application of these methods revealed a number of issues that require detailed scientific analysis.
Artificial intelligence (AI) is increasingly present in real life and may soon become an integral part of human activity in various fields: social, economic, medical, biological, environmental, and others. Moreover, under certain conditions its scope can equal, and even exceed, that of human activity. In this regard, a problematic issue arises: what is artificial intelligence, and how does it relate to law and to the traditional subjects of law? (Nemitz, 2018; Surden, 2019).
It must be understood that this kind of intelligence can manifest itself not only positively but also negatively. This is clearly seen in the example of natural intelligence, which, under certain circumstances, does not realize its inherent creative properties but, on the contrary, acts destructively. The following question is quite obvious: can AI share this feature of natural intelligence, that is, not only create but also destroy? If the answer is straightforward, then, of course, it can. Bringing this significant problem into the legal field involves developing the concept of AI and identifying its nature in the criminal law aspect (Khisamova et al., 2019).
In the phrase "artificial intelligence", the key word is "artificial". The word "intelligence" is generally associated with a person; that is, human intelligence is a natural phenomenon. The essential feature of natural intelligence is its self-sufficiency, which ensures its existence in time and space, its development, and its interaction with other intelligences and with animate and inanimate nature. In this sense, it cannot exist outside a person, nor can it be anything other than biological.
Can what is created by a human being properly be called intelligence? It is highly unlikely, since the created product does not have biological properties and, as a result, lacks self-sufficiency; this is what justifies calling it artificial. The word "artificial" in relation to intelligence means that its source lies not in the biosocial properties of a person but in the intellectual energy of an individual, which is itself a consequence of various biosocial processes.
There are various approaches to defining artificial intelligence, and many of them are technically oriented (Mills & Uebergang, 2017; Schatsky et al., 2014). In our opinion, AI is a product created by a person on the basis of digitalization, possessing limited independence in time, space, and the nature and types of its systemic activities, which are carried out on a moral basis for the social benefit of the individual, society, state power, peace, and the security of mankind.
The development of artificial intelligence presupposes a moral basis. This feature means that the created product has not only scientific characteristics but also social ones relevant to certain areas of human life. Hence the conclusion: the digital (technological) component cannot have properties that make the product aggressive, violent, or destructive, or that threaten such manifestations toward the surrounding reality, animate and inanimate nature, and the human being. We are speaking not about blocking the product's implementation of such properties but about their absence from the product itself. Artificial intelligence that has destructive attributes is anti-moral.
An important feature is the social purpose of the product. The available definitions of artificial intelligence usually do not include the aim of its functioning. We believe that the lack of a goal practically excludes the social significance of artificial intelligence. If a particular phenomenon (physical, intellectual, material, or spiritual) is a carrier of social properties, then it must have a purpose. To have social features means to have social significance: to carry out activities in which a person, society, state power, and, in some situations, the whole world and the security of mankind are objectively interested. The existence of a purpose for artificial intelligence reflects several fundamental points. First, the product acts as a socially significant factor: it participates in social relations in the form of an object. Second, it cannot be socially dangerous to the surrounding reality. Third, it is an object of human control.
The purpose of artificial intelligence is its social effectiveness: usefulness for the individual, society, state power, the world as a whole, and human security. The practical usefulness of artificial intelligence cannot be limited by national borders. It should be international in character, as follows from the Russian Constitution, under which the generally recognized principles and norms of international law and the international treaties of the Russian Federation are an integral part of its legal system.
Taken together, these features reveal not only the special (technical) but also the social aspect of AI, distinguishing it from natural intelligence and helping to avert possible socially dangerous risks.
It is important to identify not only the concept of artificial intelligence but also its nature. This involves: the source of artificial intelligence formation; its difference from natural intelligence; the directions of its research and practical implementation; and the technological and legal mechanisms for mitigating risks and protecting those relations in which the product's activity can create socially dangerous risks.
The nature of AI is expressed in its social-digital properties. The digital characteristics of the product show that its source is human intelligence and that it therefore cannot acquire natural properties of its own. The social features of the product reflect several aspects. First, the product is called into being by the need to strengthen the role of the human being in the sphere of animate and inanimate nature and of public, state, social, international, and other relations. Second, the sociality of the product shapes the directions of its technological improvement and practical implementation. This increases the risks of possible public danger both from the product and in relation to it.
The application of the regulatory and protective branches of law to AI depends largely on its social status. This status should be understood as the state of the product in which its formation, existence, development, and connection with other phenomena, including society, are objective. The social status of the product is determined by its purpose, in effect by its societal function of ensuring the interests of the person. Social status by itself does not form a mechanism that would, on the one hand, keep the product from committing offenses or, on the other, impose appropriate liability if they are committed. Nevertheless, the product exists and realizes itself; it is formed and developed by a person and for a person, and this requires certain measures to ensure its security and public status.
In this connection, modern civilization has legal systems divided into regulatory and protective ones. Regulatory systems govern the relevant social relations by giving legal status to individuals and legal entities, enabling them to create all kinds of products and to develop and adapt them to the needs of the individual, nature, and society. Protective systems deal with relations in the positive branches of law: they impose on individuals, and often on legal entities, obligations to refrain from socially dangerous activity and, in cases of its violation, obligations to bear legal liability.
The situation described determines whether AI can have a legal status under the regulatory and protective branches of law. This product is not intellectual in the human, cognitive sense (Surden, 2019). It is not biological and cannot be a holder of consciousness and will, and it is precisely these three factors (intelligence, consciousness, and will) that form the moral and legal regulation of their holders' behavior.
The conclusion that AI cannot possess these factors to some extent contradicts the optimistic view that the development of robotics will lead to robots acquiring the features of a legal person (Arkhipov & Naumov, 2017; Petrov, 2019). Can the mere growth of knowledge endow a product with biological attributes? It seems not. Human biology, including intelligence, consciousness, and will, is "tied" to the animate and inanimate nature of the universe (Leontiev, 2016), so the human intellect is also somehow correlated with it. Can AI, lacking biological features, correspond appropriately to animate and inanimate nature? The answer is also no; a positive answer would signal the beginning of a new civilization.
The ideas presented above show that AI is not able to perceive law as a regulator ensuring the interests of the individual, of the intellect itself, of society, state power, the world, and humanity. Nor can it assess the legal mechanism for protecting the listed values. Most of the areas in which AI has been successful are highly structured ones with obviously right and wrong answers that follow from given basic conditions and can be captured algorithmically (Surden, 2019). Even within these well-structured tasks there remains the problem of algorithmic discrimination, when the decision made by the AI is not fair (Alarie et al., 2017; Hacker, 2018).
It can be concluded that AI cannot be a subject of liability for violating the regulatory and protective norms of the corresponding branches of law.
In the author's opinion, AI has the status neither of an executor of criminal law nor of its administrator. Such status requires the intelligence of a sane person who has reached a certain age. Moreover, for certain crimes the person must possess a number of other qualities: citizenship, gender, official status, and so on. Both of these legal actors must be aware of their status; this is one of the factors of their involvement in solving criminal law problems.
The subject must have the same property when committing a guilty, illegal, socially dangerous act entailing the threat of criminal punishment (the executor of law) or when carrying out criminal prosecution, imposing criminal liability, and exempting from punishment (the administrator of law). All of this presupposes the consciousness and will of a human being, and it does not allow AI, even in the future, to obtain the above-mentioned criminal law status.
The possibility of recognizing AI as one of the objects of criminal legal protection requires its consideration in three aspects: development, production and operation.
At the level of its development, including experimental work, the product cannot be an object of criminal law relations. At this stage, the regulatory branches of law are sufficient to prevent the inclusion of destructive components in the product's development. Such branches do not hinder the formation and implementation of scientific ideas useful for the development of artificial intelligence, whereas criminal law protection at this stage would in fact restrain these processes.
The production of artificial intelligence, by contrast, can be an object of criminal law relations. Here the regulatory branches of law alone are unlikely to ensure the quality of the artificial intelligence embedded in the project development model, and production is therefore included among the objects of criminal law relations. The "object" in this case is the set of relations involved in AI production. A relationship protected by criminal law involves the creation of a quality product in accordance with the requirements of the relevant standards.
The significance of quality is also expressed in the fact that the output product must not be socially dangerous to society or to animate and inanimate nature during its operation, storage, and transportation. The object of criminal law relations also includes the relations of artificial intelligence maintenance. These are the relations that define a special order of human control over the product's operation, its maintenance and development, and its interaction with other people, society, animate and inanimate nature, and other material products, including artificial intelligences of one kind or another.
The present research does not propose draft elements of crimes for protecting the relations of AI production and operation, as there is as yet no clarity about the types and nature of socially dangerous infringements in these areas. This precludes specifying the content of the corpus delicti necessary for such criminal law protection. Specific criminal law protection becomes possible only once socially dangerous violations of these relations occur in practice.
At the same time, criminalizing infringements on AI production and operation relations in accordance with the principles of criminal law will make it possible to deter individuals from acts that create serious social risks in relation to artificial intelligence, and to prevent such acts. This, in turn, will give the regulatory branches of law greater opportunities for the development of artificial intelligence at the level of model formation, production, and operation.
The study was funded by RFBR, project number 20-011-00194, "Theoretical and methodological model of preventive law as a new legislative system".
Alarie, B., Niblett, A., & Yoon, A. (2017). How Artificial Intelligence Will Affect the Practice of Law.
Arkhipov, V., & Naumov, V. (2017). Artificial intelligence and autonomous devices in the context of law: on the development of the first Russian law on robotics. SPIIRAN studies, 6, 53–56.
Hacker, P. (2018). Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review, 55(4), 1143–1185. https://kluwerlawonline.com/journalarticle/Common+Market+Law+Review/55.4/COLA201805
Khisamova, Z., Begishev, I., & Gaifutdinov, R. (2019). On Legal Regulation Methods of Artificial Intelligence in the World. International Journal of Innovative Technology and Exploring Engineering, 9(1), 5159–5162.
Leontiev, B. B. (2016). On the law complex discovery of intellectual nature and the justification of intellectual nature phenomenon. Legal information, 4, 10–76.
Mills, M., & Uebergang, J. (2017). Artificial intelligence in law: An overview. Precedent (Sydney, N.S.W.). Australian Lawyers Alliance. https://search.informit.org/doi/
Nemitz, P. (2018). Constitutional Democracy and Technology in the Age of Artificial Intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3234336
Petrov, V. (2019). The concept of artificial intelligence and legal responsibility for its work. Law. Journal of the Higher School of Economics, 32, 92.
Schatsky, D., Muraskin, C., & Gurumurthy, R. (2014). Demystifying artificial intelligence: What business leaders need to know about cognitive technologies. Deloitte Insights. www2.deloitte.com/insights/us/en/focus/cognitive-technologies/what-is-cognitive-technology.html
Surden, H. (2019). Artificial intelligence and law: An overview. Georgia State University Law Review, 35(4), 1307–1336.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
About this article
31 January 2022
Civilistic Doctrine, Digital Transformation, Sociocultural Transformations, Philosophy of Law, Public Authorities
Cite this article as:
Razgildiev, B. T. (2022). Artificial Intelligence: Nature And Place In The System Of Criminal Law Relations. In S. Afanasyev, A. Blinov, & N. Kovaleva (Eds.), State and Law in the Context of Modern Challenges, vol 122. European Proceedings of Social and Behavioural Sciences (pp. 503-508). European Publisher. https://doi.org/10.15405/epsbs.2022.01.80