Artificial intelligence and robotics are among the most widely discussed technological trends in the world today. As their implementation accelerates across all spheres of human activity, the anticipated opportunities, achievements, and scientific breakthroughs often overshadow the question of whether using artificial intelligence technologies in a particular area is reasonable and expedient from a legal and ethical point of view. The article discusses the main directions in which artificial intelligence technologies are spreading and the ethical consequences and moral issues that arise as a result, at both the state and the organizational level. It examines the main labor-market trends that emerge as workplaces are robotized and intelligent robots are introduced into production processes. The authors argue for the priority of ethics and human safety in the design and implementation of AI (artificial intelligence) systems. In discussing the ethical problems of implementing artificial intelligence in organizations, the emphasis is placed on using these technologies not for automating and improving the efficiency of direct management functions, but for organizing the work of personnel. On this basis, the article concludes with recommendations for developing ethical principles adapted to the design and use of AI systems.
Keywords: artificial intelligence, ethics, norms, morality, robotics
Recently, employers and HR specialists have been paying ever more attention to implementing technologies based on artificial intelligence across various areas of the economy. AI ethics is the field concerned with the ethical issues raised by AI (Suen et al., 2020). As AI develops, the problems associated with its use grow as well. Although the concept of "machine ethics" was proposed around 2006, the ethics of AI is still in its infancy.
Modern innovative technologies based on AI have achieved significant results in areas such as facial recognition, medical examination, and the use of drones. On the social side, companies use surveys and feedback tools to gather staff opinions, along with new tools that track emails and the communication networks between people inside and outside the company. Special programs collect data on travel, location, and mobility, and organizations in particular and the state in general now hold data on people's well-being, physical fitness and health, income and expenses, and other activities. Against this background, the results of AI ethics studies (Siau & Wang, 2020) are of great importance today. To this we add the necessary study of the legal frameworks governing the use of AI, and of the transformation of corporate culture prompted by the emergence of AI technologies (Sinha et al., 2020). Above all, however, we are interested in works devoted to the influence of AI on human activity (Vinichenko et al., 2021). Researchers note that the development and spread of AI largely depend on how ready society is to consume this product (Symitsi et al., 2021), and also warn about the possible negative impact of AI on employment and health, and about the risks of job losses, bias, desocialization, breaches of the confidentiality of personal information, and more (Sadovaya, 2018).
In this article, we examine the potential ethical dilemmas and moral issues that the spread of AI technologies can cause. At the level of society, the article considers the potential impact of artificial intelligence on the labor market, as well as the consequences of introducing AI in the workplace. In addition, we intend to show the need to broaden the range of AI ethics issues under study, since the tools and methods used in AI technologies can be misused by authorities and by others who have access to them; we therefore consider it necessary to establish principles for their use. Much attention should be paid to the use of AI technologies in managing people, not only at the level of society but also at the level of an individual company. Today, AI technologies are actively used for attracting staff and managing employee engagement and internal mobility, where objectivity is no less important for making justified and effective management decisions. As a logical conclusion of this work, we propose directions for developing new standards and rules of AI ethics.
In defining the main questions of our research, we came to ambiguous conclusions. Proponents of the spread of AI emphasize the importance of implementing it in all spheres of human life and society, since its spread across various areas promises huge benefits for economic growth, social development, and the well-being and security of people. On the other hand, the low explainability of AI-based technology, along with data distortion, data security, data privacy, and ethical issues, poses significant risks to users, developers, humanity, and society. Undoubtedly, AI already allows many production and management tasks to be solved far more efficiently. However, the development of these technologies entails a number of legal, moral, and ethical problems. From the point of view of the average consumer, the issue of AI ethics is most often related to the confidentiality and security of personal data. The dissemination of personal information, the transfer of personal data to third parties, the misuse of data about individuals, and similar issues are most often discussed today in the context of social networks and search sites. Large IT companies such as Google, Facebook, Yandex, Apple, and Microsoft come under public scrutiny. Questions are also frequently raised about the legality of facial recognition systems and the security of data obtained from surveillance cameras. Criticism is likewise directed at sales organizations for their use of data on customer preferences, search queries, and the like.
In addition, the large-scale problems arising from the use of AI technologies include possible real threats to people's lives, increased bias and discrimination, difficulties in determining who is responsible for decisions made with AI technologies, and risks of desocialization for both individuals and groups. From the standpoint of personnel management, the ethics of AI is also most often considered in terms of the ethics and legality of processing employees' personal data and protecting their privacy rights.
Purpose of the Study
The ever-growing demands of modern people for faster and more efficient decision-making and information transfer, for comfort and convenience in professional and personal life, and for speed of movement have spawned a number of innovations in the field of AI. The main purpose of this work is to study the theoretical aspects of the discussion around AI technologies and their impact on society now and in the future, together with the practical solutions created to date in world experience. Our goal is not so much to address the excitement over the benefits advertised by developers and designers of AI technologies as to draw readers' attention to the persistent calls to clarify guidelines for AI ethics, to encourage the adoption of ethical standards, and to adopt legally binding instruments for regulating the use of AI technologies.
Ethical standards for AI, where they exist at all, are still largely self-regulating, and there is growing demand for greater government oversight of this issue. Another key issue in the application of AI technologies, it seems to us, is understanding what happens when AI replaces or complements human free will. This question lies at the base of understanding ethics. There is therefore a need to develop recommendations aimed at creating an ethical, human-oriented AI, designed and developed in accordance with the values and ethical principles of the society or community it affects. Ethics, in our opinion, should be built into the design and development process from the very beginning of an AI's creation.
To determine the range of the main topical issues of AI ethics, we conducted a representative review of scientific and practical literature and Internet sources. This review made it possible to form a clear vision of the problems associated with the topic of AI ethics under consideration. In synthesizing the AI research question with this method, we found repeated confirmation of the relevance of further research and of the development of new values, principles, and methods to guide the moral behavior of AI when it is implemented in practice. We must monitor and verify the activities of any particular AI technology to fully understand its behavior and to ensure that it does not violate our (human) moral compass. The analysis and integration of the most significant publications, such as (Rubeis, 2020; Simonova et al., 2020), allowed us to conclude that ethics should be embedded in the very idea of why a certain AI-equipped technology is being developed, since no innovation in the professional and social spheres today is more transformative and disruptive than the explosive impact of AI, which is capable of changing everything we do. A study of the precedents presented in the sources we reviewed showed that the following categories of risk are particularly relevant and require mandatory ethical regulation when AI solutions are used for the social good:
- safe use and security;
- "explainability" (the ability to identify a function or set of data that leads to a particular decision or prediction).
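To make the notion of "explainability" concrete, the sketch below is a purely illustrative toy of our own (the model, its weights, and the feature names are assumptions, not any real system): it identifies which input feature drives a score by zeroing each feature in turn, a crude leave-one-out attribution.

```python
# Toy "explainability" sketch: attribute a model's score to its inputs
# by zeroing each feature and measuring the change. Real systems use
# richer attribution methods; this only illustrates the idea.

def model(features: dict) -> float:
    # Stand-in scoring model; the weights are purely illustrative.
    weights = {"income": 0.2, "debt": -0.5, "age": 0.1}
    return sum(weights[k] * v for k, v in features.items())

def attribution(features: dict) -> dict:
    """Contribution of each feature: score drop when it is zeroed out."""
    base = model(features)
    return {k: base - model({**features, k: 0.0}) for k in features}

applicant = {"income": 3.0, "debt": 4.0, "age": 1.0}
contributions = attribution(applicant)
# "debt" has the largest magnitude, so it drives this decision.
print(contributions)
```

The output names the feature whose removal changes the score most, which is exactly the "function or set of data that leads to a particular decision" in the definition above.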
We also draw on statistical databases relevant to the social and behavioral aspects of AI technology adoption. In writing the paper, we therefore also used the results of research carried out by the All-Russian Public Opinion Research Center through interviews within the project "Digitalization and Artificial Intelligence" from 2017 to the present. The survey aims to identify the opinions of the population and of employers regarding the use and dissemination of technologies based on AI and robotics.
The results of the study allowed us to divide the moral and ethical issues of AI technology use into two major areas: the ethics of AI technology at the scale of society, and at the scale of individual companies. Among the main global problems is the development of Lethal Autonomous Weapons Systems (LAWS), drones for contract killings in particular, or indeed a renewed arms race based on the development of AI military equipment and cyber weapons. At the beginning of 2019, the United States, China, India, France, and the United Kingdom announced plans to expand the introduction of AI into military developments (Symitsi et al., 2021).
However, not only military technology using AI can pose a threat to life; domestic and industrial technologies can too, since AI has neither self-awareness nor what is called "empathy", the basis of ethics. The standards of a person's moral compass depend heavily on upbringing and environment, moral principles, and religious beliefs, but most people have such a "compass". It is also what companies build their ethics and compliance on: what is right, what is wrong, and how rules are set on that basis. AI lacks such a "compass", a moral guide; in fact, it has no guide at all. AI can separate right from wrong only on the basis of data carrying a "right" label and data carrying a "wrong" label. The only moral guideline that exists when it comes to AI is its developer, who sets the bar for what is right and what is wrong. Today, the prospect of putting driverless cars on the roads is widely discussed. They are indeed one of the real threats to people's lives, since in controversial emergencies it is quite difficult to predict the machine's behavior. In this connection, the example of a trolley riding on rails under AI control is now often given. Five people are tied to the rails in the trolley's path. The trolley can turn onto another track, where a single person lies in the same position. If the trolley were operated by a human, the decision would be made one way or the other on the basis of a variety of internal and external factors and conditions: membership in a social group, an established system of internal attitudes, morality, and so on would determine the person's choice, in contrast to AI, which will choose according to its embedded algorithm. The Massachusetts Institute of Technology even invites everyone to construct their own version of this scenario on a website dedicated to the question.
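The contrast drawn above between human judgment and an embedded algorithm can be shown with a deliberately simple sketch (a hypothetical utilitarian rule of our own, not any real autopilot logic): the machine's entire "ethics" is the objective function its developer chose.

```python
# Hypothetical sketch: a trolley controller whose only "moral compass"
# is an embedded utilitarian rule, i.e. minimise the casualty count.

def choose_track(casualties_per_track: dict) -> str:
    """Return the track that minimises casualties, per the built-in rule."""
    return min(casualties_per_track, key=casualties_per_track.get)

# Five people tied to the main track, one person on the siding:
print(choose_track({"main": 5, "siding": 1}))  # -> siding
```

A human operator might weigh factors this algorithm cannot represent; the controller above will always give the same answer, because its developer has already fixed, in code, what counts as "right".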
In the case of driverless cars, actual examples can be given from the practice of other countries. Mercedes, for instance, has declared the priority of passenger safety under any circumstances. By contrast, the Chinese social rating system, based on data about citizens' law-abidance, suggests that in a controversial emergency, manufacturers of driverless cars could adopt a principle of assigning fault to the person with the lower social rating. The next ethical problem, associated with collecting data from users, buyers, and consumers, goes far beyond simply processing information to improve products, advertising campaigns, and so on: such data can be used directly to manipulate people's behavior and mislead them. The main ethical question that arises here is privacy. A wide range of stakeholders own, control, collect, or generate data that can be deployed for AI technologies. Governments are among the most important collectors of information, which can include data on taxes, health, and education. Huge amounts of data are also collected by private companies, including satellite operators, telecommunications firms, utilities, and technology companies that run digital platforms, as well as social media sites and search operations. As an example confirming the active transfer of confidential personal information over the Internet, we cite the data of the All-Russian Public Opinion Research Center survey (VCIOM, 2020), which shows that about 60% of Russian citizens regularly use information technology to make bank and utility payments, buy tickets for transport and entertainment events, submit heat and electricity consumption readings, order taxis, and register for various events or with medical institutions. These datasets contain highly confidential personal information that cannot be shared without anonymization.
But private operators can also commercialize their datasets, which may therefore not be freely available for socially useful purposes. Overcoming this accessibility challenge will likely require a global call to action to record data and make it more accessible to well-defined public initiatives. Data collectors and generators should be encouraged, and possibly empowered, to open up access to subsets of their data when it can be used in the public interest, through the creation and adoption of international charters.
Of course, let us not forget humanity's most basic fear, which is theoretically quite well founded: the end of the era of human domination on Earth may be near if AI systems escape human control. Such an assumption is, at the moment, still closer to fiction than to reality, but it remains a possible scenario, given that even today some developers cannot fully explain the principles on which the AI they created works. This applies, for example, to modern social networks. The faster and more actively machine learning technologies develop, the stricter state control and the more proactive legislative regulation should become. We do not mean machines taking people hostage, but the spread of deliberately false content, and the manipulation and incitement of people, for example to genocide, is quite possible. The moral and ethical issues described above affect society as a whole and are national problems. But at the level of specific organizations, the ethical issues of using AI are no less relevant.
Contrary to many expectations, the use of AI can increase bias and discrimination rather than combat them: intelligent systems are trained on data already available and entered, in one form or another, by people, so any discrimination that preceded machine learning will only be amplified in "machine thinking". Suppose, for example, that a company uses Pymetrics, HireVue, or another advanced evaluation technology to select job candidates. These vendors work hard to remove racial, gender, and generational biases from their tools. Nevertheless, in implementing such programs there is a risk of repeating a mistake similar to Amazon's, which inadvertently created a gender-biased hiring system of its own. The company's human resources specialists found a big problem: their new recruiting machine "did not approve" of female candidates. Since 2014, the team had been creating computer programs to check applicants' resumes in order to automate the search for the best talent. Automation has been key to Amazon's e-commerce dominance, whether inside warehouses or in pricing decisions. The company's experimental hiring tool used AI to give job candidates ratings from one to five stars, much as shoppers rate products on Amazon.
Recruitment specialists wanted a selection tool that would pick the top five out of a hundred resumes in a few seconds, and the company would hire them. But a year later, the company realized that its new system did not evaluate candidates for software development and other technical positions in a gender-neutral way. This turned out to be because Amazon's computer models were trained to assess candidates by finding patterns in resumes submitted to the company over the previous ten-year period. Most of those resumes came from men, a real reflection of male predominance in the technology industry. Amazon edited the programs to make them neutral to these specific signals, but this did not guarantee that the AI would not find other ways of sorting candidates that might prove discriminatory. The company eventually disbanded the team at the beginning of the following year, because HR managers had lost hope in the project. Amazon recruiters looked at the recommendations generated by the tool when searching for new employees, but never relied solely on its ratings.
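The mechanism behind such failures can be shown with a toy scorer (entirely our own illustration, not Amazon's actual system): "trained" on nothing more than word counts from historically skewed hiring outcomes, it penalizes an otherwise identical resume for containing a gender-correlated token.

```python
# Toy illustration of bias amplification: a scorer "trained" on
# historical hiring outcomes simply learns word frequencies among past
# hires. If past hires were mostly men, tokens correlated with women
# (e.g. "women's") end up down-weighted, even with no gender field.

from collections import Counter

past_hires = [          # resumes of candidates hired historically
    "java developer football club",
    "python engineer chess club",
    "java engineer football",
]
past_rejects = [        # resumes of candidates rejected historically
    "python developer women's chess club",
    "java engineer women's volleyball",
]

hired = Counter(w for r in past_hires for w in r.split())
rejected = Counter(w for r in past_rejects for w in r.split())

def score(resume: str) -> int:
    """Crude score: hired-word counts minus rejected-word counts."""
    return sum(hired[w] - rejected[w] for w in resume.split())

print(score("python engineer chess club"))          # no gendered token
print(score("python engineer women's chess club"))  # lower: "women's" penalised
```

The second resume differs from the first only by one word, yet scores lower; the "discrimination" was never programmed, it was inherited from the training data, which is exactly the pattern described above.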
Another important moral and ethical problem of personnel management can be the excessive use of AI technologies and the loss of normal human contact, a problem similar to the desocialization issue relevant in society at large. In addition, excessive digitalization and robotization of communication processes can undermine team spirit and the sense of social community in organizations, which in certain conditions is a negative factor reducing the efficiency of production processes. Overuse of AI technologies and avoidance of personal communication within the team can also reduce employees' trust in each other, in the company, and in management (Siau & Wang, 2020). This, in turn, affects another moral and ethical problem of using AI in personnel management: its impact on the corporate culture formed in the organization.
Rapid progress in AI and automation technologies may lead to another significant problem from the standpoint of AI ethics: a major transformation of labor markets. While AI and automation can increase the productivity of some workers, they can also replace the work performed by others and will, with high probability, transform almost all professions to varying degrees. Since AI is still in its initial state, empirical data on the displacement of humans from the workplace by robots are not yet available, so it is not yet possible to judge reliably the potential impact of AI on employment. Our empirical review has therefore focused on automation in a broad sense and its impact on employment, and some consequences of automation stand out. The coming threat is highlighted by the results of a study by the McKinsey Global Institute, announced in its annual report on automation and based on research into the labor markets of developed countries, incomes, skills, and an expanding range of work models, including the gig economy, as well as the potential impact of digitalization, automation, robotics, and AI on the global economy (McKinsey, 2017). In this report, McKinsey suggests that by 2030, intelligent agents and robots could replace up to 30 percent of the world's current human labor. McKinsey estimates that, depending on the implementation scenario, automation will displace between 400 and 800 million jobs by 2030, requiring as many as 375 million people to switch to other job categories. Such a shift cannot fail to cause significant concern, especially for vulnerable countries and populations.
In addition, we formulate the main potential labor-market trends noted in connection with the spread of AI:
- the disappearance of routine mechanical production jobs as a result of the introduction of AI technologies (Chang, 2020);
- a widening wage gap driven by increasing returns on education (Hardcastle & Ogbogu, 2020);
- polarization of the labor market, as the development of AI technologies increases the number of highly skilled and low-skilled jobs while displacing medium-skilled jobs (Vinichenko et al., 2021).
Another component of AI ethics, alongside AI technologies, is roboethics, which studies the ethical problems that arise in designing and operating intelligent robots in real life. From the point of view of the content of work, AI cannot be considered separately from the robotization of workplaces, since for the employee-user the content of labor activity becomes more complex precisely because part of their processes is robotized, or because they must interact with physical robots. Organizations currently implementing innovative technologies approach the issue exclusively from the standpoint of efficiency and profitability: processes that require large amounts of resources but bring qualitatively small returns are automated, freeing experts to work on more important and strategic tasks. However, little attention is paid to the quality of the working life of personnel under widespread robotization. This poses some interesting dilemmas for HR professionals as to when and how task automation is most appropriate and effective:
- when repetitive tasks have become burdensome and are no longer profitable for the organization?
- when personnel weighed down by operational issues need to be freed for more important strategic tasks?
- or when the risks of business processes subject to human factors and errors need to be reduced?
A further ethical question confronts management in the digitization of business processes, namely:
- if automation brings people greater well-being and a sense of security, is that reason enough for it even at high cost or with a minimal increase in the company's productivity or profitability? Can commercial, time, or financial results give way to the well-being and quality of working life of the staff?
Unfortunately, many organizations have not yet reached the stage at which employee outcomes and satisfaction rank above financial or operational results. Although many employees today face tasks involving both physical and psychological strain, whose automation would most likely ease many symptoms of staff stress, organizations take no action and introduce no innovative technologies because they see no financial benefit.
The development of AI ethics should focus on equal human rights and opportunities, from access to appropriate education and job security to the confidentiality of information and the safety of the person.
Studying the discussion around AI ethics allowed us to formulate the main principles of AI technology development most often named by experts in the field: transparency, fairness and honesty, non-infringement of rights, responsibility, and confidentiality. Generalizing from practical experience and the publications devoted to this issue, it is possible to form a list of requirements imposed on AI technologies if they are to be considered ethical and reliable:
- AI technologies should be aimed at empowering people, but appropriate tools should be provided to control their use;
- AI technologies must be stable and secure, to minimize unintended potential harm;
- the confidentiality and integrity of data must be ensured, along with legal access to them;
- unfair bias should be avoided in developing AI, as it can have numerous negative consequences, entrenching prejudice and exacerbating discrimination against particular social groups;
- AI technologies should be available to all interested consumers, regardless of race, geographical location, or disability;
- AI technologies should ensure social and environmental well-being for all people, including future generations.
Our study assesses the views of domestic and foreign experts on the content and current level of development of AI ethics. A limitation of the current study is the lack of generalized empirical data, partly mitigated by the inclusion of practical information about the potential and scope of the ethics of AI technologies. In our opinion, the issues of AI ethics should form the basis for restructuring social mechanisms to cover a number of new scenarios and situations, because labor-market and education policies that are not built on ethical foundations reduce the positive impact of AI and robotics on employment in particular and on the global economy as a whole. If we want to avoid the unintended negative consequences and risks arising from the introduction of AI into society, then the market mechanism of the labor market and its regulation, corporate culture and company management systems, and the content of labor contracts must all be reviewed for the coming era of automation and robotization of work, on the basis of the ethical, social, and legal aspects of AI systems.
According to forecasts, by 2025 more than 60% of the largest companies in production, health care, logistics, agriculture, and electricity will introduce the position of director of robotics, whose main tasks will be developing a strategy for automating production using robots and easing staff concerns about possible job losses and changes in the quality and content of their work (Chang, 2020).
For effective work and a well-established production process, such specialists, together with HR services, should develop principles and norms of human-robot interaction, including:
- ethical standards and codes of conduct between humans and robots;
- definition of policies and procedures for modifying business processes in the light of the introduction of AI and robotization of production;
- principles and programs for adapting employees to the changing robotic work environment;
- identification and resolution of problems related to threats to the physical and mental health and well-being of employees, as well as their human dignity;
- principles of management and interaction with robots;
- ethical design, rules for interaction and adaptation of new robots;
- management structures to address issues of legality, social impact, transparency, trust, predictability, confidentiality, and the withdrawal of robots from service;
- reorganization of decision-making processes and consolidation of human authority over robots;
- regulation of material responsibility when working with robotic systems;
- development of new standards of employee behavior in relation to AI and robots;
- determining the standards of conduct of the employer in relation to employees, as well as the legal responsibility of the company to the staff;
- creating a corporate culture that promotes innovation and technological transformation and prevents staff resistance to robotization, etc.
In the world of AI and algorithms, HR professionals will have to make many decisions with a long-term impact on the quality and type of work performed in organizations. Unless the HR service is given a significant role in these processes, society as a whole risks a situation in which all decisions on digitalizing and automating workplaces depend only on financial results, which cannot have a positive effect on future jobs or the overall quality of working life. In modern conditions of widespread digitalization and robotization, therefore, the HR specialist becomes the guardian of human values, whose defining role is to uphold moral norms and principles in the new corporate culture shaped by innovation. In conclusion, we note once again that AI has enormous potential, and its responsible implementation depends only on us.
- Chang, K. (2020). Artificial intelligence in personnel management: The development of APM model. The Bottom Line, 33(4), 377-388.
- Hardcastle, L., & Ogbogu, U. (2020). Virtual care: Enhancing access or harming care? Healthcare Management Forum, 33(6), 288-292.
- McKinsey (2017). Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and wages. https://www.mckinsey.com/~/media/mckinsey/industries/ public%20and%20social%20sector/our%20insights/what%20the%20future%20of%20work%20will%20mean%20for%20jobs%20skills%20and%20wages/mgi%20jobs%20lost-jobs%20gained_report_december%202017.pdf
- Rubeis, G. (2020). The disruptive power of artificial intelligence. Ethical aspects of gerontechnology in elderly care. Archives of Gerontology and Geriatrics, 91, 104186.
- Sadovaya, E. S. (2018). Digital economy and the new labor market paradigm. World Economy and International Relations, 62(12), 35-45.
- Siau, K., & Wang, W. (2020). Artificial intelligence (AI) ethics: Ethics of AI and ethical AI. Journal of Database Management, 31(2), 74-87.
- Simonova, M., Lyachenkov, Y., & Kravchenko, A. (2020). HR innovation risk assessment. In S. Kudriavtcev & V. Murgul (Eds.), Key Trends in Transportation Innovation, E3S Web of Conferences, 157 (04024). EDP Sciences.
- Sinha, N., Singh, P., Gupta, M., & Singh, P. (2020). Robotics at workplace: An integrated Twitter analytics – SEM based approach for behavioral intention to accept. International Journal of Information Management, 55, 102210.
- Suen, H. Y., Hung, K. E., & Lin, C. L. (2020). Intelligent video interview agent used to predict communication skill and perceived personality traits. Human-Centric Computing and Information Sciences, 10, 3.
- Symitsi, E., Stamolampros, P., Daskalakis, G., & Korfiatis, N. (2021). The informational value of employee online reviews. European Journal of Operational Research, 288, 605-619.
- VCIOM (2020). Extended data collection. To the thematic issue "Digitalization and artificial intelligence" (#3). https://wciom.ru/fileadmin/file/nauka/podborka/rasshirennaya_podborka_ dannyh_wciom_102020.pdf
- Vinichenko, M. V., Narrainen, G. S., Melnichuk, A. V., & Chalid, P. (2021). The influence of artificial intelligence on human activities. Frontier Information Technology and Systems Research in Cooperative Economics, Studies in Systems, Decision and Control, 316, 561-570.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
About this article
30 April 2021
Socio-economic development, digital economy, management, public administration
Cite this article as:
Leonov, V. A., Kashtanova, E. V., & Lobacheva, A. S. (2021). Ethical Aspects Of Artificial Intelligence Use In Social Spheres And Management Environment. In S. I. Ashmarina, V. V. Mantulenko, M. I. Inozemtsev, & E. L. Sidorenko (Eds.), Global Challenges and Prospects of The Modern Economic Development, vol 106. European Proceedings of Social and Behavioural Sciences (pp. 989-998). European Publisher. https://doi.org/10.15405/epsbs.2021.04.02.118