Robotisation of Life: Ethics in View of New Challenges

January 2019

Introduction

The development of robotisation is linked to a number of factors.

  1. Given the complexity of tasks to be performed in a society, there is increasing reliance on sophisticated technological tools (for communication, transport, information processing etc.). These exceed the speed and precision of human actions and reactions, as well as memory and perception capabilities. In a complex and globalised society of increasingly interconnected actors, robotisation transcends human physical and cognitive limits in decision-making and regulation processes.
  2.  Robotisation furthers the aim of minimising production and labour costs.
  3. Robotisation reduces dangers to which workers are exposed. This is particularly the case in potentially dangerous industries, as well as policing and the military (Cfr. The Humanization of Robots and the Robotization of the Human Person. Ethical Perspectives on Lethal Autonomous Weapons Systems and Augmented Soldiers, Geneva, The Caritas in Veritate Foundation Working Papers, 2017). In an increasingly complex and technology-enhanced society, the desire to increase performance and the profitability of processes has led (at least in societies that can afford sophisticated technological means) to the gradual replacement of the human person by the machine.
    Robotisation already has the capacity to significantly help the medical sector by recognising and detecting diseases through fast, efficient and standardised means. It also makes it possible to offset handicaps (for example, exoskeletons, prostheses etc.), to administer treatments automatically and to carry out surgery with a high degree of accuracy and precision, including remotely. Despite the advantages of robotisation, it should be noted that it has developed within a culture that no longer tolerates the limits of the human person. Projects involving robot-assisted persons, or robotised (or augmented) human persons, are motivated by the desire to free humanity from biological constraints (for example, physical resistance, mental capacities, ageing etc.) so as to be master of its being and becoming. Admittedly, this falls short of the ‘trans-’ or ‘post-humanist’ utopian philosophies which permeate certain spheres of contemporary thought. Robotisation is nevertheless associated with and motivated by the idea that the human person is able to transform itself so as to escape its limited, fragile, biological condition, a condition which is considered unbearable and therefore to be overcome (D. LAMBERT, “Risques et espoirs d’un discours sur la vulnérabilité humaine” in Fragilité, dis-nous ta grandeur, Paris, Cerf, 2013, pp. 13-30).
  4. Robotisation develops in the context of the ‘anthropological crisis,’ understood to be a radical questioning of the identity and true reality of the human person. The intensification of robotisation, with the consequent redundancy or transformation of the human person, has implications for particular societies and certain population groups. Some societies cannot afford efficient robotisation, and certain population groups or classes are left behind because economic circumstances or physical or mental disabilities deny them access to these technologies. The robotisation of life must therefore be wisely and critically considered as an opportunity but not as an absolute necessity (because it is related to certain particular interests), and with a concern for those potentially left behind. It must also be noted that robotisation is, in certain sectors, driven by factors which are themselves reinforced by the robotisation they have created.

 

Terminological clarification

Digital data systems enable intelligent processes to be replicated to an ever greater degree. The term ‘artificial intelligence’ is used as a generic term for these systems. This paper’s focus is the specific processes carried out by robots. A robot is a system that usually consists of three components: (1) a sensor, which gathers information from its surroundings; (2) a processor, which processes that information; and (3) an effector, which can interact with the surrounding environment.
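The three-component definition above can be illustrated in code. The following Python fragment is a purely illustrative sketch, not drawn from any robotics standard or library; the `Robot` class and the toy thermostat example are hypothetical, chosen only to show how sensor, processor and effector combine into a sense-decide-act cycle.

```python
# Illustrative sketch of the sensor-processor-effector loop described above.
# All names here are hypothetical examples, not a real robotics API.

class Robot:
    def __init__(self, sensor, processor, effector):
        self.sensor = sensor        # (1) gathers information from surroundings
        self.processor = processor  # (2) processes that information
        self.effector = effector    # (3) interacts with the environment

    def step(self, environment):
        reading = self.sensor(environment)   # sense
        command = self.processor(reading)    # decide
        return self.effector(command)        # act


# A toy thermostat-like robot: sense a temperature, decide, act on it.
robot = Robot(
    sensor=lambda env: env["temperature"],
    processor=lambda t: "cool" if t > 25 else "idle",
    effector=lambda cmd: cmd,
)

print(robot.step({"temperature": 30}))  # -> cool
print(robot.step({"temperature": 20}))  # -> idle
```

The point of the sketch is that each component is replaceable in isolation, which is why ethical questions about "autonomy" attach to the processor (the decision step) rather than to the machinery as a whole.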

Diversity and specificity of the ethical issues

The ethical considerations that arise in the context of robotisation are relevant more generally to the relationship between science and ethics. It must be acknowledged that the development of technology provides necessary support to individuals and society in the exercise of human responsibility. To that end, technological advances must not be demonised or rejected. What is necessary is a focused ethical analysis of the impact of accelerated and advanced processes of robotisation on the individual and society.

 

Ethical Issues

Primacy of the person, recognition of the human dignity

Some contemporary scientists and philosophers claim that robots have a certain degree of autonomy in the sense that they are subjects that act. They can therefore be considered, within certain limits, as so-called “moral agents”, in that they can make choices that can be assessed as good or evil. This claim would give rise to ethical problems.
In the course of life, a human person can find themselves interacting with these robotic agents. The human-robot relationship is then no longer defined in instrumental terms, with the use of the robot strengthening human action. Instead, if human persons seek to exercise control over their environment, they need to confer power on other entities, in this case artificial entities. This requires accepting that, because of the increased autonomy and agency of such entities, human action is limited, while the range of actions carried out beyond direct human control increases. This gives rise to a paradox: the more human power over the environment increases thanks to machines, the more human beings are deprived of agency and control.
This paradox generates a sense of unease and powerlessness. The dignity and centrality of the human person is put into question. It is therefore necessary to extend the principle of good relations, which previously regulated human interaction with nature and other human beings, to include robots. 
In this regard, two steps must be taken, both of which are grounded in the idea of “creaturality”. 
First, just as human persons, in their freedom, deliberative decision-making process and autonomy, are creatures of God, so robots, despite their “autonomy”, are designed and programmed by humans. A human person and a cognitive machine have specific capacities to start processes; they can relate to and interact with each other; and, most importantly, their activity can be subject to moral judgment and assessment as to whether it is good or evil, in the sense of harmful or harmless activity. Nevertheless, the machine only acts according to its original programming by a human person. Thus, even if the machine can interact with and even assist human persons, it is not properly a moral agent, and the ultimate responsibility always lies with humans.
Second, and most importantly, what governs the relationship between humans and machines is the primacy and dignity of the human person. Although created, the human person is not only capable of relating on his or her own to other creatures (just as robots are, to a certain degree, also programmed to do), but also has the capacity to question the criteria and principles upon which to make decisions. The human person is capable of critical reflection and ethical decision-making, like Adam in the Garden of Eden (Cfr. Gn 2).
The human person is responsible for giving order and meaning to creation. Christian anthropology, itself rooted in the wisdom of the Judeo-Christian biblical tradition, articulates and develops a vision of the human person whose primary task it is to preserve and cultivate nature. This grounds an ethics which does not idealise nature in a sacral or romantic sense. It goes beyond mere preservation to practically cultivate, develop and increase creation. This dynamic sense of humanity’s role in creation supports not a conservative ethics, but rather a future-oriented one which is open to and responsible for creation as it grows and develops. This promotes an attitude toward science and technology which is fundamentally confident and welcoming of innovation. 
Moreover, it emphasises the value of a person’s freedom and non-dependence upon the technology at their disposal. This is expressed in terms of a person’s critically reflective and evaluative attitude to the use (or misuse) of technology.
The robot, at least in its present phase of development, is not capable of this. It can only follow the procedures for which it has been programmed. As a consequence, only the human person can be considered a ‘person’ in the proper sense, and in their full dignity.

 

Rights of robots

The wide and varied range of ethical challenges which arise from society’s use of robots comes to the fore in the ongoing debate as to whether robots should be accorded specific legal status and attendant rights. 
The European Parliament has recommended this in its Resolution on Civil Law Rules on Robotics (Cfr. European Parliament, Resolution on Civil Law Rules on Robotics, 2017). It proposes that the most sophisticated, autonomous robots be given the status of ‘electronic persons’, responsible for making good any damage they cause. It further recommends that ‘electronic personality’ be applied to cases where robots make autonomous decisions or otherwise interact with third parties independently.
It must be said, however, that the construct of legal status for robots is unconvincing. The human person is the foundation of, and central to, every legal order. For a natural person, legal personality derives from their existence as human. That personality implies rights and duties that are exercised within a framework which recognises, respects and promotes human dignity. Placing robots on the same level as human persons is therefore at odds with Article 6 of the Universal Declaration of Human Rights, which states that «everyone has the right to recognition everywhere as a person before the law».
Calls for the extension of legal personality to robots run contrary to, and undermine, the very concept of responsibility as it arises in the context of human rights and duties. Responsibility rooted in legal personality can only be exercised where there exists the capacity for freedom, and freedom is more than autonomy.
Legal personality is assigned to a natural person (as the natural consequence of their being human) or to a legal person (in this case, even though a fiction, legal personality presupposes the existence of a natural person or persons acting behind the fiction). Legal personality for robots collapses the boundaries between humans and machines, between the living and the inert, the human and the inhuman (Cfr. Contribution of COMECE to the public consultation of the European Parliament Civil Law Rules on Robotics, 2017).

Some contend that rules of liability could be extended to robots in a way analogous to the rules which govern liability associated with animals. This would represent a perilous shift towards the recognition of robots as belonging to the world of the living. It remains the case that existing legal frameworks which provide for natural and legal personality already have at their disposal viable legal solutions, not least provisions on defective products, as well as rules about liability for damage or injury caused by things in a person’s care.

 

Particular focus

How will the future of work change?

The fields of application for robotics are many and varied. Some ethical problems emerge in relation to particular areas of application, whilst others are basic and remain common to all. As has been noted, a field which calls for particular attention is undoubtedly the labour market and the personal and societal impact of robotisation. The development of the labour market, and the prospect of increased human redundancy, makes it a controversial topic for consideration.

The use of robots will cause profound societal change. This will be most obvious in the context of the labour market, where conditions are likely to undergo radical change. Robots will be able to extend, even replace, work previously done by human persons. This phenomenon has been described as the Fourth Industrial Revolution; it is ongoing, significantly shaping current and future employment patterns (Cfr. COMECE, Shaping the Future of Work, 2018). Studies also predict enormous change in job profiles. In order to integrate robots, the work environment requires reorganisation and restructuring, which itself generates new jobs that differ from existing employment profiles. An advantage of the use of robots in these new jobs is that it minimises human exposure to dangerous and inhumane working processes.
It must also be noted, however, that whilst robots in the work place bring with them opportunities and advantages, they also (often adversely) affect the most vulnerable groups in society, particularly young people and the less well educated. 
Robots can easily perform simple, automated sequences of work which traditionally were carried out by young workers entering the labour market, or by the unskilled. This has the potential to lead to a decrease in job security for these groups and increased polarisation of the labour market. 
The needs of contemporary society call for a renewed commitment to the shaping and regulation of the use of robots in the work place. This requires that legislators be attentive to a number of factors: the safety of the labour market has to be assured, the common good must be respected and the rights of workers need to be protected. 
The existing European legal framework states that work is a human right and that favourable working conditions must be provided. Human dignity, individual freedom and solidarity are foundational to these rights and they give rise to the obligation to shape a human-centered view of work for the future.

 

How do social justice and the common good become decisive ethical criteria?

Any ethical analysis must be conducted cognisant of both individual and collective perspectives. The moral and ethical responsibility to be exercised in the use of robotics relates not only to the primacy of the individual, respect for their dignity and the safeguarding of their free choices, but also to wider considerations of social justice.
Social justice is not solely concerned with the end-goal of the common good, but with issues of equitable distribution and fair access to the world’s resources, and here, robotics has a role. The danger with the growth and development of robotics is that already-existing social differences are being exacerbated, injustices and inequalities are increasing (especially for the most vulnerable) and the attainment of the common good is being frustrated. 
The Christian anthropological vision is one based on solidarity and itself provides a basis for minimising, even overcoming, the negative impacts of robotics, especially for the poor. The idea of the common good is not an abstract one. Rather, it takes concrete form in history in the context-sensitive perception of needs and expectations of free individuals and groups possessed of rights and duties. 
It is therefore necessary to promote and facilitate an open debate on the development of robotics which considers reflectively and critically its intentions, applications and consequences (Cfr. The European Group on Ethics in Science and New Technologies, Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems, 2018). Such a discussion calls for wide and varied participation which appropriately weighs the differing interests and responsibilities of key actors. The vital contribution of the Christian faith-based perspective to this developing public ethic should not be underestimated.

 

Conclusion

In view of the complex considerations robotics presents for humanity, simple answers are not helpful. There can be no unqualified or emphatic acceptance of these new technologies, nor can there be outright rejection of them, with all their possibilities.
The challenges of scientific and technological development call for a review of the present horizon of principles, a re-examination and re-evaluation of what were previously considered ‘settled’ norms of behaviour and practice. They cause humanity to reconsider its options and priorities in directing individual and social choices, the investment of resources, as well as present and future opportunities. 
The primacy of the human person, based on the recognition of human dignity, forms the central part of this review. A balanced respect for technological developments and a clear vision of human responsibility for the common good are essential.
It is necessary to be attentive to this developing field of research and innovation and to accompany its actors and processes in a critically reflective, constructive way which seeks to cultivate a public ethic and promote the common good. 
This necessitates more than a crude, utilitarian cost-benefit analysis of new technologies in their social, environmental and economic dimensions. It is essential to encourage the development of a humanistic culture which discerns the connections between science and technology and their anthropological, cultural and ethical aspects (the term ‘humanistic culture’ is to be shaped by principles such as the rule of law, social justice, solidarity, accountability and transparency). Only this multi-disciplinary consideration of robotics can help harness the potential of such scientific and technological innovations in ways which respect human dignity and promote the common good.