Article Review
A trustworthy AI for a human governance
Giovanni Greco, Accademia Alfonsiana, Università Lateranense, Roma
Luciano Floridi, Etica dell’intelligenza Artificiale. Sviluppi, opportunità, sfide, Milano: Raffaello Cortina, 2022, pp. 384; ISBN: 978-8832854091; € 24,70 (Ppbk.)
Luciano Floridi (Ed.), Ethics, Governance, and Policies in Artificial Intelligence, Cham: Springer, 2021, pp. 406; ISBN: 978-3030819064; $ 159,99 (Hdbk.)
Beena Ammanath, Trustworthy AI. A business guide for navigating trust and ethics in AI, Hoboken, New Jersey: Wiley, pp. 224; ISBN: 978-1-119-86792-0; $ 49,95 (Hdbk.)
In this article review I consider three recent publications on AI and its ethical questions. First of all, I analyse Luciano Floridi’s book, “Etica dell’intelligenza Artificiale. Sviluppi, opportunità, sfide” (“Scienza e fede” series, 2022 edition), which deals with the epochal transformation brought about by the introduction of AI into all areas of the world today: education, commerce, industry, travel, entertainment, health, politics, social relations.
Some chapters of the previous book are derived from the next publication, Volume 144 of the “Philosophical Studies Series”, edited by Luciano Floridi himself (Springer, 2021). The Series aims to provide a forum for the best current research in contemporary philosophy broadly conceived, with its methodologies and applications. The publication is a collection of 22 articles, most of them written by the editor, Floridi; the title “Ethics, Governance, and Policies in Artificial Intelligence” suggests that AI can play an important role in all fields, but that we need increasingly smarter ways of processing immense quantities of data sustainably, efficiently, and trustworthily. AI must be treated as a normal technology; it is neither a miracle nor a plague, but “one of the many solutions that human ingenuity has managed to devise” (Preface). The ethical debate is and will always remain a human question, a duty, and a crucial point. The volume collects some of the most significant outcomes of the research on the ethics of AI conducted by members of the Digital Ethics Lab (DELab) at the University of Oxford. The hope is that this research will help establish a robust foundation for further studies.
Finally, I consider the book “Trustworthy AI. A business guide for navigating trust and ethics in AI”, written by Beena Ammanath. The author is a global thought leader in AI ethics, an award-winning senior technology executive, and extensively experienced in AI and digital transformation across a variety of industries.
The life of modern man is becoming intrinsically linked to technologies, services and, especially, digital products. Because of its pervasiveness, this transformation brings with it many doubts, concerns and prejudices (or biases); but it is a revolution still in its first stages of development, so we can and must guide it ethically, towards true human development. The contemporary reader is asked to understand what we are talking about: to understand AI, its nature and its challenges. Luciano Floridi is one of the most authoritative voices of contemporary philosophy, full professor of Philosophy and Ethics of Information at the University of Oxford and of Sociology of Culture and Communication at the Alma Mater Studiorum University of Bologna. He tackles the issues raised by modern technologies from a philosophical perspective, offering a necessary contribution of ideas to the great collective effort that is ever more necessary. The first edition of his book “Etica dell’intelligenza Artificiale. Sviluppi, opportunità, sfide” was preceded by other notable texts: “The Fourth Revolution” (2017, winner of the Walter J. Ong Award for Career Achievement in Scholarship), “Thinking about the infosphere” (2020, winner of the Udine Philosophy Award) and “The green and the blue” (2020).
Etica (384 pages) has a very simple structure: a Preface, 14 chapters, Acknowledgments and Bibliography. The chapters are collected into two sections: the first, comprising the first three chapters, explains what AI is, and the second aims to give an assessment of the new technologies. Giving the first indications of the purpose of the book, Floridi emphasizes that modern man is called to give shape to the new technologies, because they will then shape the man of the future (p. 12). Man must therefore understand and responsibly guide technological development: the subject of Floridi’s book is the development of the ethics of AI. In the first three chapters, Floridi offers an interpretation of the past, present and future of AI, a real philosophical introduction whose main thesis is that the development of AI derives from an unprecedented divorce between intelligence and the ability to act. From these premises, the second part offers a theoretical analysis of the consequences that modern man must evaluate.
Already in the Preface it is possible to appreciate the programmatic approach to the issue, which the author addresses and specifies with extreme clarity. This comes from twenty years of studies, works and insights, acknowledged at the end of the book in many pages of Acknowledgments. All the chapters and their various parts are closely related, with internal references, sometimes perhaps excessive, that nonetheless indicate the great organic unity of the work; the author himself notes that the chapters may be read in a different order from the one proposed (and he suggests some possible variations).
The author states that the book is written in “terms of philosophical roots […] from a post-analytic-continental perspective […] in the tradition that links pragmatism, particularly Charles Sanders Peirce, with the philosophy of technology, especially Herbert Simon” (p. 15). This “conceptual design” aims to understand the world and improve it; it looks at what is morally good and right as the core of philosophical reflection. To access Floridi’s great project it is necessary to have a certain “university-level knowledge of philosophy, a little patience and time, but also an open mind” (p. 16). Floridi argues against a certain skepticism about the possibility of having such a broad perspective on AI issues. In this, Floridi sees himself as an “explorer rather than a colonizer” (p. 17). The author proposes a book that neither investigates the limits of technology nor considers technology the solution to every human problem. The work seeks the roots of some of the digital problems of our time by tackling “a new form of action, its nature, its scope and its challenges and how to exploit it for the benefit of humanity and the environment” (p. 18).
In the first chapter, Floridi tackles the origins of AI from a conceptual point of view, analyzing the main transformations it has generated. The author puts forward the hypothesis that AI systems are more a new form of action than a new form of intelligence. The digital has the ability to generate divisions, for example between personal identity and subject, or between localization and presence (in the realities of social media). The digital, therefore, “cuts and pastes our realities both ontologically and epistemologically” (p. 26). The digital is seen not as a reality that strengthens, but as one that radically transforms, all of reality, creating new environments to inhabit and new forms of action. The author dares to speak of “re-ontologization to refer to a radical form of re-engineering” (p. 31), owing to its overwhelming ability to transform the intrinsic nature of reality, its ontology. Consequently, it shapes the whole modern mentality and brings with it consequences to be carefully considered.
Still in the first chapter, the author considers AI as an “increasing resource of the ability to act” (p. 53), starting from two assumptions: the divorce of the ability to solve problems and perform tasks from the need to be intelligent; and the transformation of the human environment into an environment suitable for AI, which makes such a divorce possible and effective (what the author calls “wrapping”).
In the second chapter, some developments of these factors are analyzed, always from a conceptual point of view. “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence” and some questions posed by Turing are examined. To date, there is still no univocal definition of AI, quite simply because no clear and shared definition of (human) intelligence is known. Floridi investigates the different attempts to define elements such as machine, brain and computer, evaluating the two approaches to AI, the engineering and the cognitive. The author makes bold claims, such as the one according to which “current machines have the intelligence of a toaster” (p. 52), since they do not aim to reproduce intelligence but to do without it! In a world “wrapped” by the digital, the basic assumption is that artificial elements too can be agents, so it is good to ensure that human beings never become a mere means of action in the digital world (the infosphere), in accordance with the Kantian imperative. Man must be a conscious guide in his “constant collaboration” with the super-efficient AI partner.
In the third chapter, Floridi evaluates the positive and negative signs present in society today, in the face of fears and expectations about AI, also considered by government bodies (see the 2021 Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on AI). He then offers, with great expertise, a very interesting analysis of AI developments over the last 20 years (from DeepMind and AlphaZero to the AI systems used in some medical applications). The author explains Machine Learning and the issues relating to the quality and quantity of data, training, the historicity and authenticity of data, and the development of synthetic data. He addresses the question of whether all of reality can really be described with data (and expresses his doubts about it). According to Floridi, the future of mankind will be populated not with humanoid robots but rather with machines that perform tasks in their own way (such as toasters and washing machines).
In the fourth chapter Floridi provides a unified framework of ethical principles for AI, obtained from six of the most important documents published from 2017 onwards (such as the Asilomar Principles, from the Future of Life Institute; the Montreal Declaration for Responsible AI, from the University of Montreal; or the Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems, from the European Group on Ethics in Science and New Technologies of the European Commission, March 2018). A framework of five values is presented: beneficence (promotion of well-being, protection of dignity and support for the planet), non-maleficence (protection of privacy, security), autonomy (balance between artificial capacity and human capacity), justice (promotion of prosperity and solidarity, limitation of iniquity) and, lastly, the value of explicability (understood as a principle that includes the epistemological sense of intelligibility and the ethical sense of accountability).
The fifth chapter presents some of the main risks of unethical behavior (ethics shopping, bluewashing, lobbying, dumping and shirking). It is necessary to create awareness of the nature of these risks and to strengthen a preventive approach. A conscious governance of AI is necessary to have a human project for the digital age and to give a socio-political direction to the development of the technological world (chapter six). The author underlines, with a very pragmatic vision, that compliance with existing rules (what hard ethics imposes on us) is not enough for truly ethical developments; self-regulation (soft ethics) oriented towards a “good” development is also necessary. Floridi proposes ethics as a strategy for the future, guiding the evaluation of the impact of the digital in the circularity of discussing, planning and shaping the future.
Chapter seven discusses the ethics of algorithms, considering epistemic and regulatory issues; finally, various questions are analyzed concerning (unwanted) bias, discrimination and traceability. The author’s great expertise furnishes many examples of significant events in this field.
Chapters eight and nine develop, respectively, the ethically negative and positive uses of AI; the necessary balance between the advantages of innovation and the risk of potential damage or malfunctions must always be evaluated with care. Floridi addresses the “realistic and plausible concerns surrounding AI crimes” (p. 180), developing the concepts of emergency plans and actions, accountability, monitoring, and (psychological) influence. He tries to identify the main threats (financial and psycho-physical harms, crimes against the person, theft, fraud) and possible solutions. In his ethical proposal the author highlights a use of AI for the social good and identifies seven essential ethical factors for future ethical initiatives related to AI.
The tenth chapter opens by addressing the ancestral fear of the unknown, which with AI becomes fear of the new technology, the next great enemy of man. Floridi reiterates his idea that new technologies do not really have intelligence in themselves; they are just computers that do what they are told to do, and he thus unmasks and dispels the dogmas of a baseless belief. According to the author, the diatribes between the adherents of the Singularity (the point of no return, at which machines will dominate man) and those who have no faith in AI are useless, if not downright harmful, because they waste energy and do not really address the subject.
In the eleventh chapter, Floridi offers modern man some recommendations that are “constructive and concrete, to evaluate, develop, encourage and support a good AI” (p. 279). Guiding all this must involve political actors, civil society, individuals and various organizations, all consciously committed to human self-realization, to improving human action and to promoting responsibility.
In the twelfth chapter, Floridi briefly analyzes the environmental impact of AI, in its positive and negative aspects, with the aim of providing policy recommendations to undertake a greener and climate-friendly development, in line with the values proposed by the European Union. The thirteenth chapter identifies the limits related to the lack of regulatory analysis and the lack of empirical data and encourages the use of the objectives proposed by the United Nations.
In his conclusions, Floridi tries to see beyond the “divorce” operated within AI between acting and being intelligent, and proposes the passage from the ethics of artificial action to the politics of social actions, for a reliable science and a robust technology that respect man and the environment. Floridi has written a philosophical work that provides modern man with a conceptual design to bridge the man/machine gap and to put man back at the center of the infosphere and of his world. Man, as a “beautiful error of nature” (p. 336), must recognize and preserve within himself the core values of his own existence.
The volume “Ethics, Governance, and Policies in Artificial Intelligence” considers the wide range of initiatives launched by many organizations to establish ethical principles for the adoption of socially beneficial AI. The sheer volume of proposed principles threatens to become overwhelming and confusing, generating two potential problems: unnecessary repetition and redundancy, with the attendant confusion and ambiguity, and the creation of a ‘market for principles’ where stakeholders may be tempted to ‘shop’ for the most appealing ones. Floridi analyses several of the highest-profile sets of ethical principles for AI and proposes a comparative analysis of the different documents. Overall, he finds a degree of coherence and overlap with a scheme of five principles (beneficence, non-maleficence, autonomy, justice, explicability) that is impressive and reassuring (the synoptic view of the documents is the same as that proposed in the fourth chapter of “Etica dell’Intelligenza Artificiale”).
The article “An Ethical Framework for a Good AI Society” reports the findings of the AI4People Scientific Committee, a year-long initiative designed to lay the foundations for a “Good AI Society”. It presents a synthesis of the five ethical principles that should undergird AI’s development and adoption and offers twenty concrete recommendations, which may be undertaken directly by national or supranational policy makers or by other stakeholders. The article states the core opportunities for promoting human dignity and human flourishing offered by AI and presents a brief, high-level view of the advantages for organisations of taking an ethical approach to the development and use of AI. These advantages, however, can only materialise in an environment of public trust and clear responsibilities. Responsibility is essential for preserving human self-determination: all stakeholders co-design and co-own the solutions and cooperate to bring them about.
In the article “Establishing the Rules for Building Trustworthy AI” Floridi argues that the European Commission’s report, “Ethics guidelines for trustworthy AI”, provides a clear benchmark for evaluating the responsible development of AI systems. What processes and decisions are going to be delegated to AI systems, what effects the trade-offs between human and artificial agency are going to have, and what forms of assessment, control, revision and redress must be put in place, are crucial questions. An independent High-Level Expert Group created by the European Commission, consisting of 52 experts, published guidelines for the responsible development of AI in 2019, underlining that compliance with the law is necessary but not sufficient: a post-compliance ‘soft ethics’ approach is also needed.
The article “The Chinese Approach to Artificial Intelligence: An Analysis of Policy, Ethics, and Regulation” focuses on the socio-political background and policy debates that are shaping China’s AI strategy and analyses the main strategic areas in which China is investing in AI, along with the concurrent ethical debates. The programme “Healthy China 2030”, issued in July 2017, explicitly stresses the importance of technology in achieving China’s healthcare reform strategy and emphasises a switch from treatment to prevention, with AI development as a means to achieve this goal. President Xi has been promoting “digital environmental protection” with AI systems, forwarding the idea of a “minimum moral standard” within society. The administration of justice is another area in which the Chinese government has been advancing through AI.
The article “Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical” presents an analysis of five unethical risks in translating ethical principles for digital technologies into ethical practices: ethics shopping, ethics bluewashing, ethics lobbying, ethics dumping and ethics shirking. The recent public reaction has generated a flourishing of initiatives to establish what principles, guidelines, codes, or frameworks can ethically guide digital innovation, particularly in AI, to benefit humanity and the whole environment: this is a positive development that shows awareness of the importance of the topic and interest in tackling it systematically.
“How AI Can Be a Force for Good – An Ethical Framework to Harness the Potential of AI While Keeping Humans in Control”. The article indicates how to harness the potential for good of AI while mitigating its ethical challenges. The analysis focuses first on uses of AI that may lead to undue discrimination, lack of explainability, the responsibility gap and the nudging potential of AI and its negative impact on human self-determination, due to the invisibility and influencing power of AI. Robust procedures for human oversight are needed to minimize unintended consequences and redress any unfair impacts of AI. Trust and transparency are crucial when embedding AI solutions in homes, schools, or hospitals, whereas equality, fairness, and the protection of creativity and rights of employees are essential in the integration of AI in the workplace.
The article “The Ethics of Algorithms: Key Problems and Solutions” identifies the ethical problems that algorithms give rise to and the solutions that have been proposed in recent relevant literature. A conceptual map and a meta-analysis of the current debate on the ethics of algorithms, linked with the types of ethical concerns previously identified, are proposed. The study analyses the inconclusive evidence leading to unjustified actions or to opacity, the misguided evidence leading to unwanted bias, the unfair outcomes leading to discrimination, the transformative effects leading to challenges for autonomy and informational privacy and the traceability leading to moral responsibility.
“How to Design AI for Social Good: Seven Essential Factors”. The article addresses the gap existing between limited understanding of what makes AI socially good in theory and how to reproduce its initial successes in terms of policies, and identifies the necessary ethical factors. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good. Factors such as falsifiability and incremental deployment, safeguards against the manipulation of predictors, receiver-contextualised intervention, receiver-contextualised explanation and transparent purposes, privacy protection and data subject consent, situational fairness, human-friendly semanticisation are analysed.
“From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices” aims to contribute to closing the gap between principles and practices, with a focus on Machine Learning (ML). The article outlines the research method and the initial findings, and provides a summary of future research needs. The aim of the research project is to identify the methods and tools already available to help developers, engineers, and designers of ML reflect on and apply ethics, offering a typology of ethical AI tools and the methodology for its creation. In this way, they may know not only what to do or not to do, but also how to do it, or how to avoid doing it.
In the eleventh article, “The Explanation Game: A Formal Framework for Interpretable Machine Learning”, a formal framework for interpretable ML is proposed. ML has been used in some countries to help evaluate loan applications and student admissions, predict criminal recidivism, and identify military targets, to name just a few controversial examples. Combining elements from statistical learning, causal interventionism, and decision theory, the authors devise an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. The model clarifies the trade-offs inherent in any ML solution and characterises the conditions under which agents are almost surely guaranteed to converge on an optimal set of explanations. The game serves both a descriptive and a normative function, establishing a conceptual space in which to analyse and compare existing proposals, as well as to design new and improved solutions.
In “Artificial Agents and Their Moral Nature,” artificial agents are investigated as moral agents, entities that can be involved in moral situations, even if they do not necessarily exhibit free will, mental states or responsibility. The author, Floridi, focuses on “mindless morality” in order to tackle some vital issues in contexts where artificial agents are increasingly part of the everyday environment. According to him, this new concept of moral agent can be used to argue that artificial agents can be fully accountable sources of moral action. Writing of “mindless morality”, he decouples responsibility and accountability, terms that must be clarified. Floridi notes that thresholds of morality are in general only partially quantifiable and usually determined by various forms of consensus. From the author’s point of view, artificial agents are morally accountable as sources of good and evil, at the “cost” of expanding the definition of a morally-charged agent.
“Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions”. One unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies towards criminal acts. These unethical uses of AI are still relatively new and inherently interdisciplinary, with little certainty about the future. The implications are particularly difficult to investigate because the complexity of AI provides a great incentive for human agents to avoid finding out what a system is precisely doing. Floridi proposes that, to ensure the effectiveness of the criminal law, the burden of liability must be shifted onto the humans, such as engineers, users and vendors, who have acted decisively. Whether an agent is indeed a moral agent hinges on whether it can undertake actions that are morally qualifiable.
“Regulate Artificial Intelligence to Avert Cyber Arms Race”. Cyber attacks continue to escalate in frequency, impact, and level of refinement. The same holds for the efforts of state actors to acquire new offensive capabilities to defend against, counter or retaliate for incoming attacks. AI has become a key technology both for attacking and for defending in cyberspace, but there is still a problematic vacuum in current regulations. The article argues for the need to define regulations for state use of AI for defence purposes and calls on regional forums, such as NATO and the European Union, to revive efforts and prepare the ground for an initiative led by the United Nations.
“Trusting Artificial Intelligence in Cybersecurity Is a Double-Edged Sword”. AI applications for cybersecurity tasks are increasing in the private and public sectors; estimates indicate that their market will grow from US$1 billion in 2016 to US$34.8 billion by 2025. Around the world, initiatives are emerging to define new standards and certification procedures to elicit users’ trust in AI, but these can also facilitate new forms of attack against AI applications themselves. To reduce security risks, it is necessary to ensure the deployment of “reliable AI” for cybersecurity: AI must guarantee robustness, response, and resilience. Only in this way is it possible to retain control against data poisoning, tampering with categorization models, and backdoors.
“Prayer-Bots and Religious Worship on Twitter: A Call for a Wider Research Agenda”. The automation of online social life is an urgent matter for researchers and for the public; one of the most significant uses of such technologies is for religious purposes. Islamic Prayer Apps, for example, automatically post prayers from their users’ accounts: one such service is already responsible for millions of tweets daily. The article provides the first large-scale analysis of the religious use of online automation technologies, focusing on a particularly widespread phenomenon, the Islamic Prayer Apps. The spread and social significance of these applications call for a broadening of the scope of current research on online automation and on social media bots.
“Artificial Intelligence, Deepfakes and a Future of Ectypes”. The chapter introduces the concept of the “digital ectype”; an ectype is a copy of an archetype, one that has a special relation with its source. Floridi argues that digital technologies are able to separate the archetypal source from the process that leads to the artefact, so that one can have digital ectypes that are “authentic” in style and content. They are not the “original”, not “authentic” in terms of production, performance, or method (they are not the ones used by the source to deliver the artefact). Thus, digital ectypes can be authentic but unoriginal artefacts, or inauthentic but original artefacts. Digital technologies seem to undermine our confidence in the original, genuine, authentic nature of what we see and hear; but what the digital breaks it can also repair, so appropriate digital strategies must be developed.
The article “The Ethics of AI in Health Care: A Mapping Review” summarises current debates in healthcare and identifies open questions for future research. The ethical issues arise at different levels of abstraction (individual, interpersonal, group, institutional, and societal or sectoral), and it is possible to enunciate some considerations for policymakers and regulators, categorising each as epistemic, normative or traceability-related and placing it at the relevant level of abstraction. If action is not swiftly taken in this regard, a new ‘AI winter’ could occur, owing to the chilling effects of a loss of public trust in the benefits of AI for health care. AI solutions embedded in clinical practice need a clear governance framework to protect people from harm, including harm arising from unethical conduct.
“Autonomous Vehicles: From Whether and When to Where and How”. Understanding attitudes towards the benefits and shortcomings of autonomous vehicles means being able to address societal welfare and individual wellbeing more successfully. In the near future we may enjoy trips rather than journeys, with more freedom to choose and to move: more people will be able to do so, including those who cannot drive today. Digital technologies are changing the very essence of mobility; completely autonomous vehicles will be possible. We only need to develop the right kind of technology.
“Innovating with Confidence: Embedding AI Governance and Fairness in a Financial Services Risk Management Framework”. An increasing number of financial services companies are adopting solutions driven by AI to gain operational efficiencies, derive strategic insights, and improve customer engagement. But this is generating apprehension about the complexity and self-learning capability of these systems. Using the risk of unfairness as an example, the paper introduces an overarching governance strategy and control framework to address the practical challenges of mitigating the risks that AI introduces. All this is necessary in order to innovate with confidence.
“Robots, Jobs, Taxes, and Responsibilities”. On 16 February 2017, the plenary session of the European Parliament voted in favour of a resolution to create a new ethical-legal framework according to which robots may qualify as “electronic persons”. We are headed towards a future where humans and machines will work together (not against one another), but there are, of course, risks and challenges to be faced, such as ensuring that individual and social benefits are maximised and agreeing on an ethical and empathic framework for AI design. For example, more education and a universal basic income may be needed to mitigate the impact of robotics on the labour market. We will also need to consider the allocation of responsibilities in cases of damage involving AI systems. The possibility of a status of “responsible electronic persons” may “de-responsibilise” those who should control them. These and other matters are investigated.
“What the Near Future of Artificial Intelligence Could Be”. In this chapter, Floridi evaluates the possible developments of AI in the near future and identifies the likely trends of a shift from historical to synthetic data and of a translation of difficult tasks (in terms of abilities) into complex ones (in terms of computation). The future of successful AI may lie in the capability of transforming the environment within which AI operates into an AI-friendly environment: this is possible by digitalizing the real world, with the goal of making its problems solvable. This is the strategy for AI’s future. The foreseeable future of AI will depend on our ability to negotiate the resulting (and serious) ethical, legal, and social issues.
The third book I consider, “Trustworthy AI. A business guide for navigating trust and ethics in AI,” is described as ground-breaking for its pragmatic and direct approach to ethics and trust in AI. The author skilfully combines an exploration of the dimensions of trust with best practices to help leaders develop and use AI ethically, delivering a practical guidebook on AI ethics for executives, technologists, ethicists, and users.
The book is a useful guide for structuring an AI ethics strategy within an organization. It addresses how to make AI ethics operational and how to drive ethical AI planning across business units and departments. The topic is relevant for enterprises that are adopting and deploying AI systems in large numbers, for the value, promise, and perils of AI are often considerable.
This book cuts through philosophical and theoretical discussions to provide clarity on “Trustworthy AI”, its developments, and how to implement ethical principles within modern applications. The author advances a way to make AI ethics operational and AI solutions fair, transparent, secure, reliable, safe, compliant, accountable, explainable, and responsible. The work is an exploration of the pitfalls and priorities in ethical AI, enriched by the latest research, best practices, and real-world examples.
The content is introduced by an Index, a Foreword, a Preface, and Acknowledgments, in which the author thanks many colleagues for decades of professional experience and for their desire to create and use AI for the global benefit and a trustworthy future.
Trustworthy AI, structured as a practical handbook, has an introduction, 12 chapters and, in the final pages, the notes and an index of names and themes. Each chapter opens with a significant quote from a founder of AI or a scientist (such as Alan Turing or John McCarthy) and ends with a witty, pointed cartoon. The topics are in-depth investigations of the key characteristics of trustworthy AI: fair and impartial, robust and reliable, transparent, explainable, secure, safe, privacy-compliant, accountable, responsible. The author prefers to introduce each topic with a real case illustrating the concept before giving a definition or theoretical explanation. A box in each chapter gathers the most important questions for making the research, development, and use of AI systems practical.
Initially, Ammanath presents the importance of trust in every human system and points out the absence of a corpus of literature and scholarship that defines ethics and trust in AI at a granular level. Everyone in a company that produces AI systems is a responsible and active participant in shaping the AI era; the effort and attention required can seem daunting, but a growing consensus in the AI community is pushing to meet this challenge now, at the beginning of modern AI. We have a crucial opportunity to seize the moment and work for a sociotechnical system built on trustworthy AI. Throughout the chapters, Ammanath tries to show how people, processes, and technologies can be harmonized to yield cognitive tools we can really trust.
In her first chapter, Ammanath provides a basic presentation of AI, affirming that AI systems do not think but provide a description of something in the real world: AI is not a real intellect, nor a single reality, but many things; it is an umbrella term for a variety of models, use cases, and supporting technologies. Ammanath’s efforts are aimed at stimulating serious work on how to make this technology “trustworthy.”
In the second chapter, the topics of fair and impartial AI are developed. There are many examples of how unfair, biased systems have led to harm and public backlash: many organizations deploying AI have suffered reputational damage, legal consequences, and loss of consumer trust, while government regulations and penalties for violations are increasing. The new discipline of algorithmic fairness explores how to remove bias and promote equity in the use of data, analytics, and AI. The chapter presents the possible origins and varieties of bias, including the possibility that unethical scientists might manipulate, obscure, or ignore information to make their theories appear correct or to yield a model that meets a desired output.
The third chapter concerns robust and reliable AI: a system needs the ability to maintain its level of performance under any circumstances, yet all AI models are specific and brittle, bound to their function and their application. AI reliability depends on the applicability, completeness, and accuracy of the systems, features that are often not static; the data science team must monitor and mitigate these potential problems. Leading practices include performing data audits, monitoring reliability over time, producing uncertainty estimates, managing drift, enabling continuous learning, testing on an ongoing basis, and exploring alternative approaches.
The fourth chapter is about transparency, which is at the core of many challenges in achieving trustworthy AI; it is a cross-cutting feature that impacts all other aspects of ethical AI and permits accountability, motivates explainability, reveals bias, and encourages fairness. Internal transparency encourages collaboration between teams, sparking innovation while identifying inefficiencies and problems; it promotes equal treatment among employees and, in turn, can attract more and better talent. Transparency makes informed choices possible for every professional figure. The chapter also investigates the limits of transparency, such as those related to intellectual property or to systems that defend against crime.
The fifth chapter concerns explainability, defined as “a cousin to interpretability” (p. 77); it refers to the possibility of understanding how and why an output is reached, which grants greater internal visibility. The concepts of interpretability, the “right to an explanation”, intellectual property, privacy, and security are investigated. It is asserted that it is up to the organization and its data scientists to determine the impact on end users and the level of explainability needed.
The sixth and seventh chapters are about security and safety. Different kinds of attackers attempt to penetrate, compromise, or corrupt a system, so it is essential to learn how to remedy vulnerabilities against three types of corruption: causing the system to take an incorrect action or make an incorrect decision, causing it to reveal data, insights, or conclusions that it should not, and causing it to learn incorrectly. The author examines the different types of attack on the security of an AI system and their consequences, and also investigates the meaning of the term security, starting from philosophical notions. There are different tests to evaluate the alignment of AI objectives with human values, but, above all, safety is a suite of ongoing activities and processes that touch every part of the AI lifecycle and all of its stakeholders. Presenting the leading practices of technical safety, the author explicates the terms specification, robustness, and assurance.
The eighth chapter concerns privacy. Strategies of (pseudo-)anonymization can be insufficient, in particular once the data have been used as training sets for an AI system. The author examines privacy laws and regulations but also states that the challenge for businesses going forward is to comply with existing laws while monitoring the development of new legislation and regulation. Assuring high standards of privacy is an ethical obligation but also a guarantee of business interests: if consumers do not trust how a company treats their data, they do not trust the company. Trustworthy AI is fundamental for consumer confidence in the enterprise.
The ninth chapter is about accountability: people and organizations are accountable for their actions; this is a component of social trust between citizens and a necessary component of professional activities in business and government. Accountability is a uniquely human ethical priority: artificial systems cannot think freely like humans or make an apology, cannot explain their decisions or understand that they are accountable for them. “This is the necessary basis for human culpability in AI decisions” (p. 149). But at present there is only “a fuzzy consensus of who is accountable for what and to whom: enterprises deploying AI are left to define what accountability means in the context of their policies and stakeholders” (p. 150). Legal responsibility is a significant component of the social and technical systems around AI.
The tenth chapter deals with responsibility. The range of business models, laws and regulations, customs, and expectations in the world is enormous, so it is impossible to create a granular list of what a responsible AI system requires: each enterprise should decide for itself which systems to deploy and use. Data scientists have to motivate responsible AI use, describing as fully as possible the expected results against the objectives and business strategies. The guidelines for these questions are still being developed, but the public is becoming more aware of AI, and business policies and industry standards are evolving into a competitive advantage in the marketplace.
The eleventh chapter gives some more practical recommendations, with two main steps: identifying the relevant dimensions of trust and cultivating trust through people, processes, and technologies. Every organization has the duty to orient the entire AI lifecycle towards operational scenarios along the relevant dimensions of trust. All the parts of an organization have to be involved in training and education for the good development of AI. Our societies are producing a great number of guidelines to assure powerful, valuable, trustworthy AI.
In the last chapter, the author underlines that the real challenge is that there is not one way to engender trust in AI but many, and that the proposed frameworks offer a roadmap through all the problems and opportunities. The true value of such a work will emerge in the years to come: the businesses that work toward building and using trusted, ethical tools will be prepared to meet the great expectations.
In “Trustworthy AI,” Beena Ammanath offers a practical approach for enterprise leaders to manage business risks in a world where AI is everywhere, by understanding the qualities of trustworthy AI and the essential considerations for its ethical use within the organization and in the marketplace. She offers a close look at the potential pitfalls, challenges, and stakeholder concerns that impact trust in AI development. The book, as the author herself asserts, is intended for readers with an intermediate knowledge of AI practice rather than for the layman.
The publications considered here inform all interested readers about this new ground and are an essential resource for organizations using AI to bring it to its greatest potential. We need to go beyond concern, taking advantage of an enormous opportunity and shaping the future.
There are several thorny issues and unanswered questions to be addressed in our pursuit of AI value, some purely technical, many dealing with uniquely human values, expectations, and desires. At their root lies trust. There is not one way to engender trust in AI, and there are no ironclad rules. But there are valuable guidelines for adopting a roadmap to interrogate AI projects, identify the relevant qualities of trust, and amend and improve the entire AI lifecycle. All this serves good AI governance.
Floridi dares to write about a new ontology for a reality pervaded by AI, but it is probably more reasonable to talk about a new epistemology: a profound discovery of reality, an increase in human power and control over it. Floridi’s great merit is his continuing search for an ethical understanding of the pervasive phenomenon of AI; he goes so far as to consider the possibility of a “mindless morality”, a morality of all agents, not only human ones. Floridi knows very well the risk of losing the centrality of man in this purely human field, which would be unacceptable; in fact, he continues his search for a full governance of the new technologies and for an AI consciously guided by man, at his full service.
Reprinted from Reviews in Science, Religion and Theology, 1(2) June 2022, pp. 6-21.