Introduction to the Readings
ChatGPT was launched on November 30, 2022. Within five days, the artificial intelligence (AI) tool had one million users [Forbes], and within two months that number had grown to 100 million, making it the fastest-growing consumer software application in history [The Guardian]. Since then, impressive new applications of AI have multiplied. Recent releases include OpenAI’s Sora – which generates videos from text prompts – and Google’s NotebookLM – which, among other things, generates podcast episodes from PDFs. Many of these tools were hard to imagine just five years ago.
Thankfully, the breakneck pace of AI development has not been without attempts to develop an ethical response. In March of 2023, some of the creators of The Social Dilemma published a video called The A.I. Dilemma about the need for ethical reflection to keep pace with AI development. In the same month, Elon Musk signed an open letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4” [Future of Life Institute]. In April of 2023, complaints from the Italian government led to a temporary shutdown of ChatGPT in that country [BBC].
This special issue features articles and resolutions that examine and attempt to mitigate ethical issues arising from developments in AI. The deepest such issues arise precisely from the nature of AI as an imitation of human intelligence. As such, they offer excellent motivation to revisit key elements of human anthropology.
The recent publication of the note on the relationship between artificial intelligence and human intelligence, “Antiqua et nova” – by the Vatican Dicasteries for the Doctrine of the Faith and for Culture and Education – further emphasizes the need for attentive reflection on both the potential and the implications that mounting developments in AI may have for humanity’s future.
Perspectives from Technology, Business, and Government
Asilomar AI Principles
Many perspectives on AI ethics from before the release of ChatGPT have become obsolete. Not so, however, for the 2017 Asilomar AI Principles – the first document in this Special Issue. The relevance of the Principles is perhaps unsurprising, given that they were signed by technology giants such as Elon Musk and Sam Altman, as well as theorists such as Yann LeCun, a pioneer of deep learning [Forbes].
Importantly, the Asilomar AI Principles begin by establishing AI research as a goal-directed pursuit: “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.” As such, funding for AI research should be accompanied by investments in AI ethics. This requires – according to the document – efforts to address “thorny questions” such as how to “grow our prosperity through automation while maintaining people’s resources and purpose.” The publication then lists thirteen principles of AI ethics and values, and it raises concerns about several long-term issues. The Principles caution that “advanced AI could represent a profound change in the history of life on Earth” and warn that “there being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.”
“Managing the Risks of Generative AI,” Harvard Business Review
The kind of AI that the Asilomar Principles most likely have in mind is generative AI. This focus is made explicit in the second resource in this Special Issue: a Harvard Business Review article by Kathy Baxter and Yoav Schlesinger.
Generative AI produces content such as text, audio, and video by reproducing patterns found in training data. Baxter and Schlesinger set out guidelines for organizations that intend to integrate such generative AI tools. These include concerns about training data, such as using only data that is up-to-date and sourced directly from customers. The guidelines also argue for the inclusion of humans in important decision-making processes, the need for human testing of AI tools, and the importance of collecting feedback from stakeholders on the use of AI.
U.N. Resolution on AI and UNESCO Recommendation on the Ethics of Artificial Intelligence
Developments in AI, of course, have also prompted governmental responses. One example at the international level is the United Nations resolution on “safe, secure and trustworthy AI”. According to the resolution, these types of systems are “human-centric, reliable, explainable, ethical, inclusive, in full respect, promotion and protection of human rights and international law, privacy preserving, sustainable development oriented, and responsible.” The resolution urges member states to develop regulatory and governance approaches, identification and testing standards, public feedback mechanisms, awareness programs, and other initiatives that make AI safe, secure, and trustworthy.
Further, the U.N. has an interest in promoting AI that is consistent with international law, with the Universal Declaration of Human Rights, and with the 2030 Agenda for Sustainable Development. Towards this end, the resolution urges member states to set standards such that AI will promote, rather than hinder, sustainable economic, social, and environmental development. On the positive side, data sharing, technology transfer, and infrastructure development can be used to confront the “digital divide” between advanced and developing countries. On the negative side, states should cease the use of AI that poses “undue risks to the enjoyment of human rights, especially of those who are in vulnerable situations.”
The UNESCO Recommendation on the Ethics of AI has a similar focus on international law, human rights, and inclusive development, and considers that AI has the potential to both encourage and threaten these values. The document underlines the importance of responsibility. “Member States should ensure that it is always possible to attribute ethical and legal responsibility for any stage of the life cycle of AI systems, as well as in cases of remedy related to AI systems, to physical persons or to existing legal entities.” This goal is complicated by the lack of transparency and explainability of many contemporary AI systems. These values should be taken into account alongside privacy, safety, and security.
Take-Aways from these Documents
The three above-mentioned documents represent a needed effort by key AI stakeholders – technologists, businesses, and governments – to confront ethical issues. At the same time, several of the issues raised apply to many technologies, not just to AI. For instance, the documents argue that AI should be consistent with international law, uphold recognized human rights, and foster sustainable and inclusive development. It should also be safe, sustainable, and oriented towards shared prosperity. Arguably, these principles are just as relevant to advances such as the Internet or social networks.
Other issues raised by the documents are AI-specific. We can break these into three areas: the ethical orientation of AI, values or principles to orient AI development, and mechanisms to implement these principles. They include the following items. (A denotes an idea noted in the Asilomar Principles; H denotes an idea noted in the Harvard Business Review article; and U1 and U2 denote ideas noted in the U.N. resolution and the UNESCO Recommendation, respectively.)
Ethical orientation of AI development:
It is important to recognize that:
- AI could change the history of life on Earth [A];
- The goal of AI development is not undirected, but beneficial intelligence [A].
Desired principles for AI systems:
AI systems should be:
- Transparent [A] and explainable [A, U1], and allow attribution and responsibility [U1, U2];
- Human-centric [U1], privacy-preserving [A, U1], and supportive of personal liberty [A].
Mechanisms to direct AI development towards these principles:
Technology companies, businesses, and governments should:
- Promote awareness of [U1], and invest in [A], AI ethics, and be willing to address “thorny” issues [A]; they should set domestic and international standards [U1], and develop governance and regulatory approaches [U1];
- Implement human testing [H], include humans in decision-making processes [H], and collect feedback from stakeholders [H, U1].
Why are these ideas AI-specific? Consider the first set of principles. The documents argue that AI systems should be transparent and explainable and should make attributable and responsible decisions. Today’s paradigmatic AI systems use sets of training data to learn relationships between inputs (such as a text description of an image) and outputs (such as the image). Inputs are fed into one side of an artificial neural network, and outputs are collected from the other side. Between the two ends, the relationship between inputs and outputs is captured by a network of billions of connections – a sort of complex mental map – between the so-called hidden layers of the neural network. These connections instantiate the logical relationships between inputs and outputs. Unfortunately, the number (billions) and abstract nature (numbers) of these connections make it almost impossible to understand why an AI system produces any given output. In other words, current AI systems often lack transparency and explainability.
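To make this concrete, the following toy sketch in Python (not drawn from any of the documents in this issue; the network shape and values are purely hypothetical) shows that a neural network’s “knowledge” is nothing more than arrays of numbers.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: 4 inputs -> 8 hidden units -> 2 outputs.
# Real systems contain billions of such weights rather than a few dozen.
W1 = rng.normal(size=(4, 8))   # connections from the inputs to the hidden layer
W2 = rng.normal(size=(8, 2))   # connections from the hidden layer to the outputs

def forward(x):
    """Pass an input through the network and return its output."""
    hidden = np.maximum(0, x @ W1)   # simple (ReLU) activation in the hidden layer
    return hidden @ W2

x = np.array([1.0, 0.5, -0.3, 2.0])   # an arbitrary input
print(forward(x))   # the network's output for this input
print(W1)           # the "reason" for that output: just a grid of numbers

Even in this tiny example, nothing in the printed weight matrices explains, in human terms, why the output came out as it did; scaling up to billions of weights only deepens the opacity.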
Consider the second set of principles: AI systems should be human-centric, privacy-preserving, and supportive of personal liberty. In short, systems should assist – not replace – human decision-making. Of course, the use of almost any technology poses a certain risk of the atrophy of human capabilities; frequent use of a calculator could lead someone to forget how to do basic multiplication. The human capabilities at risk due to AI development, however, are more important. AI systems that recommend certain products or deliver certain types of information tend to influence human choices. Abuse of these technologies could lead to atrophy of our ability to make uninhibited and value-oriented decisions.
Human-Centric AI
Reordering and reinterpreting these principles to some extent, we can say that AI systems should be human-centric, where “human-centric” is understood as understandable to human reason and enabling of human freedom. Systems that are transparent and explainable can be used rationally. Systems that preserve private spaces for personal decisions and incorporate human feedback can be used with liberty. In other words, human-centric AI bears the imprint of its creators rather than wresting their essential capabilities away from them.
Unfortunately, this consideration becomes useless if we fail to recognize these essential human capabilities. If the human mind is a mere calculator, there is no reason to fight against its replacement by artificial minds. Along these lines, the last article in this Special Issue argues that the ultimate ethical dilemma is anthropological: is man no more than a neural network?
AI Thinks. But Do We?
“It’s not artificial intelligence that has learned to think like us,” argues Professor Riccardo Manzotti. “Rather, we have stopped thinking like people…” Manzotti is a professor of theoretical philosophy at IULM University (Milan) and an Executive Editor of the Journal of Artificial Intelligence and Consciousness.
Manzotti agrees that “an impressive number of articles have been written about the possible benefits and risks of these technologies… intellectual property rights, the implications for education, etc.” But those issues are not AI-specific. The question uniquely posed by AI is: “Are we about to be surpassed by artificial intelligence in exactly that which we thought to be our most essential capacity: that is, to think?” Or, to place the question in the context of the previous articles: will the systems that we develop wrest from us our most human capabilities?
Unfortunately, Manzotti suggests, reductionist philosophies of mind have undercut our ability to appreciate the difference between human and artificial intelligence. The problem lies not in attributing too much to artificial intelligence, but in attributing too little to human intelligence. “We have reduced thought to calculations, operations, recombination… manifestation of symbols.” We see this line of thought “from the Turing Machine to Wittgenstein’s linguistic games, from the linguistic turn to contemporary artificial intelligence.” And if everything is words, symbols, and numbers, we are no match for AI. AI can manipulate information better than we can.
What can humans do that AI cannot? Manzotti’s answer is that we apprehend reality. Human thought is not an arbitrary construction or representation, but a “manifestation of reality.” In other words, “thought is not a flux of concepts nor a sequence of operations, but the point in which reality manifests itself. Thought acquires significance if it is illuminated by reality. Thought cannot be reduced to an algorithm, but is not, for this reason, less true. The meaning of our words does not depend on the correctness of our grammar, but on the reality that they express through language.” The source of the intelligibility of human thoughts and the goodness of human desires is in reality itself.
Certainly, AI has reminded us that language plays a powerful role in thought. The large language models behind ChatGPT have shown us that complex manipulation of language can generate human-like outputs. But language derives from thought, not the other way around. Without thought and its link to reality, language is meaningless. As Manzotti writes, “if AI were to write Hamlet, word for word, it would be nothing more than a combination of symbols. Dust, and not a statue.”
We invite readers to consider this and other dimensions of AI ethics through the current Special Issue. In my own opinion, Manzotti’s perspective offers a philosophical foundation for AI principles such as transparency, explainability, responsibility, privacy, and liberty. We need to think about how to use artificial intelligence, but we also need to understand the uniqueness of human intelligence.
Jeffrey Pawlick