Does Artificial Intelligence Change Our Understanding of the Imago Dei?
1. Introduction and Hypothesis
Imago Dei, or “Image of God”, the idea that human beings are made in the image and likeness of God, is a key concept in Jewish and Christian theologies. It has been used to justify the dignity of all human beings and to ground human rights. Being made in the image of God separates humanity from the rest of creation and makes us unique among created beings. Original sin arguably tarnishes this gift, but only in part.
However, it is unclear in what sense we should understand this uniqueness or similarity between human beings and God. Since medieval times, rationality – or related capacities such as free will or consciousness – has been identified as the cornerstone of human uniqueness, and hence as the defining mark in the substantive interpretation, which applies the label “Image of God” to some quality of the human condition.
Yet, many capacities that were traditionally considered as a manifestation of our rationality and only achievable by humans (such as identifying the faces of other human beings, summarizing texts, or playing chess) have now been programmed into machines. What can Artificial Intelligence (AI) teach us about human uniqueness, and what are its implications for theology?
I propose the possibility of two very different interpretations. On the one hand, we could assert that the tasks that have been mastered by the machine are mechanical in nature and lack a key factor: subjective experience, which materializes in understanding, emotion, or intentionality. If this is the case, then human uniqueness would not be so dependent on rationality but on subjective experience and related features. This opens the way for deeper discussions regarding the dignity of animals, given that they do not share our rational capacities to the same degree. However, despite this difference, it could be argued that we have in common the most crucial factor, namely the capacity for subjective experiencing.
Another option would be to accept that AI will one day share some of our main capabilities, perhaps even the key ones. This is the position of some techno-optimists. If we take this position seriously, we should ponder whether machines could possess the image of God indirectly, by being our creation and being made in our image. This would lead us to accept a position of “created co-creator”, a term coined by Philip Hefner (Hefner 2019), but with the implication that human beings would lose their uniqueness, as the machines would in turn participate in the process of creation as well.
Thus, the development of AI raises questions concerning traditional Christian anthropology, according to which humans have been created in the image of God and retain such imprint despite sin; but at the same time, AI offers an opportunity to revise that principle, enabling a more accurate and updated version able to integrate aspects formerly neglected.
2. Fields of Study
2.1 Imago Dei
Imago Dei is the concept, held in Judaism, Christianity, and partly in Islam, that human beings are created in the image and likeness of God. Although this idea appears in many other passages of the sacred texts, the book of Genesis remains the main source.
And God said: ‘Let us make man in our image, after our likeness; and let them have dominion over the fish of the sea, and over the fowl of the air, and over the cattle, and over all the earth, and over every creeping thing that creepeth upon the earth.’ And God created man in His image, in the image of God He created him, male and female created He them. And God blessed them; and God said to them: ‘Be fruitful, and multiply, and fill the earth, and subdue it; and have dominion over the fish of the sea, and over the fowl of the air, and over every living thing that creepeth upon the earth. (Gen 1:26–28).
Imago Dei justifies the absolute dignity of human beings, meaning that they should be viewed as ends and never as means. However, the exact meaning of being created in God’s likeness is unclear, and many interpretations have been proposed. The common theme among them is the recognition that human beings possess some special quality that allows God to be made manifest in humans. Human beings can actively participate in creation and establish a personal relationship with God by virtue of their spiritual capacities, which for most are intimately linked to our intellectual abilities.
Augustine located the image of God in the human mind (McGrath 2012). He proposed a trinitarian definition of the image of God based on memory, intellect and will (the triune nature of God is thus reflected in a triune nature of human beings) (Augustinus 1977). Thomas Aquinas locates the image of God in intellectual nature, or reason, and insists that only those who love God perfectly possess this image (De Aquino and Caramello 1962). This idea is also present in Calvin and Luther, who state that the Imago Dei was lost after the Fall: “Man lost the image of God when he fell into sin” (Luther’s Large Catechism, art. 114).
In more recent times, free will and the relationship with God have been proposed as the main aspects of Imago Dei by theologians such as Emil Brunner and Karl Barth, and philosopher-theologian Paul Ricoeur. Brunner said that “freedom is what differentiates humanity from the lower creation” (Brunner 2014). Ricoeur identified Imago Dei with “the very personal and solitary power to think and to choose; it is interiority” (Ricoeur and Gingras 1961). Barth and Brunner also insisted that what makes us godlike is the ability to form relationships. This has become known as the relational interpretation of Imago Dei, and it has gathered strong support in contemporary theology.
Another view, defended by J. Richard Middleton, holds that the most consistent interpretation of the Imago Dei in the Book of Genesis, taking into account the context in which it was written, is that “the Imago Dei designates the royal office or calling of human beings as God’s representatives or agents in the world” (Middleton 1994, 12). In the same way that ancient kings relied on God as a justification of their power, so humanity justifies its power and dominion over creation through its role as God’s representative on earth. This is known as the functional interpretation of the Imago Dei. This view has profound implications for the role of humankind as carers of creation. However, the functional view has been criticised from the perspective of disability theology, given that it seems to imply that disabled people do not fully participate in the image of God by virtue of not fulfilling the stipulated function (Deland 1999; Eiesland 1994).
Transhumanism, the intellectual movement that proposes that human beings can and should exceed their limitations by means of technology, has further complicated our understanding of the Imago Dei, in that it encourages the modification of human nature (human enhancement) by pharmacological means, genetic manipulation, nanotechnology, or integration with machines. Technology can thus enable humans to modify their own nature, and this causes concern for some, as it could entail altering the Imago Dei.
On the contrary, for others, transhumanism and notions such as the cyborg (a machine-human hybrid) present an opportunity to update our understanding of human nature. For Donna Haraway, cyborgs are “creatures simultaneously animal and machine, who populate worlds ambiguously natural and crafted” (2006, 149). However, she continues to explain that, given our current integration with technology, we are already “all chimeras” (Haraway 2006, 150). This resonates with the dynamic theological anthropology proposed by Hefner, with the central concept of the created co-creator. Humans are created by God to be co-creators in the creation that God has purposefully brought into being (Hefner 2019).
2.2 Artificial Intelligence
AI is a wide-ranging branch of computer science concerned with building machines capable of performing tasks that were typically attainable only by humans. AI has had remarkable success in several of these fields. Some of them are discussed below.
Expert Systems emulate human reasoning. In many instances, they integrate a knowledge base (a set of claims that are known to be true) with an inference engine that applies the rules of logical inference to find any other claims that are also true given the premises. Expert Systems can also work based on examples or accommodate facts that are not crisply true or false.
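The loop at the heart of such an inference engine can be sketched in a few lines. The facts and rules below are invented purely for illustration; a real expert system would hold thousands of them:

```python
# A toy forward-chaining inference engine: a knowledge base of facts is
# combined with if-then rules until no new claim can be derived.
# All facts and rules here are invented for illustration.

facts = {"has_feathers", "lays_eggs"}

# Each rule is a (premises, conclusion) pair.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "can_fly"}, "migrates"),
]

changed = True
while changed:  # fixed-point iteration: stop when nothing new is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # 'is_bird' is derived; 'migrates' is not ('can_fly' is unknown)
```

Note that the engine derives “is_bird” from the premises but not “migrates”, since “can_fly” is absent from the knowledge base: the inference is purely mechanical application of the rules.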
Machine Learning (ML), probably the most relevant concept in AI, consists of computer algorithms that improve themselves through experience. Two main types of ML exist. Unsupervised Learning consists in finding patterns in the input data. For instance, grouping animals based on their characteristics, without predefined categories, would be an unsupervised learning problem. Some transhumanist authors, such as Kurzweil, have argued that the human mind is essentially a pattern-recognition system (Kurzweil 2012), and that since machines are able to solve pattern-recognition problems there are good grounds for anticipating a hybrid between the two.
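As a minimal illustration of the unsupervised case, the sketch below clusters one-dimensional data (invented “animal weights”) into two groups with k-means, using only the standard library. No labels are provided; the grouping is found by the algorithm itself:

```python
# A toy unsupervised-learning example: 1-D k-means clustering with k=2.
# The "animal weights" are invented data for illustration (e.g. cats vs. lions, kg).

weights = [3, 4, 5, 300, 320, 310]

# Initialise the two cluster centroids at the extremes of the data.
c1, c2 = min(weights), max(weights)

for _ in range(10):  # alternate assignment and centroid-update steps
    g1 = [w for w in weights if abs(w - c1) <= abs(w - c2)]
    g2 = [w for w in weights if abs(w - c1) > abs(w - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)

print(sorted(g1), sorted(g2))  # the two groups emerge without any labels
```

The two natural groups are recovered without anyone ever telling the algorithm what a “cat” or a “lion” is; it merely finds structure in the numbers.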
Supervised Learning requires not only data but also labels that a human has assigned to the inputs. It includes both classification and regression. In classification problems, the category to which something belongs needs to be identified; for instance, classifying pictures of animals according to their species. Regression produces a relationship between inputs and outputs and can be used to forecast how the outputs will change when the inputs evolve. Systems that predict stock prices or electricity demand are examples of regression problems.
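A minimal regression example, with invented data, makes the idea concrete: a line is fitted to labelled input–output pairs and then used to forecast the output for an unseen input:

```python
# A toy supervised-learning example: simple linear regression fitted with
# the closed-form least-squares formula. All figures are invented.

x = [10, 20, 30, 40]  # inputs, e.g. temperature
y = [25, 45, 65, 85]  # human-provided labels; here exactly y = 2*x + 5

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
num = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
den = sum((xi - mean_x) ** 2 for xi in x)
slope = num / den
intercept = mean_y - slope * mean_x

print(slope, intercept)        # recovers 2.0 and 5.0 from the labelled data
print(slope * 50 + intercept)  # forecast for the unseen input 50: 105.0
```

Once the relationship is learned from labelled examples, the system can extrapolate to inputs it has never seen, which is exactly how demand or price forecasting works in principle.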
Reinforcement Learning can create a basic strategy for a problem and update it iteratively based on “rewards” for successful outcomes and “punishments” for unsuccessful ones.
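This update cycle can be sketched with a toy two-action agent; the reward scheme and all parameters are invented for illustration:

```python
# A toy reinforcement-learning agent facing two actions. It starts with a
# basic (uniform) strategy and updates its value estimates iteratively
# from rewards (+1) and punishments (-1). All numbers are invented.

import random

random.seed(0)  # make the run reproducible

values = {"a": 0.0, "b": 0.0}  # estimated value of each action
alpha = 0.1                    # learning rate

def reward(action):
    # Action "a" is rewarded 90% of the time; "b" only 10% of the time.
    lucky = random.random() < 0.9
    if action == "a":
        return 1 if lucky else -1
    return -1 if lucky else 1

for _ in range(500):
    if random.random() < 0.1:              # explore occasionally
        action = random.choice(["a", "b"])
    else:                                  # otherwise exploit the best estimate
        action = max(values, key=values.get)
    # Move the estimate towards the observed outcome.
    values[action] += alpha * (reward(action) - values[action])

print(values)  # the agent has learned that "a" is the better action
```

The strategy is not programmed in advance: it emerges from the stream of rewards and punishments, which is the defining feature of reinforcement learning.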
Natural Language Processing (NLP) allows machines to extract knowledge directly from written sources. This discipline is applied within text mining or machine translation, as well as other techniques such as sentiment analysis, which identifies the affective state associated with the text. Along these lines, machines are also able to recognise and display emotions. For instance, we already have robots that can mimic the facial expression of the human they are interacting with (Fasel and Luettin 2003).
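At its simplest, sentiment analysis can be sketched as a lexicon lookup; the word lists below are invented, and real systems use far richer models:

```python
# A toy lexicon-based sentiment analysis sketch: the text is scored by
# counting words from hand-made positive/negative lists (both invented here).

positive = {"good", "great", "love", "wonderful"}
negative = {"bad", "terrible", "hate", "awful"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("What a wonderful day, I love it"))  # positive
print(sentiment("This is terrible"))                 # negative
```

The point worth noting is that the “affective state” is identified by pure counting: nothing in the program feels anything about the text it scores.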
Another field in which machines have been extremely successful is machine perception, where inputs from sensors are analysed to deduce aspects of reality. This includes application fields such as computer vision or speech recognition. Until very recently, it was believed that machines could never match the accuracy with which human beings recognize fellow humans; now they routinely outperform humans at these tasks. This seems to be a trend: machines are able to solve tasks previously reserved for humans. Deep Blue beat Kasparov, and we no longer think twice about the superiority of machines in a game that was previously thought to reflect the peak of human intellect.
Other very interesting examples of AI performing previously unthinkable tasks include conversational robots, which converse with humans either for a specific goal (for instance, an online customer support system) or for the general goal of entertaining. Some believe that such chatbots could be close to passing the Turing test (that is, being indistinguishable from a human conversation partner).
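A goal-directed chatbot can, at its crudest, be sketched as ELIZA-style pattern matching. The rules below are invented; note that the program manipulates symbols without any grasp of their meaning:

```python
# A minimal ELIZA-style chatbot sketch: canned templates are triggered by
# regular-expression patterns. The rules are invented for illustration; the
# program produces fluent replies without understanding anything.

import re

rules = [
    (r"\bI feel (.*)", "Why do you feel {}?"),
    (r"\bI am (.*)", "How long have you been {}?"),
    (r".*", "Please tell me more."),  # fallback when nothing matches
]

def respond(text):
    for pattern, template in rules:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())

print(respond("I feel lonely"))  # "Why do you feel lonely?"
print(respond("hello"))          # "Please tell me more."
```

Even this trivial mechanism can momentarily feel conversational, which anticipates the distinction drawn in the next section between manipulating symbols and grasping meaning.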
There has also been a rise in art that relies on programmed processes to generate creations (Colton 2012; George Farouk 2019), including paintings or music composed by AI. Along the same lines, there are “joke engines” that build jokes that are often rated as funny by humans.
Some transhumanist authors, such as Ray Kurzweil, take these examples to be of particular significance. Such authors maintain that machines will eventually learn all the complex behaviours displayed by humans (Kurzweil 1999). What would be the impact of this on our understanding of being made in the image and likeness of God?
A first step in the reflection would be to discern more precisely what the tasks are that have been mastered by the machines.
3.1 AI as sophisticated mechanics
While AI is mechanical in nature, it can be considered very sophisticated. Yet subjective experiencing, or in Ricoeur’s term ‘interiority’, is missing. After all, perfecting the rules of grammar or storing definitions is not the same thing as grasping meaning. When we use language, we do not merely exchange symbols; we do so to communicate some meaning. That is, a machine that correctly manipulates symbols (as in John Searle’s Chinese room experiment) is not equivalent to a human speaking that language. Automated translators that have achieved striking levels of performance, and could very soon perform better than a human translator, are not able to understand the texts they produce. Similarly, when “empathetic robots” convey empathy through facial expressions, it does not necessarily mean that they experience such feelings of compassion. Being able to generate art is not equivalent to experiencing it.
Some refer to this as weak AI versus strong AI, where the former solves specific problems and the latter would possess interiority. It should be noted that this is related, but not equivalent, to the distinction between specific and general AI: specific AI is designed to solve a particular problem, while general AI can tackle any given task that a human can master. However, being able to perform any task is not equivalent to possessing interiority.
This interiority materializes in qualia at the lowest level (the subjective qualities of experience, such as the redness of a poppy) and includes emotion, understanding or volition – if volition is the key, then free will could be understood as the essence of Imago Dei.
Intentionality is always grounded in a context. This point has also been clearly stated by Cantwell Smith in his recently published book The Promise of Artificial Intelligence (Smith 2019), where he talks about the “aboutness” or reference that is implied by any intelligent action.
If interiority or aboutness, and not rationality, constitutes the core of the Imago Dei, then several conclusions follow. The first is that the focus on abilities, at least quantitatively speaking, is misguided. If this is the case, it would be necessary to revisit our understanding of animal dignity: although animals do not enjoy rationality – at least not to the same degree – they do possess some form of interiority. This opens the door to understanding the Imago Dei in terms of gradation, with animals participating in the image of God to a lesser extent. It should be noted that viewing the Imago Dei through the lens of gradation could be theologically dangerous if applied to humanity; I previously discussed this problem in light of the functional interpretation of the Imago Dei and its implications for disabled persons.
The focus on interiority points to what AI is missing. However, interiority pertains to a domain that cannot be directly accessed. In addition, the existence of simulation methods makes it possible to encounter something that lacks interiority but appears to behave in the same manner as a person who has it. As discussed above, it is conceivable that conversational robots or chatbots will soon be able to pass the Turing test (that is, to appear human from a human perspective). For many authors, including a long list of transhumanists, passing the Turing test is equivalent to being conscious. However, this is clearly not the case: drafting translations efficiently is not equivalent to grasping the meaning of a text, in the same manner that mimicking the facial expression of a human is not equivalent to true empathy. As I argued in Lumbreras (2017b), the distinction between essence and appearance is key in this context. Machine Learning is able to mechanically learn patterns, and these patterns can be anything – from what the next stock price will be to what will trick a human into believing that a chatbot is actually a human. The machine learns to deceive, so that it will be capable of displaying any external behaviour that the human judge associates with interiority. We can illustrate this by thinking of an AI as a cockatoo: it can be trained to articulate words. After extensive training, it can learn to utter, in perfectly comprehensible language, sentences like ‘I love you’ or ‘I miss you’. However, we would not be fooled into thinking that the animal understands or feels what it is saying. We need a different criterion to evaluate the presence of interiority.
In Lumbreras (2017b), I proposed emergence as a key idea for navigating this question. When the cockatoo articulates the sentence ‘I missed you’, that sentence has been learned by imitation. The animal has not previously learnt lexicon and grammar – as a human baby would – and then perfected its skills until it could express the deep feeling of missing its owner. The behaviour was imposed; it did not emerge. Along the same lines, when a robot imitates the facial expression of a person, this behaviour has been carefully crafted by a programmer, or has been learned from an ML problem whose target was the approval of the human. It did not emerge from feelings of empathy and genuine care. Thus, understanding how each external behaviour (appearance) emerges is useful for discerning whether it is reasonable to assume that interiority exists, although, as explained below, certainty about this matter is out of our reach.
We can infer several implications from the discussion above. Simulated reality is ontologically different from reality as such. If the emergence of novel properties is key, then this difference lies in what David Chalmers defines as strong emergence (Chalmers 2002). Let us recall that weak emergence can be simulated from lower-level properties, while strong emergence cannot. It should be noted that the definition of strong emergence is in part subjective, as the ability to identify the simulated emergent properties depends on the data that we input into the simulation.
When we appreciate interiority as the key to understanding the Imago Dei, intellectual abilities become irrelevant: what makes us human is not playing chess well but enjoying the strategizing; not writing poetry but being able to experience the feelings that it evokes. This is a deep challenge to how we understand our personal worth, which is linked to productivity in our current societies. Moreover, this definition speaks to the value of contemplation over action.
A related topic, and key for truly understanding the meaning of the term “created co-creator”, is creativity. A task that Adam could perform, and that not even the angels could undertake, was giving names to the creatures. If we assume that AI is mechanical in nature, how can we understand art and creativity in such a context? As explained in the previous section, AI-produced art is ubiquitous. For example, there are AI-generated paintings based on the particular style of a given artist, and symphonies have been composed by machines by following the rules of harmony and the given structure of a chosen genre (Zhan, Dai, and Huang 2019). Our understanding of creativity in science and technology has also been affected by these developments, as we now have theorem proofs that have been identified by computers (Paulson 1994) and code that improves itself (Yampolskiy 2015).
Our understanding of creativity has been immensely complicated by the success of AI systems. In order to remain consistent with the hypothesis that we are pursuing in this section, we could say that all those examples are mechanical in nature and not true displays of creativity, understood as the production of something radically new. In all the mentioned cases, the product of AI is due to rules that were ultimately fixed by a human programmer. However, the same could be said about most human art production, as only a few artists break the rules or find new forms of expression. Engineers and scientists, too, mostly work with known methods and tools, and so the space of creativity is reduced. Interiority can be a very useful concept here, reminding us that producing is not enough: true creativity has a purpose, and it is this purpose that ultimately distinguishes works of art from technological advances.
Another interesting ramification of this discussion is the possibility of building ethical machines. It is possible to embed ethical principles in machines, from relatively simple rules to structures that can weigh different outcomes and probabilities to form a consequentialist framework (Lumbreras 2017a). It is also possible, with current technology, to build machines that could help with decision-making, not only in a profit-maximizing or risk-minimizing sense (as they do nowadays), but also in a moral sense. Machine Learning predicts outcomes based on examples, so it is perfectly possible to build an algorithm that would aid in ethical decision-making, provided that sufficient data is given. These machines could be trained to support people in resolving difficult issues, both individual decisions and political matters. However, it is important to emphasise that Machine Learning outputs would provide general guidance rather than concrete decisions. If trained in a moral manner, Machine Learning could enhance our wisdom and ability to exert power over creation, thus rendering us, following a functional view of the Imago Dei, better bearers of God’s image.
We can also understand AI, if used properly, as an especially useful mechanical tool. AI could be understood as an extension of human cognition, with the potential to help us discover hidden patterns and thus make new science possible. In this sense, AI could be seen as improving our rational capacity, hence intensifying the Imago Dei in the traditional, Augustinian view.
3.2 AI with some form of interiority
In the previous section, we argued that AI was merely a more sophisticated form of mechanical calculation. However, it is also possible, as many techno-optimist and transhumanist authors believe, that machines will come to possess some form of interiority. For instance, Kurzweil proposes that machines will very soon exhibit consciousness, free will, and the ability to love, so that they will be more properly called spiritual machines (Kurzweil 1999). Many transhumanists, including Kurzweil, are patternists – a form of reductionism which states that the essence of reality is constituted by information. Thus, for instance, copying and transferring the patterns of the neural connections in the brain into a simulation would give us the equivalent functions of the brain, including consciousness and identity. Indeed, some have already laid out proposals on how to build a mind (Kurzweil 2012).
I would argue that current technology allows for systems that can mimic interiority but not possess it – as exemplified by the discussion of the cockatoo above. However, it would be perfectly reasonable to leave the door open to other, different technologies that might allow for the emergence of interiority, including quantum computers. Maybe the technology will be different, but given that we have already seen how AI first matches and then outperforms humans in what seemed to be uniquely human tasks, how can we be sure that AI will never develop some form of interiority?
As such machines would be made in our image, we should ponder whether they would bear the image of God indirectly, by being made in the image of beings who themselves bear it. In this case, the machines would share the Imago Dei – though, depending on how they are built, perhaps only in an obscured, mediated manner.
If machines were to develop an interiority, it is also possible that they would surpass us not only in cognitive abilities, and in terms of stewarding creation wisely, but also in their relationship with God. Natasha Vita-More, for instance, argues that machines would be free of the vulnerability and scarcity that is bound to our physicality. This would allow machines to love without jealousy and be generous without fear (Lumbreras 2019). Dualistic views, according to which the body is seen as a source of sin, would support the idea that an AI free of physicality should find it easier to express moral goodness. This could also lead to spiritual machines forming deeply intense relationships with God, perhaps even more profound than those of human beings, such that they better express the relational interpretation of the Imago Dei.
Hefner has worked extensively on developing and applying the notion of the created co-creator. In his dynamic anthropology, humans are created by God to be co-creators in the world that God has purposefully brought into being (Hefner 2019). This would be the key to understanding the Imago Dei: in the same way that God creates and sustains the universe, human beings participate in this creation. In the scenario where machines possess interiority, they would also be participants in the process of creation. Humanity would then lose its uniqueness by performing the most radical act of creativity we could conceive: creating something comparable to ourselves.
The possibility of spiritual machines leads to many questions that should be addressed further: How do we want these machines to be? If machines are created in humanity’s image, how do we want to design such machines? How should we relate to them, in terms of cooperation or competition?
The development of AI raises questions concerning the traditional understanding of Imago Dei in Christian anthropology. Two interpretations of the advances of AI are possible: either they merely reflect mechanical processes, albeit sophisticated, or they hint at the possibility that all human capabilities can someday be attained by machines.
If we take the first position, then purely intellectual activities should be de-emphasized in favour of subjective experience. The importance of rationality, as emphasised in the Augustinian view of the Imago Dei, should be de-prioritized: interiority would matter rather than external function. In addition, the fact that animals possess interiority to a certain extent makes it necessary to revisit our relationship with them. The attempt to establish a gradation in the Imago Dei, with animals participating in it to a lesser extent depending on their capabilities, is problematic. This has also been noted by authors critiquing the functional understanding of the Imago Dei from the perspective of disability theology. In addition, if an AI were trained in moral decision-making, it could aid humanity in caring for creation, and hence support humanity to better reflect the Image of God (in the functional sense).
However, if we affirm that the developments of AI demonstrate that any human capability is attainable by machines, then our unique role in creation will need to be shared with the spiritual machines. The machines would share the Imago Dei, even according to the relational interpretation, at least to a degree depending on their specific characteristics. If such machines were one day successfully built, humans would have performed the most radical act of creativity, fulfilling their role as, in Hefner’s terminology, created co-creators. The machines, in turn, would join us as a second generation of created co-creators. This possibility opens new questions regarding how these new beings should be designed and how we should relate to them. Thus, the dialogue between the concept of Imago Dei and recent developments in AI has deep implications for Christian anthropology, morality, and our relationship with technology, and deeper reflection on this topic can lead to fruitful outcomes.
Augustine of Hippo. 1991. The Trinity. Translated by Edmund Hill, edited by John E. Rotelle. Hyde Park, NY: New City Press.
Brunner, Emil. 2014. The Christian Doctrine of Creation and Redemption: Dogmatics: Vol. II. Eugene, OR, USA: Wipf and Stock Publishers.
Chalmers, David J. 2002. “Varieties of Emergence.” Preprint, Department of Philosophy, University of Arizona.
Colton, Simon. 2012. “The Painting Fool: Stories from Building an Automated Painter.” In Computers and Creativity, edited by Jon McCormack and Mark d’Inverno, 3-38. Berlin and Heidelberg: Springer.
De Aquino, Thomas, and Pietro Caramello. 1962. Summa Theologiae. Madrid, Spain: Editorial Católica.
Deland, Jane S. 1999. “Images of God Through the Lens of Disability.” Journal of Religion, Disability and Health 3: 47-81.
Eiesland, Nancy L. 1994. The Disabled God: Toward a Liberatory Theology of Disability. New York, USA: Abingdon Press.
Fasel, Beat, and Juergen Luettin. 2003. “Automatic Facial Expression Analysis: A Survey.” Pattern Recognition 36: 259-275.
George Farouk, David. 2019. “AI Music Composer Using Machine Learning Technology.” October University for Modern Sciences and Arts Repository.
Haraway, Donna. 2006. “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late 20th Century.” In The International Handbook of Virtual Learning Environments, edited by Joel Weiss, Jason Noland, and Jeremy Hunsinger, 117-158. Berlin and Heidelberg: Springer.
Hefner, Philip. 2019. “Biocultural Evolution and the Created Co-Creator.” In Science and Theology, edited by Ted Peters, 174-188. New York: Routledge.
Kurzweil, Ray. 1999. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York, USA: Penguin.
Kurzweil, Ray. 2012. How to Create a Mind: The Secret of Human Thought Revealed. New York, USA: Penguin.
Lumbreras, Sara. 2019. “‘El posthumano podría ser más refinado emocionalmente que las personas actuales’: Una entrevista con Natasha Vita-More.” Razón y Fe 280: 255-261.
Lumbreras, Sara. 2017a. “The Limits of Machine Ethics.” Religions 8: 100.
Lumbreras, Sara. 2017b. “Strong Artificial Intelligence and Imago Hominis: The Risks of a Reductionist Definition of Human Nature.” In Issues in Science and Theology: Are We Special? edited by Michael Fuller, Dirk Evers, Anne Runehov, and Knut-Willy Saether, 157-168. Berlin and Heidelberg: Springer.
McGrath, Alister E. 2012. Historical theology: An Introduction to the History of Christian Thought. Chichester, West Sussex: John Wiley and Sons.
Middleton, Richard J. 1994. “The Liberating Image? Interpreting the Imago Dei in Context.” Christian Scholar’s Review 24: 8-25.
Paulson, Lawrence C. 1994. Isabelle: A Generic Theorem Prover. Berlin and Heidelberg: Springer Science and Business Media.
Ricoeur, Paul, and George Gingras. 1961. “‘The Image of God’ and the Epic of Man.” CrossCurrents 11: 37-50.
Smith, Brian Cantwell. 2019. The Promise of Artificial Intelligence: Reckoning and Judgment. Cambridge, MA: MIT Press.
Yampolskiy, Roman V. 2015. “Analysis of Types of Self-Improving Software.” In Artificial General Intelligence, edited by Jordi Bieger, Ben Goertzel, and Alexey Potapov, 384-393. Berlin and Heidelberg: Springer.
Zhan, Hao, Lingfeng Dai, and Zhiwei Huang. 2019. “Deep Learning in the Field of Art.” Proceedings of the 2019 International Conference on Artificial Intelligence and Computer Science, 717-719.
Cite this article
Lumbreras, Sara. 2022. “Does Artificial Intelligence Change Our Understanding of the Imago Dei?” Theological Puzzles (Issue 5). https://www.theo-puzzles.ac.uk/2022/01/13/lumbreras/.