The rise of artificial intelligence (AI) raises questions not only about technology and the vast array of possibilities it brings, but also about morality, ethics and philosophy. The ushering in of this new technology has implications for healthcare, the law, the military, the nature of work, politics, and even our own identity – what makes us human and how we achieve our sense of self.
“AI Morality” (Oxford University Press, 2024), edited by the British philosopher David Edmonds, is a collection of essays from a ‘philosophical task force’ exploring how AI will revolutionize our lives and the moral dilemmas it will create, painting a compelling picture of the reasons to be cheerful and the reasons to worry. In this excerpt, Muriel Leuenberger, a postdoctoral researcher in the ethics of technology and AI at the University of Zurich, focuses on how AI is already shaping our identities.
Her essay, titled “Should you let AI tell you who you are and what to do?”, explains how the machine learning algorithms that dominate today’s digital platforms – from social media to dating apps – may know more about us than we do. But can we trust them to make the best decisions for us, she asks, and what would that mean for our agency?
Your phone and its apps know a lot about you: whom you talk to and spend time with, where you go, what music, games and movies you like, what you look like, what news articles you read, whom you find attractive, what you buy with your credit card and how many steps you take. This information is already being exploited to sell us products, services or politicians. Online trails allow companies like Google or Facebook to infer your political views and consumer preferences, whether you are a thrill seeker, an animal lover, or a small business owner, how likely you are to become a parent soon, and even your chances of suffering from depression or insomnia.
With the use of artificial intelligence and the further digitalization of human lives, it is no longer inconceivable that AI will come to know you better than you know yourself. The personal user profiles that AI systems generate could describe users’ values, interests, character traits, biases or psychological disorders more accurately than the users themselves could. Already, technology can provide personal information that individuals do not yet know about themselves. Yuval Harari exaggerates, but makes a similar point, when he claims that it will become rational and natural to choose the partners, friends, jobs, parties and homes suggested by AI. AI will be able to combine vast amounts of personal information about you with general information about psychology, relationships, work, politics and geography, and it will be better at simulating possible scenarios involving those choices.
So it seems that an AI that can tell you who you are and what to do would be great – not just in extreme cases, à la Harari, but, more prosaically, for everyday recommendation systems and digital profiling. I would like to suggest two reasons why this is not the case.
Trust
How do you know whether you can trust an AI system? How can you be sure it really knows you and makes good recommendations for you? Imagine a friend telling you that you should go on a date with his cousin Alex because the two of you would be a perfect match. When deciding whether to meet Alex, you would think about how trustworthy your friend is. You might consider his honesty (is he currently drunk and not thinking straight?), his competence (how well does he know you and Alex, and how good is he at judging romantic compatibility?) and his intentions (does he want you to be happy, to make fun of you, or to ditch his boring cousin for a night?). To see whether you should follow your friend’s advice, you might gently question him: why does he think you would like Alex, and what does he think the two of you have in common?
This is complicated enough. But judgments about trusting an AI are even more complicated. It is difficult to understand what an AI really knows about you and how reliable its information is. Many AI systems have proven to be biased – they have, for example, reproduced racial and sexist biases from their training data – so we would be wise not to trust them blindly. Normally, we cannot ask an AI to explain its recommendations, and it is difficult to assess the reliability, competence, and intentions of its developers. The algorithms behind an AI’s predictions, characterizations, and decisions are typically proprietary and not accessible to the user. And even if this information were available, it would require a high degree of expertise to understand it. How do those purchase records and social media posts translate into character traits and political leanings? Indeed, because of the much-discussed opacity, or “black box” nature, of some AI systems, even those skilled in computer science may not be able to fully understand how they work. The process by which an AI generates an output is largely self-directed (meaning it generates its own strategies without following strict rules designed by its developers) and difficult or almost impossible to interpret.
Create yourself!
Even if we had reasonably reliable AI, a second ethical concern would remain. An AI that tells you who you are and what to do rests on the idea that your identity is something you can discover – information that you, or an AI, can access. On this view, who you really are and what you should do with your life can be uncovered through statistical analysis, some personal data, and facts about psychology, social institutions, relationships, biology and economics. But this view misses an important point: we also choose who we are. You are not a passive subject of your identity; it is something you actively and dynamically create. You develop, nurture and shape your identity. This self-creationist facet of identity has been central to existentialist philosophy, as exemplified by Jean-Paul Sartre. Existentialists deny that humans are defined by any predetermined nature or “essence.” To exist without an essence means you can always become different from who you are today. We are constantly creating ourselves and must do so freely and independently. Within the limits of certain facts – where you were born, how tall you are, what you said to your friend yesterday – you are radically free and morally obliged to construct your own identity and define what is meaningful to you. Crucially, the goal is not to discover the one right way to be, but to choose your own individual identity and take responsibility for it.
AI can give you an external, quantified perspective that acts as a mirror and suggests courses of action. But you need to stay in charge and take responsibility for who you are and how you live your life. An AI may tell you many facts about yourself, but it is your job to figure out what they mean to you and how you let them define you. The same goes for actions: your actions are not just a means of pursuing well-being. Through your actions you choose what kind of person you are. Blindly following AI means giving up the freedom to create yourself and abdicating responsibility for who you are. That would amount to a moral failure.
Finally, relying on AI to tell you who you are and what to do can erode the skills needed for independent self-creation. If you constantly use an AI to find the music, career, or political candidate you like, you might eventually forget how to do this yourself. AI can deskill you not only on a professional level, but also in the deeply personal pursuit of self-creation. Choosing well in life and constructing an identity that is meaningful and makes you happy is an achievement. By outsourcing this task to an AI, you gradually lose responsibility for your life and, ultimately, for who you are.
A very modern identity crisis
Maybe sometimes you wish someone would tell you what to do or who you are. But, as we have seen, this comes at a cost. It is difficult to know if and when to trust AI profiling and recommendation systems. More importantly, outsourcing decisions to AI may keep you from meeting the moral demand to create yourself and take responsibility for who you are. Along the way, you can lose the skills of self-creation, calcify your identity, and cede power over your identity to corporations and governments. These concerns carry particular weight in cases involving the most consequential decisions and the most central features of your identity. But even in more mundane cases, it would be good to set recommendation systems aside from time to time and be more active and creative in selecting movies, music, books or news. That, in turn, requires research, risk-taking and self-reflection.
Of course, we often make bad choices. But even this has an upside: by exposing yourself to influences and environments that don’t perfectly match who you currently are, you develop. Moving to a city that makes you unhappy can disrupt your usual rhythm of life and, for example, prompt you to find a new hobby. Constantly relying on AI recommendation systems, by contrast, can calcify your identity. This is not a necessary feature of recommendation systems: in theory, they could be designed to broaden the user’s horizons rather than to maximize engagement by showing customers what they already like. In practice, they don’t work that way.
This calcifying effect is amplified when AI profiling becomes a self-fulfilling prophecy: it can slowly turn you into what the AI predicted you would be and perpetuate whatever characteristics it picked up. Through the products it recommends and the ads, news and other content it serves, you become more likely to consume, think and act in the way the AI system initially deemed fitting for you. The technology can gradually nudge you until you evolve into the person it originally took you to be.
This excerpt, written by Muriel Leuenberger, has been edited for style and length. Reprinted with permission from “AI Morality,” edited by David Edmonds, published by Oxford University Press. © 2024. All rights reserved.