Philosophy can help us navigate the digital age

The philosopher Charles Ess on how philosophy can help us find our way at a time when technology is changing everything, and create a world that’s worth living in. By Sonia Klug

Image by Craig Dennis

Charles Ess’s career spans 50 years of working with philosophy, specialising in digital ethics. He is a professor at the University of Oslo, an eminent expert on social robots, and has worked with professional associations to develop digital ethics guidelines. He is also part of a four-year Swedish project, sponsored by the Wallenberg Foundations, to help analyse the ethical dimensions of the changes digital technologies – and in particular AI – will bring.


With digital technologies changing our lives at such a fast pace, he feels we should take some time to reflect and think critically about how technological developments are changing us as individuals and as a society.

He thinks philosophy has the potential not only to help us come to terms with the new realities of digital progress as individuals, but also to inform tech ethics, making sure that our well-being stays at the centre of new inventions. He says, ‘I am convinced that philosophy has enormous resources to offer: not only for making sense of what is happening around us, but also responding thereto in ways that are more productive than harmful. Or, in the terms of virtue ethics, to find ways of leading good lives of contentment, harmony, and flourishing. Especially virtue ethics and existentialist philosophy address us as mortal, and thereby vulnerable, human beings who must first of all try to come to grips with our mortality and then figure out how we might find or create meaning in our lives. As more and more of those around us drop out of traditional religious beliefs and frameworks for meaning, these approaches offer important alternatives to nihilism and relativism. At the same time, these approaches harmonise nicely with many of the core elements of traditional religious frameworks.’

Charles observes that organisations and computer scientists increasingly look to philosophy to fight for a more human-centric future of technology. For example, the EU and the Institute of Electrical and Electronics Engineers (IEEE) are currently developing ethical guidelines for human-centric AI, as is the AI4People initiative. He also sees students becoming more interested in these topics, as they try to force action to combat the looming catastrophes of climate change.

Charles feels that tech’s lack of ethics is becoming a widely shared concern, in the press as well as at grassroots level. He says, ‘We underestimate the power NGOs and local communities have. For example, local communities such as maker spaces or science outreach projects help to educate people, giving them the tools for critical thinking. There seems to be a hunger to understand all of this, leading to, for example, philosophy cafés popping up here and there. I’m hoping that this will help catalyse some institutional manoeuvres and developments. There are definitely sparks of hope.’


Until recently, the main ethics framework deployed by computer scientists was the utilitarian cost/benefit approach. But he feels the discussion is moving away from this pragmatic but limited approach and towards virtue ethics. He pinpoints the change to the discourse about autonomous vehicles, in particular the question of whose lives a driverless car should be programmed to prioritise: in an accident, should a car try to save the passengers or pedestrians? Should it swerve to avoid a baby, even if that means colliding with an older person? Utilitarianism, which focuses on the greatest good for the greatest number of people, proved too limited to come to meaningful conclusions.

Now the language of human rights is often used in this context. ‘There is a much stronger emphasis on deontology – human rights and respecting human autonomy and dignity,’ says Charles. Deontology emphasises human rights and the autonomy of the individual: the assumption that human beings are free and have a right to determine how they want to live their lives.

But the school of thought that has been most influential on tech ethics recently is virtue ethics. First developed by Aristotle, it emphasises the moral compass of a person rather than specific actions or rules.

It’s a character-based approach to morality, and more holistic, as it takes intentions into account rather than relying on more rigid, rule-based approaches. Its ideas inform ethics guidelines, from those of small organisations to the European Commission’s “Ethics Guidelines for Trustworthy AI”.

However, Charles thinks there are many other schools of thought we should take into consideration, saying, ‘There is a wealth of global traditions we have to pay attention to – Confucian thought or the Abrahamic religions – but they [computer scientists and tech ethicists] find that we can go through a lot of that by virtue ethics, starting with Aristotle. Recently, I have also seen more feminist ethics, and in particular ethics of care, within engineering communities. When I was in graduate school, feminist ethics was not considered ethics, so this is, I think, quite extraordinary.’

He thinks the works of Plato, Aristotle, Kant, Simone de Beauvoir, and Hannah Arendt are also worth studying to make sense of the world of today.


But critical thinking is also crucial for defining our role as humans in this brave new world. Does he think the ability of machines to do human things – communicating, practising medicine and, some argue, even making art – changes what it means to be human? He thinks this very much depends on how we view AI and what we believe about it. Recognising AI’s limits can help us define what is unique about being human. He says, ‘What can and cannot be replicated in our devices points to a set of capacities, virtues, central experiences – starting with empathy and care, as well as love – that are deeply human.’

On the other hand, he worries that the commonly held but erroneous belief that computers could replace or be better than the human brain damages our sense of self. He says, ‘I worry about the wide acceptance of such claims [that computers could replace the human brain], which the great majority of us who specialise in these matters do not find tenable. The problem is that even if they’re false – our accepting that they are true, or may become true, results in what I think of as an impoverished sense of being human – one that is less than free in a strong sense, to begin with.’

Rather than constantly scrambling to keep up with technology, it’s worth taking some time to think about all these questions. After all, Charles asks, ‘What’s the good of all of this knowledge, economic growth, etc. if we don’t have much of an informed and reflective clue about how to live good and meaningful lives as human beings?’