
What does it mean to be human?

John Wyatt is a doctor, author and research scientist. His concern is the ethical challenges that arise with technologies like artificial intelligence and robotics. On Tuesday this week (11th March 2019) he gave a talk called ‘What does it mean to be human?’ at the Wesley Methodist Church in Cambridge.

To a packed audience, he pointed out how interactions with artificial intelligence and robots will never be the same as the type of ‘I – you’ relationships that occur between people. He emphasised the important distinction between ‘beings that are born’ and ‘beings that are made’ and how this distinction will become increasingly blurred as our interactions with artificial intelligence become commonplace. We must be ever vigilant against the use of technology to dehumanise and manipulate.

I can see where this is going. The tendency for people to anthropomorphise is remarkably strong - ‘the computer won’t let me do that’, ‘the car has decided not to start this morning’. Research shows that we can even attribute intentions to animated geometrical shapes ‘chasing’ each other around a computer screen, let alone to cartoons. Just how difficult is it going to be not to attribute the ‘human condition’ to a chatbot with an indistinguishably human voice, or to a realistically human robot? Children are already being taught to say ‘please’ and ‘thank you’ to devices like Alexa, Siri and Google Home – maybe a good thing in some ways, but …

One message I took away from this talk was a suggestion for a number of new human rights in this technological age: (1) the right to cognitive liberty (to think whatever you want), (2) the right to mental privacy (without others knowing), (3) the right to mental integrity, and (4) the right to psychological continuity – the last two concerning the preservation of ‘self’ and ‘identity’.

A second message was to consider which country was most likely to make advances in the ethics of artificial intelligence and robotics. His conclusion – the UK. That reassures me that I’m in the right place.

See more of John’s work, such as his essay ‘God, neuroscience and human identity’, at his website johnwyatt.com.

John Wyatt



How do we embed ethical self-regulation into Artificially Intelligent Systems (AISs)? One answer is to design architectures for AISs that are based on ‘the Human Operating System’ (HOS).

Theory of Knowledge

A computer program, or machine learning algorithm, may be excellent at what it does, even super-human, but it knows almost nothing about the world outside its narrow silo of capability. It will have little or no capacity to reflect upon what it knows or the boundaries of its applicability. This ‘meta-knowledge’ may be in the heads of its designers, but even the most successful AI systems today can do little more than what they are designed to do.

Any sophisticated artificial intelligence, if it is to apply ethical principles appropriately, will need to be based on a far more elaborate theory of knowledge (epistemology).

The epistemological view taken in this blog is eclectic, constructivist and pragmatic. It attempts to identify how people acquire and use knowledge to act with the broadly based intelligence that current artificial intelligence systems lack.

As we interact with the world, we each individually experience patterns, receive feedback, make distinctions, learn to reflect, and make and test hypotheses. The distinctions we make become the default constructs through which we interpret the world and the labels we use to analyse, describe, reason about and communicate. Our beliefs are propositions expressed in terms of these learned distinctions and are validated via a variety of mechanisms that themselves develop over time and can change in response to circumstances.
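To make that idea concrete for myself, here is a minimal Python sketch of distinctions as named constructs and beliefs as propositions validated by a mechanism that can itself change over time. The class names and fields (Distinction, Belief, confidence) are my own illustrative inventions, not anything this view prescribes.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Distinction:
    """A learned construct used to label and interpret experience."""
    name: str
    examples: List[str] = field(default_factory=list)  # experiences that shaped the distinction

@dataclass
class Belief:
    """A proposition expressed in terms of learned distinctions."""
    proposition: str
    distinctions: List[Distinction]
    validate: Callable[[], bool]   # validation mechanism; in people this itself changes over time
    confidence: float = 0.5

    def revise(self) -> None:
        """Raise or lower confidence according to whatever the current validation mechanism says."""
        if self.validate():
            self.confidence = min(1.0, self.confidence + 0.1)
        else:
            self.confidence = max(0.0, self.confidence - 0.1)
```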

Reconciling Contradictions

We are confronted with a constant stream of contradictions between ‘evidence’ obtained from different sources – from our senses, from other people, our feelings, our reasoning and so on. These surprise us because they conflict with our default interpretations. When the contradictions matter (e.g. when they are glaringly obvious, interfere with our intent, or create dilemmas with respect to some decision), we are motivated to achieve consistency. This we call ‘making sense of the world’, ‘seeking meaning’ or ‘agreeing’ (in the case of establishing consistency with others). We use many different mechanisms for dealing with inconsistencies, including testing hypotheses, reasoning, intuition and emotion, ignoring and denying.
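A toy sketch of the same point, assuming we can caricature ‘evidence sources’ as simple labelled readings: it flags the sources that conflict with a default interpretation and decides whether the contradiction matters enough to resolve. The function names, example strings and thresholds are invented purely for illustration.

```python
from typing import Dict, List

def find_contradictions(evidence: Dict[str, str], default_interpretation: str) -> List[str]:
    """Return the sources whose evidence conflicts with the default interpretation."""
    return [source for source, reading in evidence.items() if reading != default_interpretation]

def contradiction_matters(conflicting_sources: List[str], interferes_with_intent: bool) -> bool:
    """Worth resolving if the conflict is glaring (several sources) or blocks what we intend to do."""
    return len(conflicting_sources) > 1 or interferes_with_intent

# Example: our senses and another person both disagree with what we expected.
evidence = {"senses": "the door is locked", "other person": "the door is locked", "memory": "the door is open"}
conflicts = find_contradictions(evidence, default_interpretation="the door is open")
if contradiction_matters(conflicts, interferes_with_intent=True):
    print("Try to restore consistency (re-test, reason, or revise the belief):", conflicts)
```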

Belief Systems

In our own reflections and in interactions with others, we are constantly constructing mini-belief systems (i.e. stories that help us orientate, predict and explain, to ourselves and to others). These mini-belief systems are shaped and modulated by our values (i.e. beliefs about what is good and bad) and are generally constructed as mechanisms for achieving our current and future intentions. These in turn affect how we act on the world.
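Again purely as a sketch, one could caricature a mini-belief system as a story scored against weighted values, with the best-fitting story chosen in service of current intentions. Names such as MiniBeliefSystem and value_fit are hypothetical, and the scoring rule is only one crude way of modelling ‘shaped and modulated by our values’.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class MiniBeliefSystem:
    """A small story used to orientate, predict and explain."""
    story: str
    predictions: List[str]
    value_fit: Dict[str, float]   # how well the story honours each named value, 0..1

def choose_story(candidates: List[MiniBeliefSystem], values: Dict[str, float]) -> MiniBeliefSystem:
    """Prefer the story that best serves the currently weighted values (and so the current intentions)."""
    return max(candidates, key=lambda s: sum(values.get(v, 0.0) * fit for v, fit in s.value_fit.items()))
```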

Human Operating System

Understanding how we form expectations; identify anomalies between expectations and current interpretations; generate, prioritise and generally manage intentions; create models to predict and evaluate the consequences of actions; manage attention and other limited cognitive resources; and integrate knowledge from intuition, reason, emotion, imagination and other people is the subject matter of the Human Operating System. This goes well beyond the current paradigms of machine learning and takes us on a path to the seamless integration of human and artificial intelligence.
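To hint at what such an operating system might look like in code, here is a deliberately simple skeleton covering just three of the capabilities above: forming expectations, detecting anomalies between expectations and observations, and managing a small set of prioritised intentions under limited attention. It is an assumption-laden illustration of my own, not an implementation of any published HOS design, and every name in it is a placeholder.

```python
class HumanOperatingSystem:
    """Illustrative skeleton only: component names and logic are placeholders, not a real architecture."""

    def __init__(self):
        self.expectations = {}   # what we currently expect to observe
        self.intentions = []     # prioritised descriptions of what we are trying to do

    def detect_anomalies(self, observations: dict) -> dict:
        """Compare incoming observations with expectations and return the mismatches."""
        return {k: v for k, v in observations.items() if self.expectations.get(k) != v}

    def manage_intentions(self, anomalies: dict) -> None:
        """Promote intentions touched by anomalies; limited attention keeps only a few active."""
        self.intentions.sort(key=lambda intent: -sum(1 for topic in anomalies if topic in intent))
        self.intentions = self.intentions[:3]

    def act(self) -> str:
        """Pursue the top intention, or fall back to observing the world."""
        return self.intentions[0] if self.intentions else "observe"
```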
