

– A Changing World: so, what’s to worry about?

A World that can change – before your eyes!

I’ve been to a couple of good talks in Cambridge (UK) this week. First, futurist Sophie Hackford (formerly of Singularity University and Wired magazine) gave a fast-paced talk about a wide range of technologies that are shaping the future. If you don’t know about swarms of drones, low-orbit satellite monitoring, neural implants, face recognition for payments, high-speed trains and rocket transportation, then you need to, fast. I haven’t found a video of this very recent talk yet, but the one below from a year ago gives a pretty good indication of why we need to think through the ethical issues.

YouTube Video, Tech Round-up of 2017 | Sophie Hackford | CTW 2017, January 2018, 26:36 minutes

The Age of Surveillance Capitalism

The second talk is, in some ways, even more scary. We are already aware that the likes of Google, Facebook and Amazon are closely watching our every move (and hearing our every breath). And now almost every other company that is afraid of being left behind is doing the same thing. But what data are they collecting, and how are they using it? They use the data to predict our behaviour and sell it on the behavioural futures market. They are influencing not just our online behaviour but also how we act in the real world. For example, Pokémon Go was apparently an experiment originally dreamed up by Google to see if retailers would pay to host ‘monsters’ to increase footfall past their stores. The talk by Shoshana Zuboff was at the Cambridge University Law Faculty. Here is an interview she did on radio the same day.

BBC Radio 4, Start the Week, Who is Watching You?, Monday 4th February 2019, 42:00 minutes
https://www.bbc.co.uk/programmes/m0002b8l



How do we embed ethical self-regulation into artificial autonomous, intelligent systems (A/ISs)? One answer is to design architectures for A/ISs that are based on ‘the Human Operating System’ (HOS).

Theory of Knowledge

A simple computer program will know very little about the world, and will have little or no capacity to reflect upon what it knows or the boundaries of its applicability.

Any sophisticated A/IS, if it is to apply ethical principles appropriately, will need to be based on a far more elaborate theory of knowledge (epistemology).

The epistemological view taken in this blog is eclectic, constructivist and pragmatic. As we interact with the world, we each individually experience patterns, receive feedback, make distinctions, learn to reflect, and make and test hypotheses. The distinctions we make become the default constructs through which we interpret the world and the labels we use to analyse, describe, reason about and communicate. Our beliefs are propositions expressed in terms of these learned distinctions and are validated via a variety of mechanisms that themselves develop over time and can change in response to circumstances.

We are confronted with a constant stream of contradictions between ‘evidence’ obtained from different sources – from our senses, from other people, our feelings, our reasoning and so on. These surprise us when they conflict with our default interpretations. When the contradictions matter (e.g. when they are glaringly obvious, interfere with our intent, or create dilemmas with respect to some decision), we are motivated to achieve consistency. This we call ‘making sense of the world’, ‘seeking meaning’ or ‘agreeing’ (in the case of establishing consistency with others). We use many different mechanisms for dealing with inconsistencies – including testing hypotheses, reasoning, intuition and emotion, and simply ignoring or denying them.

In our own reflections and in interactions with others, we are constantly constructing mini-belief systems (i.e. stories that help us orientate, predict and explain to ourselves and others). These mini-belief systems are shaped and modulated by our values (i.e. beliefs about what is good and bad) and are generally constructed as mechanisms for achieving our current and future intentions. These in turn affect how we act on the world.
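To make the idea a little more concrete, here is a minimal sketch of what a ‘mini-belief system’ might look like as a data structure. All of the names here (Belief, MiniBeliefSystem, the contradiction test) are hypothetical illustrations, not part of any published HOS architecture: beliefs are propositions over learned distinctions, each carrying a source and a confidence; contradictions between sources are detected, and whether a contradiction ‘matters’ is judged against the system’s values.

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    """A proposition expressed in terms of learned distinctions."""
    proposition: str   # e.g. 'payment is secure'
    source: str        # e.g. 'senses', 'other people', 'reasoning'
    confidence: float  # 0.0 .. 1.0

@dataclass
class MiniBeliefSystem:
    """A small 'story' that orients, predicts and explains,
    shaped and modulated by values."""
    beliefs: list = field(default_factory=list)
    values: dict = field(default_factory=dict)  # what matters, and how much

    def add(self, belief: Belief) -> None:
        self.beliefs.append(belief)

    def contradictions(self):
        """Pair up beliefs whose propositions directly negate each other
        (a crude stand-in for real inconsistency detection)."""
        pairs = []
        for a in self.beliefs:
            for b in self.beliefs:
                if a is not b and b.proposition == f"not {a.proposition}":
                    pairs.append((a, b))
        return pairs

    def matters(self, pair) -> bool:
        """A contradiction matters when it touches something we value."""
        a, b = pair
        return any(v in a.proposition or v in b.proposition
                   for v in self.values)

# Evidence from two sources conflicts; the system flags it because
# it touches a valued topic, motivating the drive toward consistency.
mbs = MiniBeliefSystem(values={"payment": 0.9})
mbs.add(Belief("payment is secure", "other people", 0.6))
mbs.add(Belief("not payment is secure", "senses", 0.8))
conflicts = mbs.contradictions()
print(len(conflicts), mbs.matters(conflicts[0]))  # 1 True
```

A real A/IS would of course need far richer representations (structured propositions, provenance, revision mechanisms); the point of the sketch is only that the ingredients named above – distinctions, sources, values, and contradiction-driven sense-making – can be given an explicit computational shape.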
