
Robot Ethics – Video links


A collection of (mainly) YouTube videos that address issues in robot ethics.


Ethics and Artificial Intelligence
Dr Michael Wilby, Anglia Ruskin University

Dr Michael Wilby, Lecturer in Philosophy, provides an overview of normative ethics and its implications for artificial intelligence in the short, medium and long term.

YouTube video, Ethics and Artificial Intelligence – Dr Michael Wilby, robotethics.co.uk, July 2018, 25:48 minutes


What Can Machine Learning Do? Workforce Implications – Prof. Erik Brynjolfsson

This video sets the scene, mainly in the US but also globally. It is full of graphs and evidence to support the thesis that workers performing routine tasks have suffered badly as a result of automation. The distribution of wealth is rapidly becoming more unequal in a ‘winner takes all’ economy. Owners of capital are advantaged over people who have only their labour to sell, and robots are already affecting wage rates for human labour. We can counter this trend by using technology to augment human labour and by creating more jobs that capitalise on distinctively human capacities (empathetic health care, therapy).

YouTube video, What Can Machine Learning Do? Workforce Implications – Prof. Erik Brynjolfsson, The Artificial Intelligence Channel, May 2018, 56:27 minutes


UK Parliament – Artificial Intelligence Committee
Tuesday 12 December 2017 Meeting started at 3.34pm, ended 5.58pm

Part 1
Witnesses: Professor Rosemary Luckin, Professor of Learner Centred Design, University College London, Mr Miles Berry, Principal Lecturer, School of Education, Roehampton University, Mr Graham Brown-Martin, Author and entrepreneur.

Part 2
Witnesses: The Rt Hon. Matt Hancock MP, Minister of State, Department for Digital, Culture, Media and Sport, The Rt Hon. the Lord Henley, Parliamentary Under Secretary of State, Department for Business, Energy and Industrial Strategy

Video, UK Parliament, Artificial Intelligence Committee, parliamentlive.tv, 12 December 2017, part 1 followed by part 2, 2:23:48 hours


Robot Ethics in the 21st Century

with Alan Winfield and Raja Chatila

‘How can we teach robots to make moral judgements, and do they have to be sentient to behave ethically?’ Alan Winfield is Professor of Robot Ethics at the University of the West of England (UWE), working in the Bristol Robotics Lab. Raja Chatila is Professor at Pierre and Marie Curie University, Paris. Ethical issues identified include: the effects of automation on jobs, people’s emotional attachment to machines, anthropomorphism, responsibility for harm, and autonomous weapons.

YouTube video, Robot Ethics in the 21st Century – with Alan Winfield and Raja Chatila, The Royal Institution, November 2017, 35:25 minutes


Building Artificial Intelligence That is Provably Safe & Beneficial

Stuart Russell

Stuart Russell advocates probabilistic programming languages and demonstrates how they can tackle problems using a deeper understanding of the world than stand-alone machine learning algorithms can achieve. He is impressed by the physical capabilities of robots and by current AI’s ability to solve some problems, but he critiques the current state of the art as falling far short of human capabilities. Because conceptual breakthroughs are unpredictable, we need a formal framework to mitigate the risks posed by AI. He proposes that a robot should always consult a human about whether to go ahead with an action, and switch itself off if it does not get the go-ahead. There is a strong commercial and moral incentive to integrate human values into AI: AI systems will not be accepted unless they have the capacity to learn and align with human values and objectives, and always put these first.

YouTube video, Prof. Stuart Russell – Building Artificial Intelligence That is Provably Safe & Beneficial, The Artificial Intelligence Channel, September 2017, 1:05:13 hours


Deep learning in The Brain

Blake Richards

This lecture helps convince me that it will not be long (say 5–10 years) before we have the knowledge to build robot brains whose capacity to learn is comparable to people’s. It gives a detailed comparison between artificial neural networks and the way synapses in the brain change in response to experience. It is a technical talk, and its detail suggests that at least the theory will be in place in the relatively near future, even if the processing power may not be sufficient to run it within a mobile robot brain (as opposed to a remote supercomputer).

If this timescale is realistic, then there is not much time to address the ethical issues and ensure that the basic operating systems of such robot brains cannot be ‘trained’ (or weaponised) to do harm, whether deliberately or inadvertently.

YouTube video, Blake Richards – Deep learning in The Brain, The Artificial Intelligence Channel, September 2017, 1:23:37 hours


3 principles for creating safer AI

Stuart Russell

‘How can we harness the power of superintelligent AI while also preventing the catastrophe of robotic takeover? As we move closer toward creating all-knowing machines, AI pioneer Stuart Russell is working on something a bit different: robots with uncertainty. Hear his vision for human-compatible AI that can solve problems using common sense, altruism and other human values.’

Russell proposes three principles:
1. The robot’s only objective is to maximise the realisation of human values (human preferences).
2. The robot is initially uncertain about what those values are.
3. Human behaviour provides information about human values.

YouTube video, Stuart Russell: 3 principles for creating safer AI, April 2017, 17:30 minutes


What Is It Like to Be a Robot?

Dr. Leila Takayama examines human encounters with new technologies. Having a person remotely operate a robot-type device that has some autonomous functions raises issues in human-robot interaction, both in how people respond to the robot and in how the operator and the robot’s autonomous operations interleave and negotiate control.

YouTube video, What Is It Like to Be a Robot? | Dr. Leila Takayama | TEDxPaloAlto, TEDx Talks, May 2017, 12:55 minutes


Sam Harris & Kate Darling

This discussion looks at some of the consequences of people’s tendency to anthropomorphize even inanimate, non-responsive objects like soft toys, and at how people might react to robots as they become increasingly humanoid. It goes on to a sexually explicit discussion of the ethics of child sex robots.

YouTube video, Sam Harris & Kate Darling … Conversation on Robot Ethics & AI, Cogent Canine, March 2017, 31:37 minutes


Do Robots Deserve Rights?

What if Machines Become Conscious?

We once justified slavery on the grounds that the slaves would benefit. Might we do the same with robots?

YouTube video, Do Robots Deserve Rights? What if Machines Become Conscious?, Kurzgesagt – In a Nutshell, February 2017, 6:34 minutes


The Philosophy of Westworld

This sets out the plot of Westworld (the 1973 film and the 2016 HBO TV series) and identifies free will and suffering as preconditions for ethics.

YouTube video, The Philosophy of Westworld – Wisecrack Edition, Wisecrack, February 2017, 17:18 minutes


Don’t Fear Superintelligent AI

Grady Booch

‘New tech spawns new anxieties, says scientist and philosopher Grady Booch, but we don’t need to be afraid of an all-powerful, unfeeling AI. Booch allays our worst (sci-fi induced) fears about superintelligent computers by explaining how we’ll teach, not program, them to share our human values. Rather than worry about an unlikely existential threat, he urges us to consider how artificial intelligence will enhance human life.’

YouTube video, Don’t fear superintelligent AI, TED@IBM, November 2016, 10:19 minutes


Can we build AI without losing control over it?

Sam Harris

‘Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.’

YouTube video, Can we build AI without losing control over it?, TEDSummit, June 2016, 14:26 minutes


Kate Darling – Ethical issues in human-robot interaction

Kate Darling’s conference talk highlights the human tendency to anthropomorphize: people project intent onto inanimate objects and will empathise with them. As robots become ubiquitous and more physical, mobile and humanoid, this anthropomorphic effect becomes more pronounced. Deceit, privacy, data security, covert selling and emotional attachment are all identified as issues, not just for human-robot interactions but also for human interactions with each other.

YouTube video, Kate Darling – Ethical issues in human-robot interaction, Media Evolution / The Conference, January 2016


Nick Bostrom – What happens when our computers get smarter than we are?

‘Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as “smart” as a human being. And then, says Nick Bostrom, it will overtake us: “Machine intelligence is the last invention that humanity will ever need to make.” A philosopher and technologist, Bostrom asks us to think hard about the world we’re building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?’

YouTube video, Nick Bostrom – What happens when our computers get smarter than we are?, March 2015



Technology

Predicting the ripple effects of technology
James Burke

The education system lags behind real-world innovation. James Burke has said that by 2070 the replicator will be here and nano-robots will have built enough nano-robot factories that everything we want will be readily available at no cost. There will no longer be a need for manufacturing, retail or people in the production process. Can we predict the ripple effects of technological innovation? Perhaps only by using interdisciplinary teams to cover the gaps between disciplines, where innovation tends to cause the greatest change.

YouTube video, James Burke, Royce Carlton Speakers, October 2016, 7:16 minutes