– It's All Broken, but we can fix it

Democracy, the environment, work, healthcare, wealth and capitalism, energy and education - it’s all broken but we can fix it. This was the thrust of the talk given yesterday evening (19th March 2019) by 'Futurist' Mark Stevenson as part of the University of Cambridge Science Festival. Call me a subversive, but this is exactly what I have long believed. So I am enthusiastic to report on this talk, even though it has as much to do with my www.wellbeingandcontrol.com website as it does with AI and Robot Ethics.

Moral Machines?

This talk was brought to you, appropriately enough, by Cambridge Skeptics. One thing Mark was skeptical about was that we would be saved by Artificial Intelligence and Robots. His argument: AIs show no sign of becoming conscious, therefore they will not be able to be moral. There is something in this argument. How can an artificial Autonomous Intelligent System (AIS) understand harm without itself experiencing suffering? However, I would take issue with this first premise (although I agree with pretty much everything else). First, even assuming that AIs cannot be conscious, it does not follow that they cannot be moral. Plenty of artefacts have morals designed in - an auto-pilot is designed not to kill its passengers (leaving aside the Boeing 737 Max), a cash machine is designed to give you exactly the money you request, and buildings are designed not to fall down on their occupants. OK, so this is not the real-time decision of the artefact. Rather it's that of the human designers. But I argue (see the right-hand panel of some blog page on www.robotethics.co.uk) that by studying what I call the Human Operating System (HOS) we will eventually get at the way in which human morality can be mimicked computationally, and this will provide the potential for moral machines.

The Unpredictable...

Mark then went on to show just how wrong attempts at prediction can be. "Cars are a fad that will never replace the horse and carriage". "Trains will never succeed because women were not designed to travel at more than 50 miles per hour".
We are so bad at prediction because we each grow up in our own unique situations and it's very difficult to see the world from outside our own box - when delayed on the M11, don't think you are in a traffic jam; you are the traffic jam! Prediction is partly difficult because technology is changing at an exponential rate. Once it took hundreds of years for a technology (say carpets) to be generally adopted. The internet only took a handful of years.

...But Possible

Having issued the 'trust no prediction' health warning, Mark went on to make a host of predictions about self-driving cars, jobs, education, democracy and healthcare. Self-driving cars, together with cheap power, will make owning your own car economically unviable. You will hire cars from a taxi pool when you need them. You could call this idea 'CAAS - Cars As A Service' (like 'SAAS - Software As A Service') where all the pains of ownership are taken care of by somebody else.

AI and Robots will take all the boring cognitively light jobs, leaving people to migrate to jobs involving emotions. (I'm slightly skeptical about this one also, because good therapeutic practices, for example, could easily end up within the scope of chatbots and robots with integrated chatbot sub-systems). Education is broken because it was designed for a 1950s world. It should be detached from politics because at the moment educational policy is based on the current Minister of Education's own life history. 'Education should be in the hands of educationalists' got an enthusiastic round of applause from the 300+ strong audience - well, it is Cambridge, after all.

Parliamentary democracy has hardly changed in 200 years. Take a Corbyn supporter and a May supporter (are there any left of either?). Mark contends that they will agree on 95% of day-to-day things. What politics does is 'divide us over things that aren't important'. Healthcare is dominated by the pharmaceutical industry, which now primarily exists to make money. It currently spends twice as much on marketing as it does on research and development. They are marketing companies, not drug companies.

While every company espouses innovation as one of its key values, for the most part it's just a platitude or a sham. It's generally in the interest of a company or industry to maintain the status quo and persuade consumers to buy yet more useless products. Companies are more interested in delivering shareholder value than anything truly valuable.

Real innovation is about asking the right questions. Mark has a set of techniques for this and I am intrigued as to what they might be (because I do too!).

We can fix it - yes we can

On the positive side, it's just possible that if we put our minds to it, we can fix things. What is required is bottom up, diverse collaboration. What does that mean? It means devolving decision-making and budgeting to the lowest levels.

For example, while the big pharma companies see no profit in developing drugs for TB, the hugely complex problem of drug discovery can be tackled bottom up. By crowd-sourcing genome annotations, four new TB drugs have been discovered at a fraction of the cost the pharma industry would have spent on expensive labs and staff perks. While the value of this may not show on the balance sheet or even a nation's GDP, the value delivered to those people whose lives are saved is incalculable. This illustrates a fundamental flaw in modern capitalism - it concentrates wealth but does not necessarily result in the generation of true value. And the people are fed up with it.

Some technological solutions include 'Blockchain', which Mark describes as 'double entry bookkeeping on steroids'. Blockchain can deliver contracts that are trustworthy without the need for intermediary third parties (like banks, accountants and solicitors) to provide validation. Blockchain provides 'proof' at minuscule cost, eliminating transactional friction. Everything will work faster and better.
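Mark described blockchain only at this high level. Purely as my own illustration (not from the talk), here is a minimal Python sketch of the underlying idea: each entry is chained to the previous one by a cryptographic hash, so any later tampering with the record is self-evident without a bank, accountant or solicitor having to vouch for it.

```python
# Minimal sketch of a hash-chained ledger (illustrative only, not from the talk).
import hashlib
import json

def block_hash(contents):
    # Hash the block's contents, which include the previous block's hash.
    return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

def append_block(chain, transaction):
    previous = chain[-1]["hash"] if chain else "0" * 64
    contents = {"transaction": transaction, "prev_hash": previous}
    chain.append({**contents, "hash": block_hash(contents)})

def verify(chain):
    # Recompute every hash; editing any past transaction breaks the chain from there on.
    previous = "0" * 64
    for block in chain:
        contents = {"transaction": block["transaction"], "prev_hash": previous}
        if block["prev_hash"] != previous or block["hash"] != block_hash(contents):
            return False
        previous = block["hash"]
    return True

ledger = []
append_block(ledger, {"from": "alice", "to": "bob", "amount": 10})
append_block(ledger, {"from": "bob", "to": "carol", "amount": 4})
print(verify(ledger))                           # True
ledger[0]["transaction"]["amount"] = 1000       # tamper with history
print(verify(ledger))                           # False - the tampering is detectable
```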

Organs can be 3D printed and 'Nanoscribing' will miniaturise components and make them ridiculously cheap. Provide a blood sample to your phone and the pharmacist will 3D print a personalised drug for you.

I enjoyed this talk, not least because it contained a lot of the stuff I've been banging on about for years (see: www.wellbeingandcontrol.com). The difference is that Mark has actually brought it all together into one simple coherent story - everything is broken but we can fix it. See Mark Stevenson's website at: https://markstevenson.org

Mark Stevenson

– AI: Measures, Maps and Taxonomies

Cambridge (UK) is awash with talks at the moment, and many of these are about artificial intelligence. On Tuesday (12th of March 2019) I went to a talk, as part of Cambridge University’s science festival, by José Hernández-Orallo (Universitat Politècnica de València), titled 'Natural or Artificial Intelligence? Measures, Maps and Taxonomies'.

José opened by pointing out that artificial intelligence was not a subset of human intelligence. Rather, it overlaps with it. After all, some artificial intelligence already far exceeds human intelligence in narrow domains such as playing games (Go, Chess etc.) and some identification tasks (e.g. face recognition). But, of course, human intelligence far outstrips artificial intelligence in its breadth and in how little training it needs to learn new concepts.

José Hernández-Orallo

José's main message was how, when it comes to understanding artificial intelligence, we (like the political scene in Britain at the moment) are in uncharted territory. We have no measures by which we can compare artificial and human intelligence or determine the pace of progress in artificial intelligence. We have no maps that enable us to navigate around the space of artificial intelligence offerings (for example, which offerings might be ethical and which might be potentially harmful). And lastly, we have no taxonomies to classify approaches or examples of artificial intelligence.

Whilst there are many competitions and benchmarks for particular artificial intelligence tasks (such as answering quiz questions or more generally reinforcement learning), there is no overall, widely used classification scheme.

Intelligence not included

My own take on this is to suggest a number of approaches that might be considered. Coming from a psychology and psychometric testing background, I am aware of the huge number of psychological testing instruments for both intelligence and many other psychological traits. See for example, Wikipedia or the British Psychological Society list of test publishers. What is interesting is that, I would guess, most software applications that claim to use artificial intelligence would fail miserably on human intelligence tests, especially tests of emotional and social intelligence. At the same time they might score at superhuman levels with respect to some very narrow capabilities. This illustrates just how far away we are from the idea of the singularity - the point at which artificial intelligence might overtake human intelligence.

Another take on this would be to look at skills. Interestingly, systems like Amazon's Alexa describe the applications or modules that developers offer as 'skills'. So for example, a skill might be to book a hotel or to select a particular genre of music. This approach defines intelligence as the ability to effectively perform some task. However, by any standard, the skill offered by a typical Alexa 'skill', Google Home or Siri interaction is laughably unintelligent. The artificial intelligence is all in the speech recognition, and to some extent the speech production side. Very little of it is concerned with the domain knowledge. Even so, a skills-based approach to measurement, mapping and taxonomy might be a useful way forward.
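To make that slightly more concrete, here is a toy sketch (my own, with entirely made-up systems, skills and scores) of how a skills-based measure and map of capability might start: catalogue narrow skills, score each agent against them, and compare the resulting profiles rather than chasing a single 'intelligence' number.

```python
# Toy skills-based capability map; every system, skill and score is a placeholder.
from statistics import mean

skills = {
    "speech_recognition": {"assistant_a": 0.9, "human_adult": 0.95},
    "book_a_hotel":       {"assistant_a": 0.4, "human_adult": 0.90},
    "emotional_support":  {"assistant_a": 0.1, "human_adult": 0.80},
}

def capability_profile(agent):
    # The 'map' is the per-skill profile; the 'measure' here is just a blunt average.
    profile = {skill: scores.get(agent, 0.0) for skill, scores in skills.items()}
    return profile, mean(profile.values())

print(capability_profile("assistant_a"))
print(capability_profile("human_adult"))
```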

When it comes to ethics, there are also some pointers to useful measures, maps and taxonomies. For example, the blog post describing Josephine Young’s work identifies a number of themes in AI and data ethics. Also, the video featuring Dr Michael Wilby on the http://www.robotethics.co.uk/robot-ethics-video-links/ page starts with a taxonomy of ethics and then maps artificial intelligence into this framework.

But, overall, I would agree with José that there is not a great deal of work in this important area and that it is ripe for further research. If you are aware of any relevant research then please get in touch.

– What does it mean to be human?

John Wyatt is a doctor, author and research scientist. His concern is the ethical challenges that arise with technologies like artificial intelligence and robotics. On Tuesday this week (11th March 2019) he gave a talk called ‘What does it mean to be human?’ at the Wesley Methodist Church in Cambridge.

To a packed audience, he pointed out how interactions with artificial intelligence and robots will never be the same as the type of ‘I – you’ relationships that occur between people. He emphasised the important distinction between ‘beings that are born’ and ‘beings that are made’ and how this distinction will become increasingly blurred as our interactions with artificial intelligence become commonplace. We must be ever vigilant against the use of technology to dehumanise and manipulate.

I can see where this is going. The tendency for people to anthropomorphise is remarkably strong - ‘the computer won’t let me do that’, ‘the car has decided not to start this morning’. Research shows that we can even attribute intentions to animated geometrical shapes ‘chasing’ each other around a computer screen, let alone cartoons. Just how difficult is it going to be not to attribute the ‘human condition’ to a chatbot with an indistinguishably human voice or a realistically human robot? Children are already being taught to say ‘please’ and ‘thank you’ to devices like Alexa, Siri and Google Home – maybe a good thing in some ways, but …

One message I took away from this talk was a suggestion for a number of new human rights in this technological age. These are: (1) The right to cognitive liberty (to think whatever you want), (2) The right to mental privacy (without others knowing) (3) The right to mental integrity and (4) The right to psychological continuity - the last two concerning the preservation of ‘self’ and ‘identity’.

A second message was to consider which country was most likely to make advances in the ethics of artificial intelligence and robotics. His conclusion – the UK. That reassures me that I’m in the right place.

See more of John’s work, such as his essay ‘God, neuroscience and human identity’ at his website johnwyatt.com

John Wyatt

– Ethical AI

Writing about ethics in artificial intelligence and robotics can sometimes seem like it’s all doom and gloom. My last post, for example, covered two talks in Cambridge – one mentioning satellite monitoring and swarms of drones and the other going more deeply into surveillance capitalism, where big companies (you know who) collect data about you and sell it on the behavioural futures market.

So it was really refreshing to go to a talk by Dr Danielle Belgrave at Microsoft Research in Cambridge last week that reflected a much more positive side to artificial intelligence ethics.  Danielle has spent the last 11 years researching the application of probabilistic modelling to the medical condition of asthma.  Using statistical techniques and machine learning approaches she has been able to differentiate between five more or less distinct conditions that are all labelled asthma.  Just as with cancer there may be a whole host of underlying conditions that are all given the same name but may in fact have different underlying causes and environmental triggers.

This is important because treating a set of conditions that may have family resemblance (as Wittgenstein would have put it) with the same intervention(s) might work in some cases, not work in others and actually do harm to some people. Where this is leading is towards personalised medicine, where each individual and their circumstances are treated as unique. This, in turn, potentially leads to the design of a uniquely configured set of interventions optimised for that individual.

The statistical techniques that Danielle uses attempt to identify the underlying endotypes (sub-types of a condition) from a set of phenotypes (the observable characteristics of an individual). Some conditions may manifest in very similar sets of symptoms while in fact they arise from quite different functional mechanisms.
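Danielle's actual models are far more sophisticated than this, but purely as an illustration of the general idea - recovering hidden sub-groups (endotypes) from observable measurements (phenotypes) - here is a toy sketch using a Gaussian mixture model on simulated data. None of the numbers are clinical.

```python
# Toy illustration only: infer latent sub-groups from observed 'phenotype' scores.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulate two hidden sub-types that present with overlapping symptom scores.
endotype_a = rng.normal(loc=[2.0, 5.0], scale=0.8, size=(100, 2))
endotype_b = rng.normal(loc=[4.0, 3.0], scale=0.8, size=(100, 2))
phenotypes = np.vstack([endotype_a, endotype_b])

# Fit a mixture model to the observations alone; the true sub-type labels are never seen.
model = GaussianMixture(n_components=2, random_state=0).fit(phenotypes)
assignments = model.predict(phenotypes)

# Patients in different components are candidates for differently targeted interventions.
print("Inferred sub-group sizes:", np.bincount(assignments))
print("Inferred sub-group means:\n", model.means_)
```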

Appearances can be deceptive and while two things can easily look the same, underneath they may in fact be quite different beasts. Labelling the appearance rather than the underlying mechanism can be misleading because it inclines us to assume that beast 1 and beast 2 are related when, in fact the only thing they have in common is how they appear.

It seems likely that for many years we have been administering some drugs thinking we are treating beast 1 when in fact some patients have beast 2, and that sometimes this does more harm than good. This view is supported by the common practice, in asthma, cancer, mental illness and many other conditions, of getting the medication right by trying a few things until you find something that works.

But in the same way that, for example, it may be difficult to identify a person’s underlying intentions from the many things that they say (oops, perhaps I am deviating into politics here!), inferring underlying medical conditions from symptoms is not easy. In both cases you are trying to infer something that may be unobservable, complex and changing, from the things you can readily perceive.

We have come so far in just a few years. It was not long ago that some medical interventions were based on myth, guesswork and the unquestioned habits of deeply ingrained practices. We are currently in a time when, through the use of randomised controlled trials, interventions approved for use are at least effective ‘on average’, so to speak. That is, if you apply them to large populations there is significant net benefit, and any obvious harms are known about and mitigated by identifying them as side-effects. We are about to enter an era where it becomes commonplace to personalise medicine to targeted sub-groups and individuals.

It’s not yet routine and easy, but with dedication, skill and persistence together with advances in statistical techniques and machine learning, all this is becoming possible. We must thank people like Dr Danielle Belgrave who have devoted their careers to making this progress.  I think most people would agree that teasing out the distinction between appearance and underlying mechanisms is both a generic and an uncontroversially ethical application of artificial intelligence.

Danielle Belgrave

– A Changing World: so, what’s to worry about?

A World that can change – before your eyes!

I’ve been to a couple of good talks in Cambridge (UK) this week. First, futurist Sophie Hackford (formerly of Singularity University and Wired magazine) gave a fast-paced talk about a wide range of technologies that are shaping the future. If you don’t know about swarms of drones, low orbit satellite monitoring, neural implants, face recognition for payments, high speed trains and rocket transportation then you need to, fast. I haven’t found a video of this very recent talk yet, but the one below from a year ago gives a pretty good indication of why we need to think through the ethical issues.

YouTube Video, Tech Round-up of 2017 | Sophie Hackford | CTW 2017, January 2018, 26:36 minutes

The Age of Surveillance Capitalism

The second talk, in some ways, is even more scary. We are already aware that the likes of Google, Facebook and Amazon are closely watching our every move (and hearing our every breath). And now almost every other company that is afraid of being left behind is doing the same thing. But what data are they collecting and how are they using it? They use the data to predict our behaviour and sell it on the behavioural futures market. And it is not just our online behaviour: they are also influencing us in the real world. For example, apparently Pokémon Go was an experiment originally dreamed up by Google to see if retailers would pay to host ‘monsters’ to increase footfall past their stores. The talk by Shoshana Zuboff was at the Cambridge University Law Faculty. Here is an interview she did on radio the same day.

BBC Radio 4, Start the Week, Who is Watching You?, Monday 4th February 2019, 42:00 minutes
https://www.bbc.co.uk/programmes/m0002b8l

– Ethical themes in artificial intelligence and robotics

Useful categorisation of ethical themes

I was at a seminar the other day where I was fortunate enough to encounter Josephine Young from www.methods.co.uk (who mainly do public sector work in the UK).


Josie recently carried out an analysis of the main themes relating to ethics and AI that she found in a variety of sources related to this topic. I have reported these themes below with a few comments. 
Many thanks, Josie, for this really useful and interesting work.



THEMES

(Numbers in brackets reflect the number of times this issue was identified).

Data

Data treatment

Data treatment, focus on bias identification (10)
Interrogate the data (9)

Data collection / Use of personal data

Keep data secure (3)
Personal privacy – access, manage and control of personal data (1, 5, 6)
Use data and tools which have the minimum intrusion necessary – privacy (3)
Transparency of data/meta data collection and usage (8)
Self-disclosure and changing the algorithm’s assumptions (10)

Data models

Awareness of bias in data and models (8)
Create robust data science models – quality, representation of demographics (3)
Practice understanding of accuracy – transparency (8)

robotethics.co.uk comment on data: Trying to structure this a little, the themes might be categorised into [1] data ownership and collection (who can collect what data, when and for what purpose), [2] data storage and security (how is the data securely stored and controlled without loss or any un-permitted access), [3] data processing (what are the permitted operations on the data and the unbiased / reasonable inferences / models that can be derived from it) and [4] data usage (what applications and processes can use the data or any inferences made from it).


Impact

Safety – verifiable (1)
Anticipate the impacts that might arise – economic, social, environmental etc. (4)
Evaluate impact of algorithms in decision-making and publish the results (2)
Algorithms are rated on a risk scale based on impact on individual (2)
Act using these Responsible Innovation processes to influence the direction and trajectory of research (4)

robotethics.co.uk comment on impact: Impact is about assessing the positive and negative effects of AI in the future, whether that be in the short, medium or long term. There is also the question of who is impacted, as it is quite possible that any particular AI product or service might affect one group of people positively and another negatively. Therefore a framework of effect x timescale x affected persons/group might make a start on providing some structure for assessing impact, as in the sketch below.
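As a minimal sketch of that suggestion (my own elaboration, with entirely hypothetical entries), the framework could be represented as a simple table of effect x timescale x affected group that can then be sliced and aggregated:

```python
# Hypothetical impact-assessment entries for an imagined CV-screening system.
from dataclasses import dataclass

@dataclass
class ImpactEntry:
    affected_group: str   # who is impacted
    timescale: str        # "short", "medium" or "long" term
    effect: float         # negative = harm, positive = benefit (arbitrary scale)
    note: str = ""

assessment = [
    ImpactEntry("hiring managers", "short", +0.5, "faster shortlisting"),
    ImpactEntry("under-represented applicants", "short", -0.7, "risk of replicated bias"),
    ImpactEntry("society at large", "long", -0.3, "entrenched inequality if unchecked"),
]

def net_effect(entries, group=None, timescale=None):
    # Crude aggregate over a chosen slice of the framework.
    return sum(e.effect for e in entries
               if (group is None or e.affected_group == group)
               and (timescale is None or e.timescale == timescale))

print(net_effect(assessment))                     # overall
print(net_effect(assessment, timescale="short"))  # short-term impacts only
```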


Purpose

Non-subversion – power conferred to AI should respect and improve social and civic processes (1)
Reflect on the purpose, motivations, implications and uncertainties this research may bring (4)
Ensure augmented – not just artificial – AI (8)
Purpose and ecology for the AI system (10)
Human control – choose how and whether to delegate decisions to AI (1)
Backwards compatibility and versioning (8)

robotethics.co.uk comment on purpose: Clearly the intent behind any AI development should be to confer a net benefit on the individual and/or the society generally. The intent should never be to cause harm – even drone warfare is, in principle, justified in terms of conferring a clear net benefit. But this again raises the question of net benefit to whom exactly, how large that benefit is when compared to any downside, and how certain it is that the benefit will materialise (without any unanticipated harmful consequences). It is a matter of how strong and certain the argument is for justifying the intent behind building or deploying a particular AI product or service.


Transparency

Transparency for how AI systems make decisions (7)
Be as open and accountable as possible – provide explanations, recourse, accountability (3)
Failure transparency (1)
Responsibility and accountability for explaining how AI systems work (7)
Awareness and plan for audit trail (8)
Publish details describing the data used to train AI, with assumptions and risk assessment – including bias (2)
A list of inputs used by an algorithm to make a decision should be published (2)
Every algorithm should be accompanied with a description of function, objective and intended impact (2)
Every algorithm should have an identical sandbox version for auditing (2)

robotethics.co.uk comment on transparency: Transparency and accountability are closely related but can be separated out. Transparency is about determining how or why (e.g. how or why an AI made a certain decision) whereas accountability is about determining who is responsible. Having transparency may well help in establishing accountability but they are different. The problem for AI is that, by normal human standards, responsibility resides with the autonomous decision-making agent so long as they are regarded as having ‘capacity’ (e.g. they are not a child or insane) and even then, there can be mitigating circumstances (provocation, self-defence etc.). We are a long way from regarding AIs as having ‘capacity’ in the sense of being able to make their own ethical judgements, so in the short to medium term, the accountability must be traceable to a human, or other corporate, agent. The issue of accountability is further complicated in cases where people and AIs are cooperatively engaged in the same task, since there is human involvement in both the design of the AI and its operational use.


Civic rights

A named member of staff is formally responsible for the algorithm’s actions and decisions (2)
Judicial transparency – auditable by humans (1)
3rd parties that run algorithms on behalf of public sector should be subject to same principles as government algorithms (2)
Intelligibility and fairness (6)
Dedicated insurance scheme, to provide compensation if negative impact (2)
Citizens must be informed when their treatment has been decided/informed by an algorithm (2)
Liberty and privacy – use of personal data should not, or not be perceived to, curtail personal liberties (1)
Mitigate risks and negative impacts as AI/AS evolve as socio-technical systems (7)

robotethics.co.uk comment on civic rights: It seems clear that an AI should have no more license to contravene a person’s civil liberties or human rights than another person or corporate entity would. Definitions of human rights are not always clear-cut and differ from place to place. In human society this is dealt with by defaulting to local laws and cultural norms. It seems likely that a care robot made in Japan but operating in, say, the UK would have to operate according to the local laws, as would apply to any other person, product or service.


Highest purpose of AI

Shared prosperity – economic prosperity shared broadly to benefit all of humanity (1)
Flourishing alongside AI (6)
Prioritise the maximum benefit to humanity and the natural environment (7)
Shared benefit – technology should benefit and empower as many people as possible (1)
Purpose of AI should be human flourishing (1)
AI should be developed for the common good (6)
Beneficial intelligence (1)
Compatible with human dignity, rights, freedoms and cultural diversity (1, 5)
Align values and goals with human values (1)
AI will prevent harm (5)
Start with clear user need and public benefit (3)
Embody highest ideals of human rights (7)

robotethics.co.uk comment on the higher purpose of AI: This seems to address themes of human flourishing, equality, values and again touches on rights. It focuses mainly on, and details, the potential benefits and how these are distributed. These can be slotted into the frameworks already set out above.


Negative consequences / Crossing the ‘line’

An AI arms race should be avoided (1)
Identify and address cybersecurity risks (8)
Confronting the power to destroy (6)

robotethics.co.uk comment on the negative consequences of AI: The main threats are set out to be in relation to weapons, cyber-security and the existential risks posed by AIs that cease to be controlled by human agency. There are also many more subtle and shorter term risks such as bias in models and decision making addressed elsewhere. As with benefits, these can be slotted into the frameworks already set out above.


User

Consider the marginal user (9)
Collaborate with humans – rather than displace them (5)
Marginal user and participation (10)
Address job displacement implications (8)

robotethics.co.uk comment on user: This is mainly about the social implications of AI and the risks to individuals in relation to jobs and becoming marginalised. These implications seem likely to arise in the short to medium term and, given their potential scale, there seems to be a comparative paucity of attention being paid to them by governments, especially in the UK where Brexit dominates the political agenda. Little attempt seems to have been made to consider the significance of AI in relation to the more habitual political concerns of migration and trade.


AI Industry

AI researchers <-> policymakers (1)
Establish industry partnerships (9)
Responsibility of designers and builders for moral implications (1, 5)
Culture of trust and transparency between researchers and developers (1)
Resist the ‘race’ – no more ‘move fast and break things’ mentality (1)

robotethics.co.uk comment on AI industry: The industry players that are building AI products and services have a pivotal role to play in their ethical development and deployment. In addition to design and manufacture, this affects education and training, regulation and monitoring of the development of AI systems, their marketing and constraints on their use. AI is likely to be used throughout the supply chain of other products and services and AI components will become increasingly integrated with each other into more and more powerful systems. The need to create policy, regulate, certify, train and licence the industry creating AI products and services needs to be addressed more urgently given the pace of technological development.


Public dialogue

Engage – opening up such work to broader deliberation in an inclusive way (4)
Education and awareness of public (7)
Be alert to public perceptions (3)

robotethics.co.uk comment on public dialogue: At present, public debate on AI is often focussed on the activities of the big players and their high profile products such as Amazon Echo, Google Home, and Apple’s Siri. These give clues as to some of the ethical issues that require public attention, but there is a lot more AI development going on in the background. Given the potentially large and fast pace of societal impacts of AI, there needs to be greater public awareness and debate, not least so that society can be prepared and adjust other systems (such as taxation, benefits, universal income etc.) to absorb the impacts.


Interface design

Representation of AI system, user interface design (10)

robotethics.co.uk comment on interface design: AIs capable of machine learning develop knowledge and skills in ways similar to how people do, and just like people, they often cannot explain how they do things or arrive at some judgement or decision. The ways in which people and AIs will interface and interact is as complex a topic as how people interact with each other. Can we ever know what another person is really thinking, or whether the image they present of themselves is accurate? If AIs become even half as complex as people, able to integrate knowledge and skills from many different sources, able to express (if not actually feel) emotions, able to reason with super-human logic, able to communicate instantaneously with other AIs, there is no knowing how people and AIs will ‘interface’. Just as with computers that have become both tools for people to use and constraints on human activity (‘I’m sorry but the computer will not let me do that’), the relationships will be complex, especially as computer components become implanted in the human body and not just carried on the wrist. It seems more likely that the relationship will be cooperative rather than competitive or one in which AIs come to dominate.


The original source material from Josie (who gave me permission to reference this material) can be found at:

https://docs.google.com/document/d/1LrBk-LOEu4LwnyUg8i5oN3ZKjl55aDpL6l1BxVcHIi8/edit


See other work by Josie Young: https://methods.co.uk/blog/different-ai-terms-actually-mean/

– IEEE Consultation on Ethically Aligned Design

A Response Submitted for robotethics.co.uk

A summary of the IEEE document Ethically Aligned Design (Version 2) can be found below. Responses to this document were invited by 7th May 2018.


Response to Ethically Aligned Design Version 2 (EADv2)
Rod Rivers, Socio-Technical Systems, Cambridge, UK
March 2018 (rod.rivers@ieee.org)

I take a perspective from philosophy, phenomenology and psychology and attempt to inject thoughts from these disciplines.

Social Sciences: EADv2 would benefit from more input from the social sciences. Many of the concepts discussed (e.g. norms, rights, obligations, wellbeing, values, affect, responsibility) have been extensively investigated and analysed within the social sciences (psychology, social psychology, sociology, anthropology, economics etc.). This knowledge could be more fully integrated into EAD. For example, the meaning of ‘development’ to refer to ‘child development’ or ‘moral development’ is not in the glossary.

Human Operating System: The first sentence in EADv2 establishes a perspective looking forward from the present, as use and impact of A/ISs ‘become pervasive’. An additional tack would be to look in more depth at human capability and human ethical self-regulation, and then ‘work backwards’ to fill the gap between current artificial A/IS capability and that of people. I refer to this as the ‘Human Operating System’ (HOS) approach, and suggest that EAD makes explicit, and endorses, exploration of the HOS approach to better appreciate the complexity (and deficiencies) of human cognitive, emotional, physiological and behavioural functions.

Phenomenology: A/ISs can be distinguished from other artefacts because they have the potential to reflect and reason, not just on their own computational processes, but also on the behaviours, and cognitive processes of people. This is what psychologists refer to as ‘theory of mind’ – the capability to reason and speculate on the states of knowledge and intentions of others. Theory of mind can be addressed using a phenomenological approach that attempts to describe, understand and explain from the fully integrated subjective perspective of the agent. Traditional engineering and scientific approaches tend to objectify, separate out elements into component parts, and understand parts in isolation before addressing their integration. I suggest that EAD includes and endorses exploration of a phenomenological approach to complement the engineering approach.

Ontology, epistemology and belief: EADv2 includes the statement “We can assume that lying and deception will be prohibited actions in many contexts” (EADv2 p.45). This example may indicate the danger of slipping into an absolutist approach to the concept of ‘truth’. For example, it is easy to assume that there is only one truth and that the sensory representations, data and results of information processing by an A/IS necessarily constitute an objective ‘truth’. Post-modern constructivist thinking sees ‘truth’ as an attribute of the agent (albeit constrained by an objective reality) rather than as an attribute of states of the world. The validity of a proposition is often re-defined in real time as the intentions of agents change. It is important to establish some clarity over these types of epistemological issues, not least in the realm of ethical judgments. I suggest that EAD note and encourage greater consideration of these epistemological issues.

Embodiment, empathy and vulnerability: It has been argued that ethical judgements are rooted in physiological states (e.g. emotional reactions to events), empathy and the experience of vulnerability (i.e. exposure to pain and suffering). EADv2 does not currently explicitly set out how ethical judgements can be made by an A/IS in the absence of these human subjective states. Although EAD mentions emotions and affective computing (and an affective computing committee) this is almost always in relation to human emotions. The more philosophical question of judgement without physical embodiment, physiological states, emotions, and a subjective understanding of vulnerability is not addressed.

Terminology / Language / Glossary: In considering ethics we are moving from an amoral, mechanistic understanding of cause and effect to value-laden, intention-driven notions of causality. This requires inclusion of more mentalistic terminology. The glossary should reflect this and could form the basis of a language for the expression of ideas that transcend both artificial and human intelligent systems (i.e. that is substrate independent). In a fuller response, I discuss terms already used in EADv2 (e.g. autonomous, intelligent, system, ethics, intention formation, independent reasoning, learning, decision-making, principles, norms etc.), and terms that are either not used or might be elaborated (e.g. umwelt, ontology, epistemology, similarity, truth-value, belief, decision, intention, justification, mind, power, the will).



IEEE Ethically Aligned Design: A Vision For Prioritizing Wellbeing With Artificial Intelligence And Autonomous Systems

For Public Discussion – By 7th May 2018 (consultation now closed)

Version 2 of this report is available by registering at:
http://standards.ieee.org/develop/indconn/ec/auto_sys_form.html

Public comment on version 1 of this document was invited by March 2017. The document encourages technologists to prioritize ethical considerations in the creation of autonomous and intelligent technologies. It was created by committees of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, comprising over one hundred global thought leaders and experts in artificial intelligence, ethics, and related issues.

Version 2 presents the following principles/recommendations:

Candidate Recommendation 1 – Human Rights
To best honor human rights, society must assure the safety and security of A/IS so that they are designed and operated in a way that benefits humans:
1. Governance frameworks, including standards and regulatory bodies, should be established to oversee processes assuring that the use of A/IS does not infringe upon human rights, freedoms, dignity, and privacy, and of traceability to contribute to the building of public trust in A/IS.
2. A way to translate existing and forthcoming legal obligations into informed policy and technical considerations is needed. Such a method should allow for differing cultural norms as well as legal and regulatory frameworks.
3. For the foreseeable future, A/IS should not be granted rights and privileges equal to human rights: A/IS should always be subordinate to human judgment and control.

Candidate Recommendation 2 – Prioritizing Wellbeing
A/IS should prioritize human well-being as an outcome in all system designs, using the best available, and widely accepted, well-being metrics as their reference point.

Candidate Recommendation 3 – Accountability
To best address issues of responsibility and accountability:
1. Legislatures/courts should clarify issues of responsibility, culpability, liability, and accountability for A/IS where possible during development and deployment (so that manufacturers and users understand their rights and obligations).
2. Designers and developers of A/IS should remain aware of, and take into account when relevant, the diversity of existing cultural norms among the groups of users of these A/IS.
3. Multi-stakeholder ecosystems should be developed to help create norms (which can mature to best practices and laws) where they do not exist because A/IS-oriented technology and their impacts are too new (including representatives of civil society, law enforcement, insurers, manufacturers, engineers, lawyers, etc.).
4. Systems for registration and record-keeping should be created so that it is always possible to find out who is legally responsible for a particular A/IS. Manufacturers/operators/owners of A/IS should register key, high-level parameters, including:

• Intended use
• Training data/training environment (if applicable)
• Sensors/real world data sources
• Algorithms
• Process graphs
• Model features (at various levels)
• User interfaces
• Actuators/outputs
• Optimization goal/loss function/reward function
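Purely as an illustration of what such a registration record might look like in practice (the field names and values below are my own, not defined by the IEEE), a minimal record could be as simple as:

```python
# Hypothetical registration record for an A/IS; illustrative field names and values only.
registration_record = {
    "system_id": "example-ais-001",
    "legally_responsible_party": "Example Robotics Ltd",
    "intended_use": "home assistance for elderly users",
    "training_data": "simulated household interactions (if applicable)",
    "sensors_and_data_sources": ["camera", "microphone", "lidar"],
    "algorithms": ["speech recognition", "path planning"],
    "model_features": "documented at several levels of abstraction",
    "user_interfaces": ["voice", "companion app"],
    "actuators_and_outputs": ["wheeled base", "gripper arm"],
    "optimization_goal": "task completion subject to safety constraints",
}
```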

Standard Reference for Version 2
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2. IEEE, 2017. http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html

Report, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2. IEEE, December 2017, 136 pages

http://standards.ieee.org/develop/indconn/ec/auto_sys_form.html

– What’s your position? The place of positioning theory within AI development

Positioning theory illuminates our understanding of rights, duties, expectations and vulnerabilities. It addresses the dynamics of power and control and is a potent tool for understanding the self, the individual in the context of others, relationships, and social institutions. It even transcends the distinction between people and objects and has profound implications for the development of artificial intelligence (AI).

Positioning and technology

It is already becoming apparent that any computer algorithm (whether or not it is based on AI) is not neutral with respect to position. An algorithm that scores my credit worthiness, for example, can have a significant impact on my life even though it may be using only a small sample of indicators in making its judgment. These, for example, might include debts that I dispute and might exclude a long-term history of credit and trustworthiness. The algorithm takes its position from a particular set of indicators that constitutes ‘its world’ of understanding. However, I might easily argue that it has used a biased training set, that it is not looking at the right things, or that it is quite likely using that information in a misleading way. And like any set of metrics, they can be manipulated once you know the algorithm.

There are algorithms that are explicitly programmed into the software of various decision-making systems, but when it comes to more advanced technology based on machine learning, it is also already apparent that we are building all kinds of default positions into our artificially intelligent devices without even realizing it. So, if an AI programme selects staff for interview on the basis of data across which it has run its machine learning algorithms, it will simply replicate biases that are deeply entrenched but that go unquestioned. For example, it might build in biases against gender, race or many other factors that we would call into question if they were explicit.
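A toy demonstration of the point (not any real recruitment system): train a simple classifier on historically biased shortlisting decisions and it reproduces the bias, even though the protected attribute is never given to it directly, because correlated 'proxy' features smuggle the bias back in.

```python
# Illustrative only: bias in historical decisions leaks into a model via a proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

group = rng.integers(0, 2, n)             # protected attribute (never shown to the model)
skill = rng.normal(0, 1, n)               # genuinely job-relevant signal
proxy = group + rng.normal(0, 0.3, n)     # e.g. postcode, strongly correlated with group

# Historical decisions: partly skill, partly plain prejudice against group 1.
historical_shortlisted = (skill - 1.5 * group + rng.normal(0, 0.5, n)) > 0

# Train only on 'neutral-looking' features; the proxy carries the bias regardless.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, historical_shortlisted)
predicted = model.predict(X)

for g in (0, 1):
    print(f"Predicted shortlisting rate for group {g}: {predicted[group == g].mean():.2f}")
```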

Youtube Video, Cathy O’Neil | Weapons of Math Destruction, PdF YouTube, June 2015, 12:15 minutes

As we develop artificial intelligences in all sorts of situations and in many different manifestations, from credit rating algorithms to robots, we can easily embed positions that cause harm. Sometimes this will be unwitting and sometimes it will be deliberate.

Where do you stand?

Are you sitting down? Maybe you are in London, or Paris or Malaga. And maybe it’s 4pm on Saturday 11th November 2017 where you are. So, that locates you (or rather me) in place and time. And in exactly the same way, you can also be ‘positioned’ with respect to your attitudes and opinions. Are you to ‘the right’ or to ‘the left’, for example.

Positioning theory can help you understand where you are, and it’s not just ‘left’ or ‘right’. Pretty well every word you say and every action you take creates a ‘position’. Read on to see how you cannot avoid taking positions and how positions confer rights and responsibilities on you and others, reveal vulnerabilities and determine the power relationships between us. Even objects, both natural and the ones we create, have positions, both in the sense of where they are located, but also in the way they affect your actions. Re-thinking the world from the point of view of positioning theory can be a revelation.

Part of the appeal of positioning theory is that it is easy to understand, and it is easy to understand because it builds on a basic psychological process that we use all the time. This is the process of navigating around a space.

Youtube Video, Spatial Navigation – Neil Burgess, Serious Science, December 2016, 12:41 minutes

Positioning theory can be applied to all sorts of things. It can be used between individuals to help understand each other and resolve differences. It can be used in organisations to help effect organisational change. It can be used by therapists to help families understand and adjust the way they think about the main influences in their lives, and help alter their circumstances. It can be used in international relations to help nations and cultures understand each other and resolve their differences. It can also be used manipulatively to sell you things you didn’t want and to restrict your freedom, even without you being consciously aware of it.

In one sense, positioning theory is such a simple idea that it can seem obvious. It can be thought of as ‘the position you take on a particular issue’. For example, you may take the position on animal rights, that an animal has the same right to live as a person. But positions need not be so grand or political. You might take the position that two sugars are too many to have in tea or that it’s better not to walk on the cracks between stones on the pavement. Even ascribing attributes to people or objects is to take a position. So to say that somebody is ‘kind’ or ‘annoyed’ is to take a position about how to interpret their behaviour.

What is common to positions is that they derive from some kind of evaluation within a context of beliefs and they can influence action (or at least the propensity for action). Label someone as ‘violent’ or ‘stupid’, for example, and you may easily affect other people’s behaviours with respect to that person, as well as your own. And, of course, if that person is aware of the label, they may well live up to it.

Despite being a simple concept, positions are so pervasive and integrated into thought, language, dialogue, actions and everyday life that it is only relatively recently (i.e. in the post-modern era from about the mid 20th century onwards) that ‘positioning theory’ has emerged and ‘positions’ have become identified as having explicit ‘identities’ in their own right. However, this understanding of positioning has had impact all the way across the social sciences.

Youtube Video, Rom Harré Positioning Theory Symposium Bruges, July 2015, 1:06:27 hours

If I take the position that ‘people should not be allowed to carry guns’, I am placing myself at a particular location on a line or scale, the other end of which is that ‘people should be allowed to carry guns’. The extremes of this scale might be ‘people should never under any circumstances be allowed to carry guns’ and ‘people should always under all circumstances be allowed to carry guns’.

Should not be allowed – – – – – – – – ^ – – – – – – – – Should be allowed
(the ^ marks a possible position on the scale)

Once you start to assess particular circumstances, then you are taking intermediate points along the scale. Thinking of positions as lines, or scales, along which you can locate yourself and others, and potentially travel from place to place, helps everybody understand where they are and where others are coming from.

Positioning the self

It can be argued that the notion of ‘the self’ is no more than the set of positions you take up, and that these positions define your identity in a psychological sense. You can imagine a long list of attitude scales with your position marked on each, and that the overall profile (where you are located on each scale) defines your unique identity. However, it’s not so simple. You are more than your attitudes. Your physical characteristics, your genetics, your background, your memories and experiences, your skills etc., will all influence your attitudes, but are distinct from them. Also, your attitudes are not fixed. They change over time and they change in response to circumstances.

Roles, the self, and identity

There is another closely related sense of ‘position’ that goes beyond your own opinions on particular issues. This is how others (and you yourself) position you in society. This is looking at you from the outside in rather than the inside out.

A position is not just where you are located on some dimension but it also implies a set of rights, responsibilities and expectations. A child is in a different position to an adult. A doctor is in a different position to a teacher or a taxi driver. We have different expectations of them, accord them different privileges and hold them to account according to different criteria.

Work roles are often known as ‘positions’. You apply for a position. Roles, like that of teacher, policeman, nurse, supervisor, colleague, judge or dogsbody can be seen as sets of rights, duties and expectations that are externally validated – i.e. that are commonly agreed amongst people other than (and including) the occupier of the role. Roles like parent and friend also confer rights and responsibilities. When you position somebody as ‘a friend’ you confer on them the right to know more about you than other people do and the duty to act in your best interests.

BBC Radio 4, In Our Time: Friendship, March 2006, 41:42 minutes
http://www.bbc.co.uk/programmes/p003hyd3

Our relationships with different people position us in different ways. To one person we may be a pupil while to another a teacher, to one a benefactor while to another a dependent, to one a comedian while to another deadly serious. As we move in and out of different social situations these different facets to our identities come to the fore or recede. It is as if our bodies contain multiple versions of the self, each triggered by particular circumstances or situations.

When you position your own rights and duties you are helping to define your ‘self’. If you believe that it is your duty to help the homeless or volunteer for unpaid jobs in the community, you are defining yourself as a particular type of public-spirited person. If you believe that you have the ‘right’ to use fear and violence to control others and justify this in terms of your duty to your country then, like Hitler or Assad, you are again defining your ‘self’.

We can distinguish between the rights and duties that are self-defined and those that are defined by others. In childhood your role is largely defined by others – in the family, in school etc. As you move into adulthood you increasingly define your own positions. In principle, you have control over how you define yourself, but in practice it is very hard to do this independently of the expectations of others and how they position you. Significant mismatches between your own definitions and those of others create tension, and even more significant stresses occur when there is a mismatch between your own perceptions of yourself and what you feel they ought to be.

One or many selves?

Our naïve assumption is that we have just one identity in the same way as we have only one body (and even that is constantly being renewed, such that every cell in our bodies may be different from what it was a few years before). In fact, it is difficult to identify what remains constant over a lifetime.

Youtube Video, Personal Identity: Crash Course Philosophy #19, CrashCourse, June 2016, 8:32 minutes

But just as we can have multiple roles, we simultaneously maintain multiple identities. You may, for example, find yourself carrying on some internal debate (an inner dialogue) when making a decision. Take a difficult decision, like buying a car, moving house or making a career move. It is as if each ‘self’ is arguing the case for each option. It adopts a particular position (or set of positions) then loosely keeps track of its pre-requisites and implications. It can then engage in dialogue with, and be influenced by, other positions.

A: I want the red sports car
B: It’ll be expensive to run
A: But it would be worth it if …
C: What kind of fool are you wanting a sports car at your age

This suggests not only that you change in response to circumstances, constantly re-configuring your positions to adjust to various pressures and concerns, but also that you may be made up of many different ‘selves’ all vying to have their voices heard.

Youtube Video, What Causes The Voice In Your Head?, Thoughty2, August 2015, 6:57 minutes

And even those selves are not simply stored on the shelf waiting to be activated, but are constructed on the fly to meet the needs of a constant stream of changing circumstances and discussion with the other ‘selves’.

BBC Radio 4, The Human Zoo: The Improvising Mind, June 2015, 27:37 minutes
http://www.bbc.co.uk/programmes/b0608dvw

These selves are normally related to each other. They may be constructed from the same ‘materials’ but they can be seen as distinct and to make up the ‘society of minds’ as discussed in the blog posting ‘Policy regulates behaviour’ in the section ‘who is shouting the loudest’.

Maintaining consistency of the self

Normally we seek to be consistent across our positions and this provides some stability. We strive to be internally consistent in our own view of the world and form small systems of belief that are mutually self-supporting.

Some systems of belief are easy to maintain because they are closely tied to our observations about reality. If I believe it's night-time then I will expect it to be dark outside, the clock to read a certain time, certain people to be present or absent, particular programmes to be on TV or radio and so on. This belief system is easy to maintain because we can expect all the observable evidence to point in much the same direction.

Beliefs about the self, by contrast are rather more fragile but nevertheless still require consistency.

Without consistency there is no stability and without stability there is unpredictability and chaos.

You need to know where you are – your position – in order to function effectively and achieve your intentions (See Knowledge is Power to Control). Maintaining a consistent model of yourself is, therefore, something of a priority. This is why we spend a good deal of our mental energy spotting inconsistencies and anomalies in the positions we take and finding ways of correcting them.

Much of what drives us as individuals is the mismatch between how we position ourselves and how we believe others, and ourselves, position us in terms of our duties, rights and expectations. We are constantly monitoring and evaluating how our own feelings, thoughts and behaviours align, and how these align with what we believe other people feel, think and behave in relation to us. If somebody unexpectedly slights us (or gives us an unexpected gift) we cannot help looking for an explanation – i.e. aligning our own belief system with what we believe others are doing and thinking. This is not necessarily to say that we are very good at getting it right and there are a whole host of ways in which we achieve alignment on spurious grounds. These are cognitive biases and their discussion underpins much of what is written in these blog postings.

Youtube Video, Identity and Positioning Theory, rx scabin, January 2013, 7:56 minutes

Re-writing the self

The world is not a totally predictable place, and still less so are the people we encounter in our lives. As a consequence, we are constantly creating and re-writing the story-line of our own lives in the light of changes in how we position ourselves with respect to others, and how we want or feel we ought to be positioned (see ‘The Story of Your Life’).

Although inconsistency tends to create tension and a drive to minimise it, this is something of a thankless and never-ending task. However much we work at it, there is always a new interpretation that seems to be more satisfactory if we care to look for it. Either new ‘evidence’ appears that we need to account for or we may see a new way of looking at things that makes more sense than a previous interpretation.

In a classic experiment in psychology (Festinger et al 1959) students were given either $1 or $20 to lie to other students about how interesting a task was. They were then asked about their own attitude to the task. Contrary to what you might expect, the students who were paid only $1 to lie had a more positive attitude to the task. Festinger explains this in terms of maintaining consistency between the lie and one's own attitude when not receiving sufficient payment to lie.

Youtube Video, Testing cognitive consistency, PSYC1030x Series – Introduction to Developmental, Social & Clinical Psychology, April 2017, 3:28 minutes

The blog post called ‘It’s like this’ introduced the work of the psychologist George Kelly, who set out the theory of personal constructs. Kelly uses the notion of constructs to explain how people develop the way in which they ‘see’ the world. Kelly’s personal construct theory provides a more common-sense and accessible way of understanding some of the ideas of ‘constructivism’ than some of the later, more obscure post-modernist accounts based in linguistics. George Kelly developed his theory of ‘personal constructs’ way back in 1955, exploring this view of people as ‘personal scientists’, constantly trying to make sense of the world through observation, theory and experiment.

Youtube Video, PCP film 1 Personal Construct Psychology and Qualitative Grids, Harry Procter, September 2014, 28:53 minutes

In a volatile, uncertain, complex and ambiguous (VUCA) world (see: https://www.tothepointatwork.com/article/vuca-world/ ) it is impossible to keep up with aligning one’s own positions with what we experience. This is true on the macro scale (the world at large) and in respect of every small detail (i.e. ‘positioning’ what even one other person thinks of you at any given moment).

Thankfully, to some extent, there is stability. Much of the world stays much the same from moment to moment and even from year to year. Stability means predictability, and when there is predictability we formulate routines and habits of thinking (e.g. Kahneman’s system 1 thinking). Routines of thought and behaviour require little mental effort to maintain. We are often reluctant to move away from established habits because, apart from providing some degree of security and predictability, it takes effort to change. In the same way, there is inertia to changing one’s position on some issue, and we will tend to defend it, even in the face of strong evidence to the contrary.

This is particularly true in relation to the ‘self’. We don’t like to admit we are wrong to others or to ourselves. We don’t like to be accused of being inconsistent. The confirmation bias is particularly strong, leading us to seek evidence in support of our view and to ignore evidence that does not fit. If something happens that forces us to change our world view – we lose our job, a relationship ends, or we lose a court case – then the consequences can cause a great deal of psychological pain as we are forced to change positions on a wide range of issues, particularly those related to our own self-perception and evaluation of our self-worth.

Youtube Video, Cognitive dissonance (Dissonant & Justified), Brad Wray, April 2011, 4:31 minutes

Where there isn’t stability, we can fortunately live with a great deal of ambiguity and uncertainty. Our tolerance of ambiguity and uncertainty enables us to hold some quite inconsistent, anomalous or hypocritical positions without being unduly concerned, and indeed often without even being particularly aware of it. This is partly because our positions are not necessarily fixed or even known.

We can easily construct new positions, especially when we are trying to justify an emotional reaction to something. This is another example of trying to minimise dissonance and inconsistency but this time the difference we seek to minimise is between our emotional reaction and our reasoning mind. We have similar problems keeping our behaviours consistent with our emotions and thoughts.

We can construct positions ‘on the fly’ to suit circumstances and support the courses of action we want to take. We can post-rationalise to support courses of action we have taken in the past. We can toy with positions and ask ‘what would it be like if I took such and such a position?’, just to see how we feel about it. In fact, each one of our multiple selves, so to speak, can construct quite elaborate scenarios in our ‘mind’s eye(s)’, together with related thoughts, plans and even feelings.

It is in the nature of the human condition that we ‘duck and dive’ and those that are best able to duck and dive tend to be those that can achieve their goals most successfully. Tolerance of ambiguity, and the ability to quickly evaluate and adjust to new circumstances and interpretations, is a great virtue from the point of view of survival.

Furthermore, our realities are socially constructed. We learn from others what to note and what to ignore. Our friends, families, organisations, the media and culture shape what we perceive, interpret and how we act in response. It is our social environment that largely determines the ‘positions’ that we see as being available and the ones that we choose for ourselves.

YouTube Video, positioning theory in systemic family therapy, CMNtraining, July 2015, 33:10 minutes

Positioning and epistemology

Although we strive for consistency between our beliefs and what we take to be external reality, in fact the relationship can be quite tenuous. You might, for example, take the position that you are a good driver. Then, one day you have an accident, but instead of changing your belief about yourself, you believe that it was the fault of another driver. A second accident is blamed on the weather. A third is put down to poor lighting on the road, and a fourth to road-works. You have now built up a mini system of beliefs about the factors that lead to accidents, all of which could, more simply and more plausibly, be explained by your being a poor driver. It is not until one day you are charged with careless driving that you finally have to revise this somewhat fragile belief system.

Belief systems can be a bit like bubbles. They can grow larger as more supporting beliefs are brought in to sustain an original position. Then other beliefs must prop up the supporting beliefs and so on. If the original position had no support in reality the whole system will be fragile, eventually become unsustainable and burst leaving nothing in its place.

In contrast, belief systems that have a stronger foundation do not need propping up with other fragile positions. Each position can stand alone and therefore reinforce the others it is related to. This results in a stable belief system with mutually reinforcing positions, and even if one falls away, the rest of the structure will still stand.
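The ‘bubble’ metaphor can be made concrete. Below is a minimal sketch, assuming beliefs are nodes in a support graph and a belief counts as grounded only if it rests, directly or indirectly, on observation; the example beliefs reuse the ‘good driver’ story above, and all names and links are purely illustrative.

# Sketch: a belief is 'grounded' if it rests, directly or indirectly, on observation.
# The belief names and support links are illustrative, not a model from the text.

def grounded(belief, support, observed, seen=None):
    """Return True if `belief` is connected to observation via the support graph."""
    seen = seen or set()
    if belief in observed:
        return True
    if belief in seen:            # circular support props nothing up
        return False
    return any(grounded(s, support, observed, seen | {belief})
               for s in support.get(belief, []))

# A fragile 'bubble': each belief props up the next, none touches observation.
bubble = {
    "I am a good driver": ["the other driver was at fault"],
    "the other driver was at fault": ["the weather caused the accident"],
    "the weather caused the accident": ["the lighting was poor"],
}
print(grounded("I am a good driver", bubble, observed=set()))        # False

# A more stable system: positions stand on observations and reinforce each other.
stable = {
    "I am a careful driver": ["no accidents in ten years", "passed an advanced driving test"],
}
facts = {"no accidents in ten years", "passed an advanced driving test"}
print(grounded("I am a careful driver", stable, observed=facts))     # True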

Having said that, no belief system is invulnerable. Even the most ‘solid’ of belief systems, such as Einsteinian physics, came under attack from quantum mechanics. The way in which science progresses is not by the gradual accumulation of ‘facts’ but, as Kuhn observed in 1962, has more to do with the eventual replacement of established paradigms by new ones, only once the old paradigm becomes completely unsustainable. This model probably also applies to the belief systems of individuals, where established systems are clung to for as long as they can be sustained.

As we move towards the development of artificial intelligence and robots it will be increasingly necessary to understand the logic and mathematics of belief systems.

Noah Friedkin (2016) has developed a mathematical model of how belief systems change in response to new evidence, and illustrates it with how beliefs changed amongst the US public as new evidence emerged in relation to the Iraq war. http://www.uvm.edu/~cdanfort/csc-reading-group/friedkin-science-2016.pdf
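Friedkin’s model belongs to a family of ‘opinion dynamics’ models. As a flavour of what the mathematics looks like, here is a minimal sketch of Friedkin–Johnsen-style dynamics, in which each agent’s belief is repeatedly updated as a blend of its neighbours’ current beliefs and its own initial belief. The weights and numbers below are invented for illustration and are not taken from the paper.

import numpy as np

# Friedkin-Johnsen-style update (illustrative numbers only):
#   x(t+1) = A @ W @ x(t) + (I - A) @ x(0)
# W says who listens to whom; A says how open each agent is to social influence.
W = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])          # row-stochastic influence weights
A = np.diag([0.9, 0.5, 0.1])             # agent 3 is barely open to influence
x0 = np.array([0.9, 0.5, 0.1])           # initial belief that "the evidence is solid"

x = x0.copy()
for _ in range(100):
    x = A @ W @ x + (np.eye(3) - A) @ x0

print(np.round(x, 3))   # beliefs settle somewhere between each prior and the group view

The stubborn agent stays close to its initial belief, while the more open agents drift towards a shared view – a crude picture of how evidence and social influence combine.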

How will robots build their belief systems and change them in the light of evidence? This is one of the issues examined at www.robotethics.co.uk .

The naive theory of knowledge is that our knowledge and perceptions are simply a mirror of reality. However, the more we think about it the more we understand that what we take to be reality is very much a function of how we perceive and interpret it.

Some animals, for example, can detect frequencies of sound and light that people cannot detect – they literally live in a different world. In the same way, one person may ‘see’ or ‘interpret’ what they perceive very differently from another person. They are ‘sensitive’ to quite different features of the world. A skilled football player may see a foul or an offside, while a non-player ignorant of the rules just sees people aimlessly kicking a ball around. Also, we are highly selective in what we attend to. When we look at a clock, we see the time, but afterwards often cannot say whether the clock had numbers or marks, let alone the colour of the clock-face. In the post-modern view of the world, reality is ‘constructed’ from the meaning we actively seek and project onto it, rather than what we passively receive or record.

Positioning in relationships

The term ‘relationship’ can include personal relationships, work relationships, relationships between organisations, relationships between the citizen and the state, relationships between countries and many more. It can be easier to think in terms of personal relationships first and then go on to apply the principles to other types of relationship.

Relationships reveal an important characteristic of positioning. If I take one position, I may necessarily force you into another, whether you like it or not. So, if I take the position that you are ‘lazy’, for example, then you either have to agree with or challenge that position. One way or another, ‘laziness’ has become a construct within our relationship and it is quite difficult to dismiss or ignore, especially if you are being labelled as ‘lazy’ at every opportunity. It is quite difficult to live with another person’s positioning of you that you do not yourself agree with. It is a kind of assault on your own judgement. We seek not only internal consistency but also consistency with others’ perceptions. Any dissonance or discrepancy will create some tension that motivates a desire to resolve it.

There is also a more subtle form of positioning. This is not necessarily positioning with respect to a particular issue but a more general sense in which you relate to a particular person, society in general, a job, or indeed more or less anything you care to think about. Are you close to or distant from it; behind it or ahead of it; on top of it or is it on top of you? Here positioning is being used as a metaphor for how you generally relate to something.

Youtube Video, Relationship Position – Metaphors with Andrew T. Austin, Andrew T. Austin, July 2012, 13:34 minutes

Another interesting idea, related to spatial positioning in relationships, is Lewin’s Force Field Theory, in which the forces ‘for’ and ‘against’ some change are assessed. If, for example, in some relationship there are some attracting forces and some repelling forces (an approach/avoidance conflict), then a party to the relationship may find some optimal distance between them where the forces balance. If the other party has a different optimal distance at any point in time, then we leave Lewin’s theory and are into negotiation.

Youtube Video, Lewin, headlessprofessor, November 2015, 5:35 minutes

Negotiation

In a relationship, one way of resolving a discrepancy is to argue the case. So I may argue that I am not lazy and support this by evidence – ‘look at all the things I do…’. Another way is to ‘live up to’ the positioning. So, if you describe me as ‘kind’, I may start to exhibit kinder behaviours, and if you position me as ‘lazy’ I may be more likely to stay on the couch. Both ‘arguing the case’ and ‘living up to’ can be seen as an attempt to resolve a positioning discrepancy – to seek consistency and to simplify.

However, people, being as intelligent as they are, can often predict or at least guess at each other’s positions, and can then use this knowledge to alter their own actual or declared positions. A positive use of this would be to act in a way that supports or cooperates with another person’s position. So, if I can guess that you would not want to go out to see some of our friends tonight, to save time or argument I might say that I don’t want to go out either, even if I actually would.

Sometimes this will involve negotiation. We may not want the same things but we may well be prepared to ‘trade’. A good trade is one where both parties can change their position on something that costs little to them but gives a lot of value to the other. I might say ‘if we go out we can get a take-away on the way back’, knowing that this will make the proposition more attractive to you.

Alternatively, I might exaggerate the extent to which I think going would be a good thing, knowing that we might then settle on going for a short time (which is what I really want). However, once we get into this type of ‘hidden’ positioning everything starts to get more complicated as we both try to out-guess what the other really wants. Trades are made considerably easier if both parties trust each other to be honest about their own costs and values. Negotiation will start to get difficult as soon as one or both parties hide information about cost and value in an attempt to seek advantage (e.g. by pretending that something is of no value to them when it is).
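That ‘good trade’ rule can be stated as a simple calculation: a deal works when, for each party, the value of what they receive exceeds the cost of what they concede. Here is a minimal sketch with invented items and figures, purely to illustrate the idea.

# Each concession has a cost to the party giving it and a value to the party
# receiving it. All figures are invented for illustration.
concessions = [
    # (giver, cost_to_giver, value_to_receiver)
    ("me",  1.0, 4.0),   # I agree to go out, but only for a short time
    ("you", 0.5, 3.0),   # you agree we pick up a take-away on the way back
]

def net_gain(party):
    received = sum(value for giver, _, value in concessions if giver != party)
    given = sum(cost for giver, cost, _ in concessions if giver == party)
    return received - given

for party in ("me", "you"):
    print(party, net_gain(party))   # both positive, so the trade benefits both sides

Hiding or misstating the cost and value figures is exactly what makes real negotiations harder, as the paragraph above notes.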

It is useful to distinguish between one’s ‘position’ and one’s ‘interest’. Your position in a negotiation is what you say publicly to the other party, whereas your interest is often hidden. Your interest can be thought of as the reason you hold a particular position. This reason may be ‘because I want to get as much for myself out of this transaction as possible’, and we often (sometimes unjustifiably) attribute this interest to the other party. But as often as not the reason may be quite different. It might even be that the other party wants you to gain as much as possible out of the transaction, but your natural suspicion prevents you from seeing their genuine motive. Equally, the other person’s interest might have nothing to do with how much each gets out of it. You may not want to go out because it’s cold outside. If this ‘interest’ is revealed it can open up new solutions – ‘we can take the car’ (when the default assumption was that we would walk).

Youtube video, Interests and Positions in Negotiation – Noam Ebner with Vanessa Seyman, Noam Ebner, February 2015, 15:03 minutes

Although it is possible to hold hidden positions manipulatively or to seek advantage, much of what goes on in negotiation has more to do with understanding our own and the other’s interests. A lot of the time we only have vague ideas about where we stand in our interests and positions. We have even less information about where somebody else stands. We need to test possible positions as they apply to particular circumstances before we can make up our minds. I need to say ‘let’s go out tonight’ before I know how either you or I will actually feel about it, and as we debate the various factors that will influence the decision we may both sway from one position to another, also taking the other’s reactions and uncertain interests and position into account, before settling on our own position, let alone a mutual one.

However, through the informal use of positioning theory in everyday life, we can identify and make explicit what the various dimensions are. We can reveal where each party places themselves and the other on these dimensions and where the differences lie. This takes an important step towards arriving at decisions that we can agree on.

Positioning, power and control

The rights and responsibilities that we confer on each other, and accept for ourselves, determine the power relationships between us. Studies of power amongst college students in the US suggest that power is granted to individuals by others rather than grabbed. Certain people are positioned by others to rise in the social hierarchy because they are seen to benefit a social group.

Youtube video, Social Science & Power Dynamics | UC Berkeley Executive Education, berkeleyexeced, May 2016, 3:43 minutes

Donald Trump’s rise to power can be read within this framework as power granted by the manoeuvrings of the Republican Party in its candidate selection process and by the growing group of economically disenfranchised workers in the US. The rise to prominence of UKIP in the UK can be read as following a similar pattern.

In the power relationships between individuals, often very little is spelt out, and rights and duties between individuals can be in constant flux. In principle it is possible to formalise the positions of the parties in a relationship in a contract. Marriage is a high-level contract, the terms of which have been ‘normalised’ by society, mainly in the interest of maintaining the stability and hence the predictability of the social structure. Many of the detailed terms are left undefined and are themselves a matter for negotiation, as the need arises, such that the structures holding the relationship between two individuals together can flex a great deal. Many of the terms are implied by social convention within the immediate culture and circumstances of the parties. Some terms may be explicitly discussed and negotiated, especially when one party feels there has been a breach on the part of the other. As people and circumstances change, terms may be re-negotiated. It may take major, repeated or prolonged imbalances or breaches of implied terms to break the ‘contract’.

For example, if your position is that I have a duty to supply you with your dinner when you come home from work, and I accept that you have a right to that position, then we have established a power relationship between us. If I do not cook your dinner one evening then your right has been breached and you may take the position that you have a right of redress. Perhaps it has created an obligation that I should do something that provides an equivalent value to you, and in that sense, I am in your debt. Or perhaps it gives you the right to complain. Alternatively, I may take the position that while you have that right in general, I have the right to a night off once in a while, and then we may be into a negotiation about how much notice I may be expected to give, how often and so on. The rights and duties with respect to cooking dinner will be just one of many terms in the implied contract between us. It may be that I accept your right to be provided with dinner on the basis that you pay for the food. And this is only the start. There may be a long and complicated list of implied terms, understood circumstances of breach and possible remedies to rectify breaches.

Ultimately, to maintain the relationship we must both expect to have a net benefit over the longer term. We may be prepared to concede positions in the short term either by negotiating for something else of immediate value, or the value may be deferred as a form of social ‘debt’ with the confidence and expectation that the books will be balanced one day. However, the precise balancing of the books doesn’t matter much so long as the relationship confers a net benefit.

Coercive, financial and other forms of power

Descriptions of power in terms of rights, duties, laws, and social norms refer to the type of power we are used to in democratic society. In some relationships the flexibility to change or negotiate a change in position is severely limited. An authoritarian state may maintain power using the police and the army. An authoritarian person, narcissist or psychopath will also demonstrate an inflexibility over positioning. The authoritarian state or individual may use coercive power. The narcissist and the psychopath may have difficulty in empathising with another person’s position.

Wealth also confers power. People and organisations can be paid to take up particular positions – both in the sense of jobs and in the sense of attitudes. Pay for a marketing campaign and you can change people’s positions on whether they will buy something or vote some way. In modern market-based societies, wealth is legitimised as an acceptable way of granting power to people and organisations that are seen to confer benefit on society. However, wealth can easily be used, both subtly and coercively, to change people’s positions to align with value systems that are not their own.

Youtube video, How to understand power – Eric Liu, TED-Ed, November 2014, 7:01 minutes

Positioning in organisations

Just as important are the positions taken within an organisation and the various dilemmas and tensions that these reveal. For example, for most organisations there is a constant tension between quality and cost. Some parts of the organisation will be striving to keep costs down while others are striving to maintain quality. Exactly how this plays out, and how it matches the demand in the market, may determine a product’s success or failure. The National Health Service (NHS) in the UK is a classic example of a publicly funded organisation that is in a constant struggle to maintain quality standards within cost constraints.

Different parts of the organisation will take different positions on the importance of various stakeholders. The board may be concerned about shareholder value, the management concerned to satisfy customers and the workers concerned for the welfare of the staff. The R&D department may be more concerned about innovation and the sales force more concerned about the success of the current product lines. Again, by making explicit the positions of each group, it is possible to identify differences, debate the trade-offs and more readily arrive at policies and actions that are agreed to serve their mutual interests. Where tensions cannot be resolved at lower levels in the organisation, they can become the concern of the executive (see ‘The Executive Function’).

Another type of positioning has an important role to play in organisations. A commercial company may spend a lot of effort identifying and maintaining its brand and market positioning. This is its position with respect to its competitors and its customers, and it helps define its unique selling points (USPs).

Youtube Video, Marketing: Segmentation – Targeting – Positioning, tutor2u, April 2016, 4:08 minutes

‘Don’t ask for permission, ask for forgiveness’ is a mantra chanted by people and companies that put a premium on innovation. How we each act is not determined only by what we can do; it is a matter of both what we can do and what we are permitted to do. We can be permitted by others, who confer on us the right to act, or by the rights we confer on ourselves. If we seek forgiveness rather than permission, we confer on ourselves the right to take risks and then to respond to the errors we make when they cross the boundaries of the rights and duties other people confer on us.

We live in a competitive social world where we may have some choice over our trades in rights and duties. If I take the position that employer X is not paying me enough for the job I do, I can potentially go to employer Y instead. However, there are costs and uncertainties in switching that make social systems relatively stable. The distribution of power is therefore constrained to some extent by the ‘free market’ in the trading of rights and duties.

Positioning in language and culture

Positioning can involve ascribing attributes to people (e.g. she is strong, he is kind etc.).

Every time you label something you are taking a position.

Linguistic labels can have a powerful influence within a culture because they can come heavily laden with expectations about rights and responsibilities. Ascribing the attribute ‘disabled’ or ‘migrant’, for example, may confer rights to benefits, and may confer a duty on others to help the vulnerable overcome their difficulties. Ascribing the attribute ‘female’, until relatively recently, assigned different legal rights and duties to the attribute ‘male’. However, the positioning can extend far beyond legal rights and duties to a whole range of less explicit rights and duties that can be instrumental in determining power relationships.

It is not always appreciated how far the labels we put on people position them with respect to both explicit and implicit rights and duties, and it is easy to use labels without fully appreciating the consequences. The labels we put on people are not isolated judgements or positions. Through learned associations, they come in clusters. So to label somebody as ‘intelligent’ is also to imply that they are knowledgeable and reasonable. It even implies that there is a good chance that they will wear glasses. The label brings to mind a whole stereotype that may involve many detailed characteristics.

This is both useful and problematic. It is useful because it prepares us to expect certain things, and that saves us having to work everything out from detailed observation and first principles. It is a problem because no particular individual is likely to conform to the stereotype, and there is a good chance that we will misinterpret their actions or intentions in particular situations. Particularly pernicious is when, through stereotyping, we position somebody along dimensions such as ‘friend – foe’ or ‘inferior – superior’, because of the numerous implications for the way in which we infer rights and duties from this, and hence how we behave in relation to them.

Equally pernicious is when language is used to mislead. This is often the case in the language of politics and the language of advertising.

The terms used to describe a policy or product can create highly misleading expectations.

Youtube video, Language of Politics – Noam Chomsky, Serious Science, September 2014, 12:45 minutes

George Orwell in his book ‘1984’ understood only too well how language can be used to influence and constrain thought.

Youtube Video, George Orwell 1984 Newspeak, alawooz, June 2013, 23:08 minutes

Positions, rights and duties

Much of our conversation concerns categorising things and then either implicitly or explicitly ascribing rights and responsibilities. So we may gossip on the bus about whether a schoolmate is a bully, whether a person is having an affair or if someone is a good neighbour. In so doing we are making evaluations – or, in other words, taking positions.

The bully has no right to act as they do, and their behaviour confers on others the right to punish. Similarly, the person having an affair may be seen as neglecting a duty of fidelity and therefore also relinquishing rights. The good neighbour may be going beyond their duty and so earning the right to respect.

Between two individuals much discussion involves negotiation over rights and duties and what constitutes fair trade-offs, both in principle and in practice. If one person does something for another, an implicit principle of fairness through reciprocation creates an obligation (a duty that can be deferred) to do something of equal value (but not necessarily at equal cost) in return. A perceived failure to perform a duty may create a storyline of victimisation in the mind of one party that the other party may be blissfully unaware of, unless conversation takes place to resolve it.

When you have a duty, it is generally to another person or organisation. Typically, you have a duty when you have the power to overcome another person’s vulnerability. So if a person is too short to reach something on a high shelf, and you are tall enough to reach it, people tend to believe that you have a duty to do so.

To claim a right is to admit a vulnerability and to assert that somebody with the power to address that vulnerability will do so.

The right to a fair trial admits a vulnerability to the rushed judgement of the crowd (or the monarch), and confers a duty on the judicial system to protect you from this. The right to citizenship and healthcare admits to vulnerabilities with regard to security and health and confers a duty on the state to provide it.

Youtube video, What Are Rights? Duty & The Law | Philosophy Tube, Philosophy Tube, January 2016, 6:41 minutes

Positioning, ethics and morality

The psychologist Lawrence Kohlberg, in 1958, developed a test of moral reasoning and proposed a number of stages of development in being able to take moral positions. The higher the stage, the greater the ability to take into account a range of moral positions. A small child may focus on only one aspect of a moral problem. At later stages a person will take into account the positions of different interests – the family, the community, the law and so on. At stage 4 there is an understanding of social order. At stage 6 (a stage that very few people reach) a person is able to reason through a complete range of moral positions. Most adults operate at stages 3 or 4. Kohlberg’s methods have since been questioned and elaborated. One theory is that we act morally because of our emotional reactions to a situation, and that moral reasoning is more of a social act used when persuading other people. There are also cultural differences in the importance attributed to moral positions.

BBC Radio 4, Mind Changers: The Heinz Dilemma, September 2008, 27:32 minutes
http://www.bbc.co.uk/programmes/b008drfq#play

Positioning in international relations

International relations are nearly always set within the context of multiple parties. Even when considering Arab/Israeli or US/Mexico relations there is a context that involves many other parties, and positions are held in the light of alignments with close ‘allies’. In fact the context can be quite entangled and confusing, as in the case of Syria (involving the Syrian regime, the Syrian people, the Islamic State, the Russians and the US, as well as many other factions, let alone international groupings such as the United Nations and charities). Most importantly, any government or regime may have to square its position on the international stage with its position within its own country. All these factors considerably reduce the flexibility of re-positioning, except when circumstances configure in such a way that there is a window of opportunity.

Examples of international conflicts can be found at:

http://foreignpolicy.com/2017/01/05/10-conflicts-to-watch-in-2017/

Often in international relations it is difficult to establish a party’s true costs and values because parties may hide or exaggerate these to seek a negotiating advantage. It is a matter of working out, for each party, where there is least rigidity on a set of relevant positions, defining small changes from one (set of) position to another, and then working out how to present this change to different parties in terms of their own values, language and objectives.

Youtube video, Negotiations | Model Diplomacy, Council on Foreign Relations, November 2016, 4:57 minutes

It can be important to have a neutral or otherwise acceptable party present propositions or lead negotiations. In terms of how it will be received, the source of a communication can be more influential than the communication itself.

Separating out the underlying reality and logic of the positions from how they are presented and by whom is a first step in resolving conflict. However, throw in unpredictable factors, like a US president failing to follow any previous logic or process, and any such model can break down.

The hidden positions of designed objects and procedures

All artifacts contain embedded positions. So, a door handle embeds the position that it is ok to open the door, and a microphone embeds the position that it is ok to record or amplify sound. Even more subtly, merely making one thing more readily available than another can embed a position. So, if there is a piano in a bar or a railway station, then it automatically raises the possibility that it may be ok to play it.

This characteristic of all objects and artifacts has a specific name within psychology. It is called ‘affordance’. The door handle affords opening and the piano in a public place affords playing.

Youtube video, Universal Design Principles 272 – Affordance, anna gustafsson, October 2014, 2:10 minutes

Looking at things from this point of view may be difficult to grasp, but it has massive implications. These positions are, in one sense, obvious, but in another they are difficult to see. They can be so obvious that they go unquestioned and are effectively hidden from scrutiny. They can easily be used to manipulate and exert power, without people being particularly aware of it.

There are several examples that apply to the design of procedures:

• A corporation, for example, may make it very easy for you to buy a service but difficult for you to discontinue it (e.g. subscriptions that renew automatically and require you to contact the right person with the right subscription information, both of which are hard to find, before you can cancel).
• A government may have you complete a long form and meet many requirements in order to claim benefits, while providing many small reasons for a benefit to be taken away.
• Another classic and more obvious example is how, in the UK, energy companies offer a range of time limited tariffs and switch you to a more expensive tariff at the end of the period, requiring you to make the effort to switch suppliers or pay substantially more (as much as 50% extra) for energy.

These subtle affordances are often just accepted or overlooked as just being ‘the way things work’, but when all one’s energy is taken up dealing with the trivia of everyday life, they turn out to be a powerful force that ‘keeps you in your place’ (whether or not they are deliberately designed to do so).

Positioning and narrative

An utterance in a conversation can mean entirely different things depending upon the context. So if I ask ‘Did you pass the paper?’ I will mean quite different things depending on whether I am referring to an incident where somebody left a newspaper on a train, to a recent exam we had been talking about, or to a paper being considered by a committee.

The storyline is different in each case and the position I take in asking the question may also be different. My question may be simple curiosity, the expression of a hope, or a signal of my intention to act in a particular way, also depending on the context or storyline. In fact, my position will probably be unclear unless I explain it. It is more than likely that you will interpret it one way, in accordance with your theory about what’s going on, while I mean it a different way, according to my own. Furthermore, we may never realise it and may be quite surprised should we compare our accounts of the conversation at a later date.

Youtube Video, Positioning Theory, ScienceEdResearch, July 2017, 6:02 minutes

By contrast, the blog post called ‘It’s like this’ notes how ‘the single story’ (a fixed and commonly held interpretation or position) can trap whole groups of people into a particular way in which others see them and how they themselves see the world. One way or another ‘position’ has a powerful influence.

Positioning theory integrates

This tour around the many applications of ‘positioning theory’ shows how it integrates many of the concepts being put forward in this series of blog postings. It is a powerful tool for understanding the individual, the individual in the context of others, social institutions in relation to each other and institutions in relation to the individual. In its relation to rights and duties it addresses some of the dynamics of power and control. It even transcends the distinction between people and objects, and has profound implications for the development of artificial intelligence.

– Can we trust blockchain in an era of post truth?

Post Truth and Trust

The term ‘post truth’ implies that there was once a time when the ‘truth’ was apparent or easy to establish. We can question whether such a time ever existed, and indeed the ‘truth’, even in science, is constantly changing as new discoveries are made. ‘Truth’, ‘Reality’ and ‘History’, it seems, are constantly being re-constructed to meet the needs of the moment. Philosophers have written extensively about the nature of truth and this is an entire branch of philosophy called ‘epistemology’. Indeed my own series of blogs starts with a posting called ‘It’s Like This’ that considers the foundation of our beliefs.

Nevertheless there is something behind the notion of ‘post truth’. It arises out of the large-scale manufacture and distribution of false news and information made possible by the internet and facilitated by the widespread use of social media. This combines with a disillusionment in relation to almost all types of authority, including politicians, media, doctors, pharmaceutical companies, lawyers and the operation of law generally, global corporations, and almost any other centralised institution you care to think of. In a volatile, uncertain, complex and ambiguous world, who or what is left that we can trust?

YouTube Video, Astroturf and manipulation of media messages | Sharyl Attkisson | TEDxUniversityofNevada, TEDx Talks, February 2015, 10:26 minutes

All this may have contributed to the populism that has led to Brexit and Trump, and can be said to threaten our systems of democracy. However, to paraphrase Churchill’s famous remark, ‘democracy is the worst form of Government, except for all the others’. But does the new generation of distributed and decentralising technologies provide a new model in which any citizen can transact with any other citizen, on any terms of their choosing, bypassing all systems of state regulation, whether they be democratic or not? Will democracy become redundant once power is fully devolved to the individual and individuals become fully accountable for their every action?

Trust is the crucial notion that underlies belief. We believe who we trust and we put our trust in the things we believe in. However, in a world where we experience so many differing and conflicting viewpoints, and we no longer unquestioningly accept any one authority, it becomes increasingly difficult to know what to trust and what to believe.

To trust something is to put your faith in it without necessarily having good evidence that it is worthy of trust. If I could be sure that you could deliver on a promise then I would not need to trust you. In religion, you put your trust in God on faith alone. You forsake the need for evidence altogether, or at least, your appeal is not to the sort of evidence that would stand up to scientific scrutiny or in a court of law.

Blockchain to the rescue

Blockchain is a decentralised technology for recording and validating transactions. It relies on computer networks to widely duplicate and cross-validate records. Records are visible to everybody, providing total transparency. Like the internet it is highly distributed and resilient. It is a disruptive technology that has the potential to decentralise almost every transactional aspect of everyday life and replace third parties and central authorities.

YouTube Video, Block chain technology, GO-Science, January 2016, 5:14 minutes

Blockchain is often described as a ‘technology of trust’, but its relationship to trust is more subtle than it first appears. In a twist of irony, Blockchain addresses the problem of trust by creating a kind of guarantee: you no longer have to be concerned about trusting the other party to a transaction, because what you can trust is the Blockchain record of what you agreed. You can trust this record because, once you understand how it works, it becomes apparent that the record is secure and cannot be changed, corrupted, denied or misrepresented.

Youtube Video, Blockchain 101 – A Visual Demo, Anders Brownworth, November 2016, 17:49 minutes
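The core of that guarantee can be shown in a few lines of code: each block stores the hash of its predecessor, so tampering with any earlier record breaks every link that follows. The sketch below is purely illustrative – no mining, no network, no consensus – just the chaining idea.

import hashlib, json

def block_hash(tx, prev):
    # Hash a block's transaction together with the previous block's hash.
    return hashlib.sha256(json.dumps({"tx": tx, "prev": prev}).encode()).hexdigest()

def build_chain(transactions):
    chain, prev = [], "0" * 64
    for tx in transactions:
        block = {"tx": tx, "prev": prev, "hash": block_hash(tx, prev)}
        prev = block["hash"]
        chain.append(block)
    return chain

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["tx"], prev):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["A pays B 5", "B pays C 3"])
print(verify(chain))             # True
chain[0]["tx"] = "A pays B 50"   # tamper with an early record...
print(verify(chain))             # False - the stored hashes no longer match

In a real blockchain the same record is replicated and cross-checked across many machines, which is what makes quietly rewriting history impractical.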

It has been argued that Blockchain is the next revolution in the internet and, indeed, is what the internet should have been based on all along. If, for example, we could trace the provenance of every posting on Facebook, then, in principle, we would be able to determine its true source. There would no longer be doubt about whether or not the Russians hacked into the Democratic Party computer systems because all access would be held in a publicly available, widely distributed, indelible record.

However, the words ‘in principle’ are crucial and gloss over the reality that Blockchain is just one of many building-blocks towards the guarantee of trustworthiness. What if the Russians paid a third party in untraceable cash to hack into records or to create false news stories? What if A and B carry out a transaction but, unknown to A, B has stolen C’s identity? What if there are some transactions that are off the Blockchain record (e.g. the subsequent sale of an asset) – how do they get reconciled with what is on the record? What if somebody one day creates a method of bringing all computers to a halt or erasing all electronic records? What if somebody creates a method by which the provenance captured in a Blockchain record were so convoluted, complex and circular that it was impossible to resolve however much computing power was thrown at it?

I am not saying that Blockchain is no good. It seems to be an essential underlying component in the complicated world of trusting relationships. It can form the basis on which almost every aspect of life from communication, to finance, to law and to production can be distributed, potentially creating a fairer and more equitable world.

YouTube Video, The four pillars of a decentralized society | Johann Gevers | TEDxZug, TEDx Talks, July 2014, 16:12 minutes

Also, many organisations are working hard to try and validate what politicians and others say in public. These are worthy organisations and deserve our support. Here are just a couple:

Full Fact is an independent charity that, for example, checks the facts behind what politicians and others say on TV programmes like BBC Question Time. See: https://fullfact.org. You can donate to the charity at: https://fullfact.org/donate/

More or Less is a BBC Radio programme (over 300 episodes) that checks behind purported facts of all sorts (from political claims to ‘facts’ that we all take for granted without questioning them). http://www.bbc.co.uk/programmes/p00msxfl/episodes/player

However, even if ‘the facts’ can be reasonably established, there are two perspectives that undermine what may seem like a definitive answer to the question of trust. These are the perspectives of constructivism and intent.

Constructivism, intent, and the question of trust

From a constructivist perspective it is impossible to put a definitive meaning on any data. Meaning will always be an interpretation. You only need to look at what happens in a court of law to understand this. Whatever the evidence, however robust it is, it is always possible to argue that it can be interpreted in a different way. There is always another ‘take’ on it. The prosecution and the defence may present entirely different interpretations of much the same evidence. As Tony Benn once said, ‘one man’s terrorist is another man’s freedom fighter’. It all depends on the perspective you take. Even a financial transaction can be read in different ways. While its existence may not be in dispute, it may be claimed that it took place as a result of coercion or error rather than being freely entered into. The meaning of the data is not an attribute of the data itself. It is, at least in part, an attribute of the perceiver.

Furthermore, whatever is recorded in the data, it is impossible to be sure of the intent of the parties. Intent is subjective. It is sealed in the minds of the actors and inevitably has to be taken on trust. I may transfer the ownership of something to you knowing that it will harm you (for example a house or a car that, unknown to you, is unsafe or has unsustainable running costs). On the face of it the act may look benevolent whereas, in fact, the intent is to do harm (or vice versa).

Whilst for the most part we can take transactions at their face value, and it hardly makes sense to do anything else, the trust between the parties extends beyond the raw existence of the record of the transaction, and always will. This is not necessarily any different when an authority or intermediary is involved, although the presence of a third-party may have subtle effects on the nature of the trust between the parties.

Lastly, there is the pragmatic matter of adjudication and enforcement in the case of breaches of a contract. For instantaneous financial transactions there may be little possibility of breach in terms of delivery (i.e. the electronic payments are effected immediately and irrevocably). For other forms of contract, though, the situation is not very different from non-Blockchain transactions. Although we may be able to put anything we like in a Blockchain contract – we could, for example, appoint a mutual friend as the adjudicator of a relationship contract and empower family members to enforce it – we will still need a system of appeals and an enforcer of last resort.

I am not saying that Blockchain is unnecessary or unworkable, but I am saying that it is not the whole story and that we need to maintain a healthy scepticism about everything. Nothing is certain.


Further Viewing

Psychological experiments in Trust. Trust is more situational than we normally think. Whether we trust somebody often depends on situational cues such as appearance and mannerisms. Some cues are to do with how similar one person feels to another. Cues can be used to ascribe moral intent to robots and other artificial agents.

YouTube Video, David DeSteno: “The Truth About Trust” | Talks at Google, Talks at Google, February 2014, 54:36 minutes


Trust is a dynamic process involving vulnerability and forgiveness and sometimes needs to be re-built.

YouTube Video, The Psychology of Trust | Anne Böckler-Raettig | TEDxFrankfurt, TEDx Talks, January 2017, 14:26 minutes


More than half the world lives in societies that document identity, financial transactions and asset ownership, but about 3 billion people do not have the advantages that the ability to prove identity and asset ownership confers. Blockchain and other distributed technologies can provide mechanisms that can directly service the documentation, reputational, transactional and contractual needs of everybody, without the intervention of nation states or other third parties.

YouTube Video, The future will be decentralized | Charles Hoskinson | TEDxBermuda, TEDx Talks, December 2014, 13:35 minutes

– Ways of knowing (HOS 4)

How do we know what we know?

This article considers:

(1) the ways we come to believe what we think we know

(2) the many issues with the validation of our beliefs

(3) the implications for building artificial intelligence and robots based on the human operating system.


I recently came across a video (on the site http://www.theoryofknowledge.net) that identified the following ‘ways of knowing’:

  • Sensory perception
  • Memory
  • Intuition
  • Reason
  • Emotion
  • Imagination
  • Faith
  • Language

This list is mainly about the mechanisms or processes by which an individual acquires knowledge. It could be supplemented by other processes, for example ‘meditation’, ‘science’ or ‘history’, each of which provides its own set of approaches to generating new knowledge for both the individual and society as a whole. There are many different ways in which we come to formulate beliefs and understand the world.

Youtube Video, Theory of Knowledge: Ways of Knowing, New College of Humanities, December 2014, 9:32 minutes


In the spirit of working towards a description of the ‘human operating system’, it is interesting to consider how a robot or other Artificial Intelligence (AI) that was ‘running’ the human operating system would draw on its knowledge and beliefs in order to solve a problem (e.g. resolve some inconsistency in its beliefs). This forces us to operationalize the process and define the control mechanism more precisely. I will work through the above list of ‘ways of knowing’ and illustrate how each might be used.


Let’s say that the robot is about to go and do some work outside and, for a variety of reasons, needs to know what the weather is like (e.g. in deciding whether to wear protective clothing, or how suitable the ground is for sowing seeds or digging up for some construction work, etc.).

First it might consult its senses. It might attend to its visual input and note the patterns of light and dark, comparing these to known states, and conclude that it is sunny. The absence of the familiar sound patterns (and smell) of rain might provide confirmation. The whole process of matching the pattern of data it is receiving through its multiple senses with its store of known patterns can be regarded as ‘intuitive’ because it is not a reasoning process as such. In the Kahneman sense of ‘system 1’ thinking, the robot just knows, without having to perform any reasoning task.

Youtube Video, System 1 and System 2, Stoic Academy, February 2017, 1:26 minutes

The knowledge obtained from matching perception to memory can nevertheless be supplemented by reasoning, or by other forms of knowledge that confirm or question the intuitively reached conclusion. If we introduce some conflicting knowledge, e.g. that the robot thinks it is the middle of the night in its current location, we then create a circumstance in which there is dissonance between two sources of knowledge – the perception of sunlight and the time of day. This assumes the robot has elaborated knowledge about where and when the sun is above the horizon and can potentially shine (e.g. through language – see below).

In people the dissonance triggers the emotional state of ‘surprise’ and the accompanying motivation to account for the contradiction.

Youtube Video, Cognitive Dissonance, B2Bwhiteboard, February 2012, 1:37 minutes

Likewise, we might label the process that causes the search for an explanation in the robot as ‘surprise’. An attempt may be made to resolve this dissonance through Kahneman’s slower, more reasoned, system 2 thinking. Either the perception is somehow faulty, or the knowledge about the time of day is inaccurate. Maybe the robot has mistaken the visual and audio input as coming from its local senses when in fact the input has originated from the other side of the world. (Fortunately, people do not have to confront the contradictions caused by having distributed sensory systems).
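A toy sketch of that dissonance check follows, assuming the robot simply compares its fast, pattern-matched reading of the scene with what a crude clock-based world model predicts. The pattern table, the rules and the candidate explanations are all invented for illustration.

# Illustrative sketch: a system-1 pattern match followed by a system-2 consistency check.
KNOWN_PATTERNS = {
    ("bright", "no_rain_sound"): "sunny",
    ("dark", "rain_sound"): "raining at night",
}

def system1_percept(sensor_readings):
    # Fast, intuition-like lookup: match raw input against stored patterns.
    return KNOWN_PATTERNS.get(sensor_readings, "unknown")

def expected_light(hour):
    # Crude world model: the sun can only be shining during the day.
    return "bright" if 6 <= hour < 18 else "dark"

def interpret(sensor_readings, clock_hour):
    percept = system1_percept(sensor_readings)
    surprised = percept == "sunny" and expected_light(clock_hour) == "dark"
    if surprised:
        # 'Surprise' triggers a slower search for explanations of the mismatch.
        return percept, ["the clock is wrong", "the senses are faulty",
                         "the input comes from a remote sensor on the daylight side of the world"]
    return percept, []

print(interpret(("bright", "no_rain_sound"), clock_hour=2))
# ('sunny', ['the clock is wrong', 'the senses are faulty', ...])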

Probably, in the course of reasoning about how to reconcile the conflicting inputs, the robot will have had to run through some alternative possible scenarios that could account for the discrepancy. These may have been generated by working through other memories associated with either the perceptual inputs or other factors that have frequently led to misinterpretations in the past. Sometimes it may be necessary to construct unique possible explanations out of component part-explanations. Sometimes an explanation may emerge through the effect of numerous ideas being ‘primed’ through the spreading activation of associated memories. Under these circumstances, you might easily say that the robot was using its imagination in searching for a solution that had not previously been encountered.

Youtube Video, TEDxCarletonU 2010 – Jim Davies – The Science of Imagination, TEDx Talks, September 2010, 12:56 minutes

Lastly, to faith and language as sources of knowledge. Faith is different because, unlike all the other sources, it does not rely on evidence or proof. If the robot believed, on faith, that the sun was shining, any contradictory evidence would be discounted, perhaps either as being in error or as being irrelevant. Faith is often sustained by the faith of others, and this could be regarded as a form of evidence, but in general, if you have faith in or trust something, faith is at least filling the gap between the belief and the direct evidence for it.

Here is a religious account of faith that identifies it with trust in the reliability of God to deliver, where the main delivery is eternal life.

Youtube video, What is Faith – Matt Morton – The Essence of Faith – Grace 360 conference 2015, Grace Bible Church, September 2015, 12:15 minutes

Language as a source of evidence is a catch-all for the knowledge that comes second-hand from the teachings and reports of others. This is indirect knowledge, much of which we take on trust (i.e. faith), and some of which is validated by direct evidence or other indirect evidence. Most of us take on trust that the solar system exists, that the sun is at the centre, and that the earth is in the third orbit. We have gained this knowledge through teachers, friends, family, TV, radio, books and other sources that in their turn may have relied on astronomers and other scientists who have arrived at these conclusions through observation and reason. Few of us have made the necessary direct observations and reasoned inferences to arrive at the conclusion ourselves. If our robot were to consult databases of known ‘facts’, put together by people and other robots, then it would be relying on knowledge through this source.

Pitfalls

People like to think that their own beliefs are ‘true’ and that these beliefs provide a solid basis for their behaviour. However, the more we find out about the psychology of human belief systems the more we discover the difficulties in constructing consistent and coherent beliefs, and the shortcomings in our abilities to construct accurate models of ‘reality’. This creates all kinds of difficulties amongst people in their agreements about what beliefs are true and therefore how we should relate to each other in peaceful and productive ways.


If we are now going on to construct artificial intelligences and robots that we interact with, and whose behaviours impact the world, we want to be pretty sure that the beliefs a robot develops still provide a basis for understanding its behaviour.


Unfortunately, every one of the ‘ways of knowing’ is subject to error. We can again go through them one by one and look at the pitfalls.

Sensory perception: We only have to look at the vast body of research on visual illusion (e.g. see ‘Representations of Reality – Part 1’) to appreciate that our senses are often fooled. Here are some examples related to colour vision:

Youtube Video, Optical illusions show how we see | Beau Lotto,TED, October 2009, 18:59 minutes

Furthermore, our perceptions are heavily guided by what we pay attention to, meaning that we can miss all sorts of significant and even life-threatening information in our environment. Would a robot be similarly misled by its sensory inputs? It’s difficult to predict whether a robot would be subject to sensory illusions, and this might depend on the precise engineering of the input devices, but almost certainly a robot would have to be selective in what input it attended to. Like people, there could be a massive volume of raw sensory input, and every stage of processing from there on would contain an element of selection and interpretation. Even differences in what input devices are available (for vision, sound, touch or even super-human senses like perception of non-visual parts of the electromagnetic spectrum) will create a sensory environment (referred to as the ‘umwelt’ or ‘merkwelt’ in ethology) that could be quite at variance with human perceptions of the world.

YouTube Video, What is MERKWELT? What does MERKWELT mean? MERKWELT meaning, definition & explanation, The Audiopedia, July 2017, 1:38 minutes


Memory: The fallibility of human memory is well documented. See, for example, ‘The Story of Your Life’, especially the work done by Elizabeth Loftus on the reliability of memory. A robot, however, could in principle, given sufficient storage capacity, maintain a perfect and stable record of all its inputs. This is at variance with the human experience but could potentially mean that memory per se was more accurate, albeit that it would be subject to variance in what input was stored and the mechanisms of retrieval and processing.


Intuition and reason: This is the area where some of the greatest gains (and surprises) in understanding have been made in recent years. Much of this progress is reported in the work of Daniel Kahneman that is cited many times in these writings. Errors and biases in both intuition (system 1 thinking) and reason (system 2 thinking) are now very well documented. A long list of cognitive biases can be found at:

https://en.wikipedia.org/wiki/List_of_cognitive_biases

Would a robot be subject to the same types of bias? It is already established that many algorithms used in business and political campaigning routinely build in biases, either deliberately or inadvertently. If a robot’s processes of recognition and pattern matching are based on machine learning algorithms that have been trained on large historical datasets, then bias is virtually guaranteed to be built into its most basic operations. We need to treat with great caution any decision-making based on machine learning and pattern matching.
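A deliberately crude sketch of how that happens: a ‘model’ that does nothing more than learn historical approval rates per group will faithfully reproduce whatever skew was in the decisions it was trained on. The data below is invented purely to make the point.

from collections import defaultdict

# Invented, skewed 'historical decisions': group A was approved far more often than group B.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

counts, approvals = defaultdict(int), defaultdict(int)
for group, approved in history:            # 'training' = counting past outcomes
    counts[group] += 1
    approvals[group] += approved

def predict(group):
    # Approve whenever the group's historical approval rate exceeds 50%.
    return approvals[group] / counts[group] > 0.5

print(predict("A"), predict("B"))          # True False - the old bias is now baked in

Real machine-learning pipelines are far more sophisticated, but the underlying point is the same: a model fitted to biased decisions learns the bias along with everything else.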

Youtube Video, Cathy O’Neil | Weapons of Math Destruction, PdF YouTube, June 2015, 12:15 minutes

As for reasoning, there is some hope that the robustness of proofs that can be achieved computationally may save the artificial intelligence or robot from at least some of the biases of system 2 thinking.


Emotion: Biases in people due to emotional reactions are commonplace. See, for example:

Youtube Video, Unconscious Emotional Influences on Decision Making, The Rational Channel, February 2017, 8:56 minutes

However, it is also the case that emotions are crucial in decision-making. Emotions often provide the criteria and motivation on which decisions are made, and without them people can be severely impaired in effective decision-making. Also, emotions provide at least one mechanism for approaching the subject of ethics in decision-making.

Youtube Video, When Emotions Make Better Decisions – Antonio Damasio, FORA.tv, August 2009, 3:22 minutes

Can robots have emotions? Will robots need emotions to make effective decisions? Will emotions bias or impair a robot’s decision-making? These are big questions and are only touched on here. Briefly, there is no reason why emotions cannot be simulated computationally, although we can never know whether an artificial computational device has the subjective experience of emotion (or of thought). Some simulation of emotion will probably be necessary if robot decision-making is to align with human values (e.g. empathy) and, yes, a side-effect of this may well be to introduce bias into decision-making.
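
To illustrate (and only to illustrate, this is not a claim about how emotion actually works), here is a minimal sketch in which a simulated global ‘emotional state’ supplies the weights on which a robot’s choice between actions is made. All the state names, actions and numbers are invented.

```python
# A minimal sketch of a simulated global 'emotional state' supplying the
# criteria on which a robot's decisions are weighted. States and actions
# are invented for illustration.

emotional_state = {"empathy": 0.8, "fear": 0.2}   # global state, values 0..1

# Candidate actions scored on invented criteria: benefit to others, risk to self.
actions = {
    "help_person": {"benefit_to_others": 0.9, "risk_to_self": 0.6},
    "stand_back":  {"benefit_to_others": 0.1, "risk_to_self": 0.0},
}

def utility(features, state):
    """Emotion-weighted utility: empathy rewards helping, fear penalises risk."""
    return (state["empathy"] * features["benefit_to_others"]
            - state["fear"] * features["risk_to_self"])

choice = max(actions, key=lambda a: utility(actions[a], emotional_state))
print(choice)   # 'help_person' while empathy dominates; shifts if fear rises
```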

For a selection of BBC programmes on emotions see:
http://www.bbc.co.uk/programmes/topics/Emotions?page=1


Imagination: While it doesn’t make much sense to talk about ‘error’ when it comes to imagination, we might easily make value-judgments about what types of imagination should be encouraged and what should be discouraged. Leaving aside debates about how, say, excessive exposure to violent video games might affect imagination in people, we can at least speculate about what might, or should, go on in the imagination of a robot as it searches through or creates new models to help predict the impacts of its own and others’ behaviours.

A big issue has arisen over how an artificial intelligence can explain its decision-making to people. While an AI based on symbolic reasoning can potentially offer a trace describing the steps it took to arrive at a conclusion, an AI based on machine learning can say little more than ‘I recognized the pattern as corresponding to so and so’, which, to a person, is not very explanatory. It turns out that even human experts are often unable to provide coherent accounts of their decision-making, even when their decisions are accurate.

Having an AI or robot account for its decision-making in a way that is understandable to people is a problem that I will address in a later analysis of the human operating system, where I hope to provide a mechanism that bridges between machine learning and more symbolic approaches.
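
The contrast can be made concrete with a minimal sketch: a symbolic, rule-based decision procedure that records a human-readable trace of the steps it took, something a pure pattern-matcher cannot offer. The rules and inputs are invented for illustration.

```python
# A minimal sketch of the contrast: a symbolic, rule-based decision can emit a
# step-by-step trace, while a pattern-matcher can only report the label it
# matched. The rules and inputs are invented for illustration.

rules = [
    ("obstacle_close and moving_fast", "brake",
     lambda f: f["obstacle_close"] and f["moving_fast"]),
    ("obstacle_close", "slow_down",
     lambda f: f["obstacle_close"]),
    ("otherwise", "continue",
     lambda f: True),
]

def decide_with_trace(facts):
    trace = []
    for condition_text, action, test in rules:
        trace.append(f"tested rule '{condition_text}'")
        if test(facts):
            trace.append(f"rule fired -> {action}")
            return action, trace

facts = {"obstacle_close": True, "moving_fast": False}
action, trace = decide_with_trace(facts)
print(action)            # 'slow_down'
print("\n".join(trace))  # a human-readable account of how the decision was reached

# A learned pattern-matcher, by contrast, could typically only report:
# "input matched pattern 'slow_down' with probability 0.93".
```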


Faith: It is often said that discussing faith and religion is one of the easiest ways to lose friends. Any belief based on faith is regarded as true by definition, and any attempt to bring evidence against it stands a good chance of being regarded as an insult. Yet people hold different faith-based beliefs, and they cannot all be right. This not only creates a problem for people, who will fight wars over it, but is also a significant problem for the design of AIs and robots. Do we plug in the Muslim or the Christian ethics module, or leave it out altogether? How do we build values and ethical principles into robots anyway, or will they be an emergent property of their deep learning algorithms? Whatever the answer, it is apparent that quite a lot can go badly wrong if we do not understand how to endow computational devices with this ‘way of knowing’.


Language: As observed above, this is a catch-all for all indirect ‘ways of knowing’ communicated to people through media, teaching, books or any other form of communication. We only have to consider world wars and other genocides to appreciate that not everything communicated by other people is believable or ethical. People (and organizations) communicate erroneous information and can deliberately lie, mislead and deceive.

We strongly tend to believe information that comes from the people around us, our friends and associates, those people that form part of our sub-culture or in-group. We trust these sources for no other reason than we are familiar with them. These social systems often form a mutually supporting belief system, whether or not it is grounded in any direct evidence.

Youtube Video, The Psychology of Facts: How Do Humans (mis)Trust Information?, YaleCampus, January 2017

Taking on trust the beliefs of others that form part of our mutually supporting social bubble is a ‘way of knowing’ that is highly error-prone. This is especially the case when it is combined with other ‘ways of knowing’, such as faith, that by their nature cannot be validated. Will robot communities, able to talk to each other instantaneously and ‘telepathically’ over wireless connections, also be prone to the biases of groupthink?


The validation of beliefs

So, there are multiple ways in which we come to know or believe things. As Descartes argued, no knowledge is certain (see ‘It’s Like This’). There are only beliefs, albeit that we can be more sure of some than of others, normally by virtue of their consistency with other beliefs. We also note that our beliefs are highly vulnerable to error. Any robot operating system that mimics humans will also need to draw on the many different ‘ways of knowing’, including a basic set of assumptions that it takes to be true without necessarily having any supporting evidence (its ‘faith’, if you like). There will also need to be many precautions against AIs and robots developing erroneous or otherwise unacceptable beliefs and basing their behaviours on these.

There is a mechanism by which we try to reconcile differences between knowledge coming from different sources, or contradictory knowledge coming from the same source. Most people seem to be able to tolerate a fair degree of contradiction or ambiguity about all sorts of things, including the fundamental questions of life.

Youtube Video, Defining Ambiguity, Corey Anton, October 2009, 9:52 minutes

We can hold and work with knowledge that is inconsistent for long periods of time, but nevertheless there is a drive to seek consistency.

In this description of the human operating system, it would seem that there are many ways in which we establish what we believe and which beliefs we recruit to solve any particular problem. The many sources of knowledge may also be inconsistent or contradictory. When we see inconsistencies in others, we take this as evidence that we should doubt them and trust them less.

Youtube Video, Why Everyone (Else) is a Hypocrite, The RSA, April 2011, 17:13 minutes

However, there is, at least, a strong tendency in most people to establish consistency between beliefs (or between beliefs and behaviours), and to account for inconsistencies. The problem is that we are often prone to achieve consistency by abandoning sound, evidence-based beliefs rather than strongly held beliefs that rest on faith or on the need to protect our sense of self-worth.

Youtube Video, Cognitive dissonance (Dissonant & Justified), Brad Wray, April 2011. 4:31 minutes
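
A minimal sketch of that asymmetry: when two beliefs conflict, the less ‘entrenched’ one is discarded, and if faith- or identity-based beliefs are given high entrenchment it is the evidence that loses. The beliefs, scores and conflict relation are all invented for illustration.

```python
# A minimal sketch of dissonance resolution by 'entrenchment': in each
# conflicting pair of beliefs, the less entrenched belief is dropped.
# Beliefs, scores and the conflict relation are invented for illustration.

beliefs = {
    "my_group_is_always_fair": {"entrenchment": 0.9, "source": "faith/identity"},
    "observed_unfair_decision": {"entrenchment": 0.4, "source": "direct evidence"},
}

conflicts = [("my_group_is_always_fair", "observed_unfair_decision")]

def resolve(beliefs, conflicts):
    """Drop the less entrenched belief in each conflicting pair."""
    for a, b in conflicts:
        if a in beliefs and b in beliefs:
            weaker = a if beliefs[a]["entrenchment"] < beliefs[b]["entrenchment"] else b
            del beliefs[weaker]
    return beliefs

print(resolve(beliefs, conflicts).keys())
# dict_keys(['my_group_is_always_fair']) -- the evidence was the casualty
```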

From this analysis we can see that building AIs and robots is fraught with problems. The human operating system has evolved to survive, not to be rational or hold high ethical values. If we just blunder into building AIs and robots based on the human operating system we can potentially make all sorts of mistakes and give artificial agents power and autonomy without understanding how their beliefs will develop and the consequences that might have for people.

Fortunately there are some precautions we can take. There are ways of thinking that have been developed to counter the many biases that people have by default. Science is one method that aims to establish the best explanations based on current knowledge and the principle of simplicity. Also, critical thinking has been taught since Aristotle and fortunately many courses have been developed to spread knowledge about how to assess claims and their supporting arguments.

Youtube Video, Critical Thinking: Issues, Claims, Arguments, fayettevillestatenc, January 2011

Implications

To summarise:

Sensory perception – The robot’s ‘umwelt’ (what it can sense) may well differ from that of people, even to the extent that the robot has super-human senses such as infra-red or x-ray vision, super-sensitive hearing, smell and so on. We may not even know what its perceptual world is like. It may perceive things we cannot, and miss things we find obvious.

Memory – human memory is remarkably fallible. It is not so much a recording as a reconstruction from clues, influenced by previously encountered patterns and current intentions. Given sufficient storage capacity, robots may be able to maintain memories as accurate recordings of the states of their sensory inputs. However, they may be subject to similar constraints and biases as people in the way that memories are retrieved and used to drive decision-making and behaviour.

Intuition – if the robot’s pattern-matching capabilities are based on machine learning from historical training sets, then bias will be built into its basic processes. Alternatively, if the robot is left to develop from its own experience then, as with people, great care has to be taken to ensure its early experience does not lead to maladaptive behaviours (i.e. behaviours not acceptable to the people around it).

Reason – through the use of mathematical and logical proofs, robots may well have the capacity to reason with far greater ability than people. They can potentially spot (and resolve) inconsistencies arising out of different ‘ways of knowing’ with far greater adeptness than people. This may create a quite different balance between how robots make decisions and how people do, using emotion and reason in tandem.

Emotion – human emotions are general states that arise in response to both internal and external events and provide both the motivation and the criteria on which decisions are made. In a robot, emerging global states could also potentially act to control decision-making. Both people and, potentially, robots can develop the capacity to explicitly recognize and control these global states (e.g. when suppressing anger). This ability to reflect, and to cause changes in perspective and behaviour, is a kind of feedback loop that is inherently unpredictable. Not having sufficient understanding to predict how either people or robots will react under particular circumstances creates significant uncertainty.

Imagination – much the same argument about predictability can be made about imagination. Who knows where either a person’s or a robot’s imagination may take them? Chess computers out-performed human players because of their capacity to reason in depth about the outcomes of every move, not because they used pattern-matching based on machine learning (although it seems likely that this approach will have been tried and succeeded by now). A combination of brute-force computing and heuristics to guide search may give a robot an ability to model the world and predict future outcomes that far exceeds that of people.
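
A minimal sketch of that brute-force-plus-heuristics idea: a depth-limited lookahead that scores future states with a simple heuristic. The toy state space, moves and heuristic are invented for illustration.

```python
# A minimal sketch of search guided by a heuristic: depth-limited lookahead
# over a toy state space. The states, moves and heuristic are invented.

def moves(state):
    """Toy successor function: from integer state n, move to n+1 or n*2."""
    return [state + 1, state * 2]

def heuristic(state, goal=21):
    """Closer to the goal is better (smaller distance scores higher)."""
    return -abs(goal - state)

def lookahead(state, depth):
    """Return the best achievable heuristic value within 'depth' moves."""
    if depth == 0:
        return heuristic(state)
    return max(lookahead(next_state, depth - 1) for next_state in moves(state))

def best_move(state, depth=4):
    return max(moves(state), key=lambda s: lookahead(s, depth - 1))

print(best_move(5))   # explores 2**4 futures and picks the move leading nearest 21
```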

Faith – faith is axiomatic for people and might also be for robots. People can change their faith (especially in a religious, political or ethical sense), but more often, when confronted with contradictory evidence or sufficient need (e.g. to align with a partner’s faith), people will either ignore the evidence or find reasons to discount it. This can lead to multiple interpretations of the same basic axioms, in the same way as there are many religious denominations and many interpretations of key texts within them. In robots, Asimov’s three laws of robotics would equate to their faith. However, if robots used similar mechanisms to people (e.g. cognitive dissonance) to resolve conflicting beliefs, then, in the same way as God’s will can be used to justify any behaviour, a robot may be able to construct a rationale for any behaviour whatever its axioms. There would be no guarantee that a robot would obey its own axiomatic laws.

Communication – the term ‘language’ is better labelled ‘communication’ in order to make it more apparent that it extends to all the methods by which we ‘come to know’ from sources outside ourselves. Since knowledge communicated by others is not direct experience, it is effectively taken on trust; in one sense it is a matter of faith. However, the degree of consistency across external sources (i.e. that a teacher or the TV will reinforce what a parent has said), and between what is communicated and what is directly observed (for example, that a person does what they say they will do), will reveal some sources as more believable than others. We also appeal to motive as a method of assessing degree of trust. People are notoriously influenced by the norms, opinions and behaviours of their own reference groups. Robots, with their potential for high-bandwidth communication, could in principle behave with the same psychology of the crowd as humans, only much more rapidly and ‘single-mindedly’. It is not difficult to see how the Star Trek image of the Borg, acting as one consciousness, could come about.
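
One crude way a robot might grade its sources is to trust them in proportion to how often their claims agree with its own direct observations. A minimal sketch, with invented sources, claims and observations:

```python
# A minimal sketch of a trust heuristic: a source is trusted in proportion to
# how often its checkable claims match direct observation. Sources, claims and
# observations are invented for illustration.

observations = {"door_open": True, "corridor_clear": False, "light_on": True}

reports = {
    "sensor_log":  {"door_open": True,  "corridor_clear": False, "light_on": True},
    "other_robot": {"door_open": True,  "corridor_clear": True,  "light_on": True},
    "rumour_feed": {"door_open": False, "corridor_clear": True,  "light_on": False},
}

def trust(report, observations):
    """Fraction of a source's checkable claims that match direct observation."""
    checkable = [k for k in report if k in observations]
    agree = sum(report[k] == observations[k] for k in checkable)
    return agree / len(checkable)

for source, report in reports.items():
    print(source, trust(report, observations))
# sensor_log 1.0, other_robot ~0.67, rumour_feed 0.0
```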

Other Ways of Knowing

It is worth considering just a few of the many other ‘ways of knowing’ not considered above, partly because some of them might help mitigate some of the risks of human ‘ways of knowing’.

Science – Science has evolved methods that are deliberately designed to create impartial, robust and consistent models and explanations of the world. If we want robots to create accurate models, then an appeal to scientific method is one approach. In science, patterns are observed, hypotheses are formulated to account for these patterns, and the hypotheses are then tested as impartially as possible. Science also seeks consistency by reconciling disparate findings into coherent overall theories. While we may want robots to use scientific methods in their reasoning, we may want to ensure that robots do not perform experiments in the real world simply for the sake of making their own discoveries. An image of concentration camp scientists comes to mind. Nevertheless, in many small ways robots will need to be empirical rather than theoretical in order to operate at all.
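
A minimal sketch of that observe-hypothesise-test loop, with a crude simplicity preference thrown in. The observations and candidate hypotheses are invented for illustration.

```python
# A minimal sketch of observe / hypothesise / test with a simplicity
# preference: hypotheses are scored on fit to the observations, and ties go
# to the simpler one. Observations and hypotheses are invented.

observations = [(1, 2), (2, 4), (3, 6), (4, 8)]   # (input, measured output)

# Candidate hypotheses with a crude 'complexity' (number of free parameters).
hypotheses = {
    "y = 2x":      {"predict": lambda x: 2 * x,     "complexity": 1},
    "y = x + 1":   {"predict": lambda x: x + 1,     "complexity": 2},
    "y = x*x - x": {"predict": lambda x: x * x - x, "complexity": 3},
}

def error(hypothesis):
    """Sum of squared differences between prediction and observation."""
    return sum((hypothesis["predict"](x) - y) ** 2 for x, y in observations)

best = min(hypotheses,
           key=lambda name: (error(hypotheses[name]), hypotheses[name]["complexity"]))
print(best)   # 'y = 2x' -- lowest error, and simpler than equally good rivals
```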

Argument – Just like people, robots of any complexity will encounter ambiguity and inconsistencies. These will be inconsistencies between expectation and actuality, between data from one way of knowing and another (e.g. between reason and faith, or between perception and imagination etc.), or between a current state and a goal state. The mechanisms by which these inconsistencies are resolved will be crucial. The formulation of claims; the identification, gathering and marshalling of evidence; the assessment of the relevance of evidence; and the weighing of the evidence, are all processes akin to science but can cut across many ‘ways of knowing’ as an aid to decision making. Also, this approach may help provide explanations of a robot’s behaviour that would be understandable to people and thereby help bridge the gap between opaque mechanisms, such as pattern matching, and what people will accept as valid explanations.

Meditation – Meditation is a place-holder for the many ways in which altered states of consciousness can lead to new knowledge. Dreaming, for example, is another altered state that may lead to new hypotheses and models based on novel combinations of elements that would not otherwise have been brought together. People certainly have these altered states of consciousness. Could there be an equivalent in the robot, and would we want robots to indulge in such extreme imaginative states when we would have no idea what they might consist of? This is not necessarily to attribute consciousness to robots, which is a separate, and probably metaphysical, question.

Theory of mind – For any autonomous agent with its own beliefs and intentions, including a robot, it is crucial to survival to have some notion of the intentions of other autonomous agents, especially when they might be a direct threat. People have sophisticated but highly biased and error-prone mechanisms for modelling the intentions of others. These mechanisms are particularly alert for any sign of threat and, as a proven survival mechanism, tend to assume threat even when none is present; the people that did not do this died out. Work in robotics already recognizes that, to be useful, robots have to cooperate with people, and that this requires some modelling of their intentions. As the video below illustrates, the modelling of others’ intentions is inherently complex because it is recursive.

YouTube Video, Comprehending Orders of Intentionality (for R. D. Laing), Corey Anton, September 2014, 31:31 minutes
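
A minimal sketch of why the recursion gets awkward so quickly: each additional order of intentionality is a model of another agent’s model, so the structure nests. The agents, the proposition and the depth limit are invented for illustration.

```python
# A minimal sketch of nested 'orders of intentionality': each level is a model
# of another agent's model. Agents, proposition and depth are invented.

def model_of(agent, target, proposition, depth):
    """Build a nested description: agent believes target believes ... proposition."""
    if depth == 0:
        return proposition
    return f"{agent} believes that " + model_of(target, agent, proposition, depth - 1)

# Second order: the robot models what the person believes.
print(model_of("robot", "person", "'the corridor is blocked'", 2))
# robot believes that person believes that 'the corridor is blocked'

# Fourth order: the robot models what the person thinks the robot thinks
# the person believes -- already hard for people to keep straight.
print(model_of("robot", "person", "'the corridor is blocked'", 4))
```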

If there is a conclusion to this analysis of ‘ways of knowing’ it is that creating intelligent, autonomous mechanisms, such as robots and AIs, will have inherently unpredictable consequences, and that, because the human operating system is so highly error-prone and subject to bias, we should not necessarily build them in our own image.