Cambridge (UK) is awash with talks at the moment, and many of these are about artificial intelligence. On Tuesday (12th March 2019) I went to a talk, as part of Cambridge University’s Science Festival, by José Hernández-Orallo (Universitat Politècnica de València), titled ‘Natural or Artificial Intelligence? Measures, Maps and Taxonomies’.
José opened by pointing out that artificial intelligence was not a subset of human intelligence. Rather, it overlaps with it. After all, some artificial intelligence already far exceeds human intelligence in narrow domains such as playing games (Go, Chess etc.) and some identification tasks (e.g. face recognition). But, of course, human intelligence far outstrips artificial intelligence in its breadth and the amount of training needed to learn concepts.
José’s main message was that, when it comes to understanding artificial intelligence, we (like the political scene in Britain at the moment) are in uncharted territory. We have no measures by which to compare artificial and human intelligence, or to gauge the pace of progress in artificial intelligence. We have no maps that enable us to navigate the space of artificial intelligence offerings (for example, which offerings might be ethical and which might be potentially harmful). And lastly, we have no taxonomies to classify approaches to, or examples of, artificial intelligence.
Whilst there are many competitions and benchmarks for particular artificial intelligence tasks (such as answering quiz questions or more generally reinforcement learning), there is no overall, widely used classification scheme.
My own take on this is to suggest a number of approaches that might be considered. Coming from a psychology and psychometric testing background, I am aware of the huge number of psychological testing instruments for both intelligence and many other psychological traits. See for example, Wikipedia or the British Psychological Society list of test publishers. What is interesting is that, I would guess, most software applications that claim to use artificial intelligence would fail miserably on human intelligence tests, especially tests of emotional and social intelligence. At the same time they might score at superhuman levels with respect to some very narrow capabilities. This illustrates just how far away we are from the idea of the singularity - the point at which artificial intelligence might overtake human intelligence.
Another take on this would be to look at skills. Interestingly, systems like Amazon’s Alexa describe the applications or modules that developers offer as ‘skills’. So, for example, a skill might be to book a hotel or to select a particular genre of music. This approach defines intelligence as the ability to perform some task effectively. However, by any standard, the skill offered by a typical Alexa ‘skill’, Google Home or Siri interaction is laughably unintelligent. The artificial intelligence is all in the speech recognition, and to some extent the speech production, side. Very little of it is concerned with the domain knowledge. Even so, a skills-based approach to measurement, mapping and taxonomy might be a useful way forward.
When it comes to ethics, there are also some pointers to useful measures, maps and taxonomies. For example, the blog post describing Josephine Young’s work identifies a number of themes in AI and data ethics. Also, the video featuring Dr Michael Wilby on the http://www.robotethics.co.uk/robot-ethics-video-links/ page starts with a taxonomy of ethics and then maps artificial intelligence into this framework.
But, overall, I would agree with José that there is not a great deal of work in this important area and that it is ripe for further research. If you are aware of any relevant research then please get in touch.
John Wyatt is a doctor, author and research scientist. His concern is the ethical challenges that arise with technologies like artificial intelligence and robotics. On Tuesday this week (11th March 2019) he gave a talk called ‘What does it mean to be human?’ at the Wesley Methodist Church in Cambridge.
To a packed audience, he pointed out how interactions with artificial intelligence and robots will never be the same as the type of ‘I – you’ relationships that occur between people. He emphasised the important distinction between ‘beings that are born’ and ‘beings that are made’ and how this distinction will become increasingly blurred as our interactions with artificial intelligence become commonplace. We must be ever vigilant against the use of technology to dehumanise and manipulate.
I can see where this is going. The tendency for people to anthropomorphise is remarkably strong - ‘the computer won’t let me do that’, ‘the car has decided not to start this morning’. Research shows that we can even attribute intentions to animated geometrical shapes ‘chasing’ each other around a computer screen, let alone cartoons. Just how difficult is it going to be not to attribute the ‘human condition’ to a chatbot with an indistinguishably human voice or a realistically human robot? Children are already being taught to say ‘please’ and ‘thank you’ to devices like Alexa, Siri and Google Home – maybe a good thing in some ways, but …
One message I took away from this talk was a suggestion for a number of new human rights in this technological age. These are: (1) The right to cognitive liberty (to think whatever you want), (2) The right to mental privacy (without others knowing), (3) The right to mental integrity, and (4) The right to psychological continuity - the last two concerning the preservation of ‘self’ and ‘identity’.
A second message was to consider which country was most likely to make advances in the ethics of artificial intelligence and robotics. His conclusion – the UK. That reassures me that I’m in the right place.
See more of John’s work, such as his essay ‘God, neuroscience and human identity’ at his website johnwyatt.com
Writing about ethics in artificial intelligence and robotics can sometimes seem like it’s all doom and gloom. My last post for example covered two talks in Cambridge – one mentioning satellite monitoring and swarms of drones and the other going more deeply into surveillance capitalism where big companies (you know who) collect data about you and sell it on the behavioural futures market.
So it was really refreshing to go to a talk by Dr Danielle Belgrave at Microsoft Research in Cambridge last week that reflected a much more positive side to artificial intelligence ethics. Danielle has spent the last 11 years researching the application of probabilistic modelling to the medical condition of asthma. Using statistical techniques and machine learning approaches she has been able to differentiate between five more or less distinct conditions that are all labelled asthma. Just as with cancer there may be a whole host of underlying conditions that are all given the same name but may in fact have different underlying causes and environmental triggers.
This is important because treating a set of conditions that may share only a family resemblance (as Wittgenstein would have put it) with the same intervention(s) might work in some cases, not work in others, and actually do harm to some people. Where this is leading is towards personalised medicine, where each individual and their circumstances are treated as unique. This, in turn, potentially leads to the design of a uniquely configured set of interventions optimised for that individual.
The statistical techniques that Danielle uses attempt to identify the underlying endotypes (sub-types of a condition) from a set of phenotypes (the observable characteristics of an individual). Some conditions may manifest in very similar sets of symptoms while in fact they arise from quite different functional mechanisms.
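Danielle’s actual models are far more sophisticated, but the core idea of recovering latent subgroups from observable measurements can be sketched with a toy clustering example. Everything below - the biomarker, the numbers, the two ‘endotypes’ - is invented purely for illustration:

```python
import random

def kmeans_1d(values, k=2, iters=50, seed=0):
    """Minimal 1-D k-means: find k centroids that partition the observations."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    for _ in range(iters):
        # Assign each observation to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two hypothetical endotypes that both present as "asthma" but differ
# in one underlying biomarker measurement (values entirely made up):
random.seed(1)
endotype_a = [random.gauss(2.0, 0.3) for _ in range(50)]
endotype_b = [random.gauss(6.0, 0.3) for _ in range(50)]

# The clinician only sees the pooled, unlabelled population...
observed = endotype_a + endotype_b

# ...but clustering recovers two centres close to the true subgroup means.
centroids = kmeans_1d(observed)
```

With realistic data one would use a probabilistic mixture model - closer to what the talk described - so that each patient gets a probability of belonging to each endotype rather than a hard assignment.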
Appearances can be deceptive and while two things can easily look the same, underneath they may in fact be quite different beasts. Labelling the appearance rather than the underlying mechanism can be misleading because it inclines us to assume that beast 1 and beast 2 are related when, in fact the only thing they have in common is how they appear.
It seems likely that for many years we have been administering some drugs thinking we are treating beast 1 when in fact some patients have beast 2, and that sometimes this does more harm than good. This view is supported by the common practice, in asthma, cancer, mental illness and many other conditions, of getting the medication right by trying a few things until you find something that works.
But in the same way that, for example, it may be difficult to identify a person’s underlying intentions from the many things that they say (oops, perhaps I am deviating into politics here!), inferring underlying medical conditions from symptoms is not easy. In both cases you are trying to infer something that may be unobservable, complex and changing, from the things you can readily perceive.
We have come so far in just a few years. It was not long ago that some medical interventions were based on myth, guesswork and the unquestioned habits of deeply ingrained practices. We are currently in a time when, through the use of randomised controlled trials, interventions approved for use are at least effective ‘on average’, so to speak. That is, if you apply them to large populations there is significant net benefit, and any obvious harms are known about and mitigated by identifying them as side-effects. We are about to enter an era where it becomes commonplace to personalise medicine to targeted sub-groups and individuals.
It’s not yet routine and easy, but with dedication, skill and persistence together with advances in statistical techniques and machine learning, all this is becoming possible. We must thank people like Dr Danielle Belgrave who have devoted their careers to making this progress. I think most people would agree that teasing out the distinction between appearance and underlying mechanisms is both a generic and an uncontroversially ethical application of artificial intelligence.
A World that can change – before your eyes!
I’ve been to a couple of good talks in Cambridge (UK) this week. First, futurist Sophie Hackford (formerly of Singularity University and Wired magazine) gave a fast-paced talk about a wide range of technologies that are shaping the future. If you don’t know about swarms of drones, low-orbit satellite monitoring, neural implants, face recognition for payments, high-speed trains and rocket transportation then you need to, fast. I haven’t found a video of this very recent talk yet, but the one below from a year ago gives a pretty good indication of why we need to think through the ethical issues.
YouTube Video, Tech Round-up of 2017 | Sophie Hackford | CTW 2017, January 2018, 26:36 minutes
The Age of Surveillance Capitalism
The second talk was, in some ways, even scarier. We are already aware that the likes of Google, Facebook and Amazon are closely watching our every move (and hearing our every breath). And now almost every other company that is afraid of being left behind is doing the same thing. But what data are they collecting, and how are they using it? They use the data to predict our behaviour and sell it on the behavioural futures market. And they are not just tracking our computer behaviour, but also influencing us in the real world. For example, Pokémon Go was apparently an experiment originally dreamed up by Google to see if retailers would pay to host ‘monsters’ to increase footfall past their stores. The talk, by Shoshana Zuboff, was at the Cambridge University Law Faculty. Here is an interview she did on radio the same day.
BBC Radio 4, Start the Week, Who is Watching You?, Monday 4th February 2019, 42:00 minutes
Useful categorisation of ethical themes
I was at a seminar the other day where I was fortunate enough to encounter Josephine Young from www.methods.co.uk (who mainly do public sector work in the UK).
Josie recently carried out an analysis of the main themes relating to ethics and AI that she found in a variety of sources related to this topic. I have reported these themes below with a few comments.
Many thanks, Josie for this really useful and interesting work.
(Numbers in brackets reflect the number of times this issue was identified).
Data treatment, focus on bias identification (10)
Interrogate the data (9)
Data collection / Use of personal data
Keep data secure (3)
Personal privacy – access, manage and control of personal data (1, 5, 6)
Use data and tools which have the minimum intrusion necessary – privacy (3)
Transparency of data/meta data collection and usage (8)
Self-disclosure and changing the algorithm’s assumptions (10)
Awareness of bias in data and models (8)
Create robust data science models – quality, representation of demographics (3)
Practice understanding of accuracy – transparency (8)
robotethics.co.uk comment on data: Trying to structure this a little, the themes might be categorised into: data ownership and collection (who can collect what data, when and for what purpose); data storage and security (how the data is securely stored and controlled without loss or un-permitted access); data processing (what operations are permitted on the data, and what unbiased/reasonable inferences or models can be derived from it); and data usage (what applications and processes can use the data or any inferences made from it).
Safety – verifiable (1)
Anticipate the impacts that might arise – economic, social, environmental etc. (4)
Evaluate impact of algorithms in decision-making and publish the results (2)
Algorithms are rated on a risk scale based on impact on individual (2)
Act using these Responsible Innovation processes to influence the direction and trajectory of research (4)
robotethics.co.uk comment on impact: Impact is about assessing the positive and negative effects of AI in the future, whether that be in the short, medium or long term. There is also the question of who is impacted as it is quite possible that the impact of any particular AI product or service might impact one group of people positively and another negatively. Therefore a framework of effect x timescale x affected persons/group might make a start on providing some structure for assessing impact.
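A minimal sketch of what such an effect x timescale x affected-group framework might look like as a data structure (all field names, example entries and the crude aggregation are my own invention, not from any published framework):

```python
from dataclasses import dataclass

@dataclass
class ImpactEntry:
    """One cell of a hypothetical effect x timescale x group assessment."""
    group: str       # who is affected
    timescale: str   # "short" | "medium" | "long"
    effect: int      # signed magnitude: negative = harm, positive = benefit
    note: str = ""

def net_effect(entries, group=None):
    """Crude aggregate; a real assessment would weight and justify each entry."""
    return sum(e.effect for e in entries if group is None or e.group == group)

# Hypothetical assessment of a medical-AI deployment:
assessment = [
    ImpactEntry("patients",   "short",  +2, "faster triage"),
    ImpactEntry("clinicians", "medium", -1, "deskilling risk"),
    ImpactEntry("patients",   "long",   +3, "personalised treatment"),
]
```

The point of structuring it this way is that the same product can then be shown to benefit one group while harming another, rather than collapsing everything into a single score.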
Non-subversion – power conferred to AI should respect and improve social and civic processes (1)
Reflect on the purpose, motivations, implications and uncertainties this research may bring (4)
Ensure augmented – not just artificial – AI (8)
Purpose and ecology for the AI system (10)
Human control – choose how and whether to delegate decisions to AI (1)
Backwards compatibility and versioning (8)
robotethics.co.uk comment on purpose: Clearly the intent behind any AI development should be to confer a net benefit on the individual and/or the society generally. The intent should never be to cause harm – even drone warfare is, in principle, justified in terms of conferring a clear net benefit. But this again raises the question of net benefit to whom exactly, how large that benefit is when compared to any downside, and how certain it is that the benefit will materialise (without any unanticipated harmful consequences). It is a matter of how strong and certain the argument is for justifying the intent behind building or deploying a particular AI product or service.
Transparency for how AI systems make decisions (7)
Be as open and accountable as possible – provide explanations, recourse, accountability (3)
Failure transparency (1)
Responsibility and accountability for explaining how AI systems work (7)
Awareness and plan for audit trail (8)
Publish details describing the data used to train AI, with assumptions and risk assessment – including bias (2)
A list of inputs used by an algorithm to make a decision should be published (2)
Every algorithm should be accompanied with a description of function, objective and intended impact (2)
Every algorithm should have an identical sandbox version for auditing (2)
robotethics.co.uk comment on transparency: Transparency and accountability are closely related but can be separated out. Transparency is about determining how or why (e.g. how or why an AI made a certain decision) whereas accountability is about determining who is responsible. Having transparency may well help in establishing accountability but they are different. The problem for AI is that, by normal human standards, responsibility resides with the autonomous decision-making agent so long as they are regarded as having ‘capacity’ (e.g. they are not a child or insane) and even then, there can be mitigating circumstances (provocation, self-defence etc.). We are a long way from regarding AIs as having ‘capacity’ in the sense of being able to make their own ethical judgements, so in the short to medium term, the accountability must be traceable to a human, or other corporate, agent. The issue of accountability is further complicated in cases where people and AIs are cooperatively engaged in the same task, since there is human involvement in both the design of the AI and its operational use.
A named member of staff is formally responsible for the algorithm’s actions and decisions (2)
Judicial transparency – auditable by humans (1)
3rd parties that run algorithms on behalf of public sector should be subject to same principles as government algorithms (2)
Intelligibility and fairness (6)
Dedicated insurance scheme, to provide compensation if negative impact (2)
Citizens must be informed when their treatment has been decided/informed by an algorithm (2)
Liberty and privacy – use of personal data should not, and should not be perceived to, curtail personal liberties (1)
Mitigate risks and negative impacts as AI/AS evolve as socio-technical systems (7)
robotethics.co.uk comment on civic rights: It seems clear that an AI should have no more license to contravene a person’s civil liberties or human rights than another person or corporate entity would. Definitions of human rights are not always clear-cut and differ from place to place. In human society this is dealt with by defaulting to local laws and cultural norms. It seems likely that a care robot made in Japan but operating in, say, the UK would have to operate according to the local laws, as would apply to any other person, product or service.
Highest purpose of AI
Shared prosperity – economic prosperity shared broadly to benefit all of humanity (1)
Flourishing alongside AI (6)
Prioritise the maximum benefit to humanity and the natural environment (7)
Shared benefit – technology should benefit and empower as many people as possible (1)
Purpose of AI should be human flourishing (1)
AI should be developed for the common good (6)
Beneficial intelligence (1)
Compatible with human dignity, rights, freedoms and cultural diversity (1, 5)
Align values and goals with human values (1)
AI will prevent harm (5)
Start with clear user need and public benefit (3)
Embody highest ideals of human rights (7)
robotethics.co.uk comment on the higher purpose of AI: This seems to address themes of human flourishing, equality, values and again touches on rights. It focuses mainly on, and details, the potential benefits and how these are distributed. These can be slotted into the frameworks already set out above.
Negative consequences / Crossing the ‘line’
An AI arms race should be avoided (1)
Identify and address cybersecurity risks (8)
Confronting the power to destroy (6)
robotethics.co.uk comment on the negative consequences of AI: The main threats are set out to be in relation to weapons, cyber-security and the existential risks posed by AIs that cease to be controlled by human agency. There are also many more subtle and shorter term risks such as bias in models and decision making addressed elsewhere. As with benefits, these can be slotted into the frameworks already set out above.
Consider the marginal user (9)
Collaborate with humans – rather than displace them (5)
Marginal user and participation (10)
Address job displacement implications (8)
robotethics.co.uk comment on user: This is mainly about the social implications of AI and the risks to individuals in relation to jobs and becoming marginalised. These implications seem likely to arise in the short to medium term and given their potential scale, there seems a comparative paucity of attention being paid to them by governments, especially in the UK where Brexit dominates the political agenda. Little attempt seems to be being made to consider the significance of AI in relation to the more habitual political concerns of migration and trade.
AI researchers <-> policymakers (1)
Establish industry partnerships (9)
Responsibility of designers and builders for moral implications (1, 5)
Culture of trust and transparency between researchers and developers (1)
Resist the ‘race’ – no more ‘move fast and break things’ mentality (1)
robotethics.co.uk comment on AI industry: The industry players that are building AI products and services have a pivotal role to play in their ethical development and deployment. In addition to design and manufacture, this affects education and training, regulation and monitoring of the development of AI systems, their marketing and constraints on their use. AI is likely to be used throughout the supply chain of other products and services, and AI components will become increasingly integrated with each other into more and more powerful systems. The need to create policy, regulate, certify, train and license the industry creating AI products and services needs to be addressed more urgently given the pace of technological development.
Engage – opening up such work to broader deliberation in an inclusive way (4)
Education and awareness of public (7)
Be alert to public perceptions (3)
robotethics.co.uk comment on public dialogue: At present, public debate on AI is often focussed on the activities of the big players and their high profile products such as Amazon Echo, Google Home, and Apple’s Siri. These give clues as to some of the ethical issues that require public attention, but there is a lot more AI development going on in the background. Given the potentially large and fast pace of societal impacts of AI, there needs to be greater public awareness and debate, not least so that society can be prepared and adjust other systems (such as taxation, benefits, universal income etc.) to absorb the impacts.
Representation of AI system, user interface design (10)
robotethics.co.uk comment on interface design: AIs capable of machine learning develop knowledge and skills in similar ways to people, and just like people, they often cannot explain how they do things or arrive at some judgement or decision. The ways in which people and AIs will interface and interact is as complex a topic as how people interact with each other. Can we ever know what another person is really thinking, or whether the image they present of themselves is accurate? If AIs become even half as complex as people – able to integrate knowledge and skills from many different sources, able to express (if not actually feel) emotions, able to reason with super-human logic, able to communicate instantaneously with other AIs – there is no knowing how people and AIs will ‘interface’. Just as with computers, which have become both tools for people to use and constraints on human activity (‘I’m sorry but the computer will not let me do that’), the relationships will be complex, especially as computer components become implanted in the human body and not just carried on the wrist. It seems more likely that the relationship will be cooperative rather than competitive, or one in which AIs come to dominate.
The original source material from Josie (who gave me permission to reference it) can be found at:
See other work by Josie Young: https://methods.co.uk/blog/different-ai-terms-actually-mean/
A Response Submitted for robotethics.co.uk
A summary of the IEEE document Ethically Aligned Design (Version 2) can be found below. Responses to this document were invited by 7th May 2018.
Response to Ethically Aligned Design Version 2 (EADv2)
Rod Rivers, Socio-Technical Systems, Cambridge, UK
March 2018 (firstname.lastname@example.org)
I take a perspective from philosophy, phenomenology and psychology and attempt to inject thoughts from these disciplines.
Social Sciences: EADv2 would benefit from more input from the social sciences. Many of the concepts discussed (e.g. norms, rights, obligations, wellbeing, values, affect, responsibility) have been extensively investigated and analysed within the social sciences (psychology, social psychology, sociology, anthropology, economics etc.). This knowledge could be more fully integrated into EAD. For example, the meaning of ‘development’ to refer to ‘child development’ or ‘moral development’ is not in the glossary.
Human Operating System: The first sentence in EADv2 establishes a perspective looking forward from the present, as use and impact of A/ISs ‘become pervasive’. An additional tack would be to look in more depth at human capability and human ethical self-regulation, and then ‘work backwards’ to fill the gap between current artificial A/IS capability and that of people. I refer to this as the ‘Human Operating System’ (HOS) approach, and suggest that EAD makes explicit, and endorses, exploration of the HOS approach to better appreciate the complexity (and deficiencies) of human cognitive, emotional, physiological and behavioural functions.
Phenomenology: A/ISs can be distinguished from other artefacts because they have the potential to reflect and reason, not just on their own computational processes, but also on the behaviours and cognitive processes of people. This is what psychologists refer to as ‘theory of mind’ – the capability to reason and speculate on the states of knowledge and intentions of others. Theory of mind can be addressed using a phenomenological approach that attempts to describe, understand and explain from the fully integrated subjective perspective of the agent. Traditional engineering and scientific approaches tend to objectify, separate out elements into component parts, and understand parts in isolation before addressing their integration. I suggest that EAD includes and endorses exploration of a phenomenological approach to complement the engineering approach.
Ontology, epistemology and belief: EADv2 includes the statement “We can assume that lying and deception will be prohibited actions in many contexts” (EADv2 p.45). This example may indicate the danger of slipping into an absolutist approach to the concept of ‘truth’. For example, it is easy to assume that there is only one truth and that the sensory representations, data and results of information processing by an A/IS necessarily constitute an objective ‘truth’. Post-modern constructivist thinking sees ‘truth’ as an attribute of the agent (albeit constrained by an objective reality) rather than as an attribute of states of the world. The validity of a proposition is often re-defined in real time as the intentions of agents change. It is important to establish some clarity over these types of epistemological issues, not least in the realm of ethical judgments. I suggest that EAD note and encourage greater consideration of these epistemological issues.
Embodiment, empathy and vulnerability: It has been argued that ethical judgements are rooted in physiological states (e.g. emotional reactions to events), empathy and the experience of vulnerability (i.e. exposure to pain and suffering). EADv2 does not currently explicitly set out how ethical judgements can be made by an A/IS in the absence of these human subjective states. Although EAD mentions emotions and affective computing (and an affective computing committee) this is almost always in relation to human emotions. The more philosophical question of judgement without physical embodiment, physiological states, emotions, and a subjective understanding of vulnerability is not addressed.
Terminology / Language / Glossary: In considering ethics we are moving from an amoral, mechanistic understanding of cause and effect to value-laden, intention-driven notions of causality. This requires inclusion of more mentalistic terminology. The glossary should reflect this and could form the basis of a language for the expression of ideas that transcend both artificial and human intelligent systems (i.e. that is substrate independent). In a fuller response, I discuss terms already used in EADv2 (e.g. autonomous, intelligent, system, ethics, intention formation, independent reasoning, learning, decision-making, principles, norms etc.), and terms that are either not used or might be elaborated (e.g. umwelt, ontology, epistemology, similarity, truth-value, belief, decision, intention, justification, mind, power, the will).
IEEE Ethically Aligned Design: A Vision For Prioritizing Wellbeing With Artificial Intelligence And Autonomous Systems
For Public Discussion – By 7th May 2018 (consultation now closed)
Version 2 of this report is available by registering at:
Public comment on version 1 of this document was invited by March 2017. The document encourages technologists to prioritize ethical considerations in the creation of autonomous and intelligent technologies. It was created by committees of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, comprising over one hundred global thought leaders and experts in artificial intelligence, ethics, and related issues.
Version 2 presents the following principles/recommendations:
Candidate Recommendation 1 – Human Rights
To best honor human rights, society must assure the safety and security of A/IS so that they are designed and operated in a way that benefits humans:
1. Governance frameworks, including standards and regulatory bodies, should be established to oversee processes assuring that the use of A/IS does not infringe upon human rights, freedoms, dignity, and privacy, and of traceability to contribute to the building of public trust in A/IS.
2. A way to translate existing and forthcoming legal obligations into informed policy and technical considerations is needed. Such a method should allow for differing cultural norms as well as legal and regulatory frameworks.
3. For the foreseeable future, A/IS should not be granted rights and privileges equal to human rights: A/IS should always be subordinate to human judgment and control.
Candidate Recommendation 2 – Prioritizing Wellbeing
A/IS should prioritize human well-being as an outcome in all system designs, using the best available, and widely accepted, well-being metrics as their reference point.
Candidate Recommendation 3 – Accountability
To best address issues of responsibility and accountability:
1. Legislatures/courts should clarify issues of responsibility, culpability, liability, and accountability for A/IS where possible during development and deployment (so that manufacturers and users understand their rights and obligations).
2. Designers and developers of A/IS should remain aware of, and take into account when relevant, the diversity of existing cultural norms among the groups of users of these A/IS.
3. Multi-stakeholder ecosystems should be developed to help create norms (which can mature to best practices and laws) where they do not exist because A/IS-oriented technology and their impacts are too new (including representatives of civil society, law enforcement, insurers, manufacturers, engineers, lawyers, etc.).
4. Systems for registration and record-keeping should be created so that it is always possible to find out who is legally responsible for a particular A/IS. Manufacturers/operators/owners of A/IS should register key, high-level parameters, including:
• Intended use
• Training data/training environment (if applicable)
• Sensors/real world data sources
• Process graphs
• Model features (at various levels)
• User interfaces
• Optimization goal/loss function/reward function
Standard Reference for Version 2
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2. IEEE, December 2017, 136 pages. http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html
Positioning theory illuminates our understanding of rights, duties, expectations and vulnerabilities. It addresses the dynamics of power and control and is a potent tool for understanding the self, the individual in the context of others, relationships, and social institutions. It even transcends the distinction between people and objects and has profound implications for the development of artificial intelligence (AI).
Positioning and technology
It is already becoming apparent that any computer algorithm (whether or not it is based on AI) is not neutral with respect to position. An algorithm that scores my creditworthiness, for example, can have a significant impact on my life even though it may be using only a small sample of indicators to make its judgment. These might include debts that I dispute, and might exclude a long-term history of credit and trustworthiness. The algorithm takes its position from a particular set of indicators that constitutes ‘its world’ of understanding. However, I might reasonably object that it has used a biased training set, that it is not looking at the right things, or that it is using that information in a misleading way. And, like any set of metrics, the indicators can be manipulated once you know the algorithm.
There are algorithms that are explicitly programmed into the software of various decision-making systems, but when it comes to more advanced technology based on machine learning, it is also already apparent that we are building into our artificially intelligent devices all kinds of default positions without even realizing it. So, if an AI programme selects staff for interview on the basis of data across which it has run its machine-learning algorithms, it will simply replicate biases that are deeply entrenched but that go unquestioned. For example, it might build in biases relating to gender, race or many other factors that we would call into question if they were explicit.
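The point about replicating entrenched bias can be made concrete with a small sketch. The data, group labels and selection rates below are entirely invented for illustration; the idea is simply that a naive ‘model’ which learns selection rates from biased historical records will reproduce the historical bias.

```python
# Illustrative sketch with invented, synthetic data: a naive 'model' that
# learns interview-selection rates from biased historical records will
# simply reproduce the historical bias.

# Hypothetical historical decisions: (group, selected_for_interview)
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 30 + [("B", False)] * 70)

def learned_selection_rate(records, group):
    """'Learn' a selection probability by counting past outcomes."""
    outcomes = [selected for g, selected in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = learned_selection_rate(history, "A")   # 0.8
rate_b = learned_selection_rate(history, "B")   # 0.3
# The 'model' now recommends group A candidates far more often --
# not because of merit, but because the historical data did.
```

Nothing in the code ‘decides’ to be biased; the default position is inherited silently from the data.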
Youtube Video, Cathy O’Neil | Weapons of Math Destruction, PdF, June 2015, 12:15 minutes
As we develop artificial intelligences in all sorts of situations and in many different manifestations, from credit-rating algorithms to robots, we can easily embed positions that cause harm. Sometimes this will be unwitting and sometimes deliberate.
Where do you stand?
Are you sitting down? Maybe you are in London, or Paris, or Malaga. And maybe it’s 4pm on Saturday 11th November 2017 where you are. So, that locates you (or rather me) in place and time. And in exactly the same way, you can also be ‘positioned’ with respect to your attitudes and opinions. Are you to ‘the right’ or to ‘the left’, for example?
Positioning theory can help you understand where you are, and it’s not just ‘left’ or ‘right’. Pretty well every word you say and every action you take creates a ‘position’. Read on to see how you cannot avoid taking positions, and how positions confer rights and responsibilities on you and others, reveal vulnerabilities and determine the power relationships between us. Even objects, both natural and those we create, have positions – both in the sense of where they are located and in the way they affect your actions. Re-thinking the world from the point of view of positioning theory can be a revelation.
Part of the appeal of positioning theory is that it is easy to understand, and it is easy to understand because it builds on a basic psychological process that we use all the time. This is the process of navigating around a space.
Youtube Video, Spatial Navigation – Neil Burgess, Serious Science, December 2016, 12:41 minutes
Positioning theory can be applied to all sorts of things. It can be used between individuals to help understand each other and resolve differences. It can be used in organisations to help effect organisational change. It can be used by therapists to help families understand and adjust the way they think about the main influences in their lives, and help alter their circumstances. It can be used in international relations to help nations and cultures understand each other and resolve their differences. It can also be used manipulatively to sell you things you didn’t want and to restrict your freedom, even without you being consciously aware of it.
In one sense, positioning theory is such a simple idea that it can seem obvious. It can be thought of as ‘the position you take on a particular issue’. For example, you may take the position on animal rights, that an animal has the same right to live as a person. But positions need not be so grand or political. You might take the position that two sugars are too many to have in tea or that it’s better not to walk on the cracks between stones on the pavement. Even ascribing attributes to people or objects is to take a position. So to say that somebody is ‘kind’ or ‘annoyed’ is to take a position about how to interpret their behaviour.
What is common to positions is that they derive from some kind of evaluation within a context of beliefs and they can influence action (or at least the propensity for action). Label someone as ‘violent’ or ‘stupid’, for example, and you may easily affect other people’s behaviours with respect to that person, as well as your own. And, of course, if that person is aware of the label, they may well live up to it.
Despite being a simple concept, positions are so pervasive and integrated into thought, language, dialogue, actions and everyday life that it is only relatively recently (i.e. in the post-modern era from about the mid 20th century onwards) that ‘positioning theory’ has emerged and ‘positions’ have become identified as having explicit ‘identities’ in their own right. However, this understanding of positioning has had impact all the way across the social sciences.
Youtube Video, Rom Harré Positioning Theory Symposium Bruges, July 2015, 1:06:27 hours
If I take the position that ‘people should not be allowed to carry guns’, I am placing myself at a particular location on a line or scale, the other end of which is that ‘people should be allowed to carry guns’. The extremes of this scale might be ‘people should never under any circumstances be allowed to carry guns’ and ‘people should always under all circumstances be allowed to carry guns’.
Should not be allowed – – – – – – – – ^ – – – – – – – – Should be allowed
(the ^ marks a possible position on the scale)
Once you start to assess particular circumstances, then you are taking intermediate points along the scale. Thinking of positions as lines, or scales, along which you can locate yourself and others, and potentially travel from place to place, helps everybody understand where they are and where others are coming from.
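The scale metaphor above can be sketched numerically. Treating a position as a point between 0 and 1 (the labels and numbers here are purely illustrative) makes it easy to see how far apart two parties stand and where a compromise might lie:

```python
# Minimal sketch: a 'position' as a point on a 0-to-1 scale, where
# 0 = "should never be allowed" and 1 = "should always be allowed".
# The names and numbers are purely illustrative.

def distance(p, q):
    """How far apart two positions are on the same scale."""
    return abs(p - q)

alice, bob = 0.25, 0.75        # two intermediate positions on the scale
gap = distance(alice, bob)     # how far apart the parties stand
midpoint = (alice + bob) / 2   # one possible compromise position
```

A real profile of attitudes would of course involve many such scales at once, but even one scale makes the ‘where are you coming from?’ question concrete.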
Positioning the self
It can be argued that the notion of ‘the self’ is no more than the set of positions you take up, and that these positions define your identity in a psychological sense. You can imagine a long list of attitude scales with your position marked on each, and that the overall profile (where you are located on each scale) defines your unique identity. However, it’s not so simple. You are more than your attitudes. Your physical characteristics, your genetics, your background, your memories and experiences, your skills etc., will all influence your attitudes, but are distinct from them. Also, your attitudes are not fixed. They change over time and they change in response to circumstances.
Roles, the self, and identity
There is another closely related sense of ‘position’ that goes beyond your own opinions on particular issues. This is how others (and you yourself) position you in society. This is looking at you from the outside in rather than the inside out.
A position is not just where you are located on some dimension; it also implies a set of rights, responsibilities and expectations. A child is in a different position to an adult. A doctor is in a different position to a teacher or a taxi driver. We have different expectations of them, accord them different privileges and hold them to account according to different criteria.
Work roles are often known as ‘positions’. You apply for a position. Roles, like that of teacher, policeman, nurse, supervisor, colleague, judge or dogsbody can be seen as sets of rights, duties and expectations that are externally validated – i.e. that are commonly agreed amongst people other than (and including) the occupier of the role. Roles like parent and friend also confer rights and responsibilities. When you position somebody as ‘a friend’ you confer on them the right to know more about you than other people do and the duty to act in your best interests.
BBC Radio 4, In Our Time: Friendship, March 2006, 41:42 minutes
Our relationships with different people position us in different ways. To one person we may be a pupil while to another a teacher, to one a benefactor while to another a dependent, to one a comedian while to another deadly serious. As we move in and out of different social situations these different facets to our identities come to the fore or recede. It is as if our bodies contain multiple versions of the self, each triggered by particular circumstances or situations.
When you position your own rights and duties you are helping to define your ‘self’. If you believe that it is your duty to help the homeless or volunteer for unpaid jobs in the community, you are defining yourself as a particular type of public-spirited person. If you believe that you have the ‘right’ to use fear and violence to control others and justify this in terms of your duty to your country then, like Hitler or Assad, you are again defining your ‘self’.
We can distinguish between the rights and duties that are self-defined and those that are defined by others. In childhood your role is largely defined by others – in the family, in school and so on. As you move into adulthood you increasingly define your own positions. In principle, you have control over how you define yourself, but in practice it is very hard to do this independently of the expectations of others and how they position you. Significant mismatches between your own definitions and those of others create tension, and even more significant stresses occur when there is a mismatch between your own perceptions of yourself and what you feel they ought to be.
One or many selves?
Our naïve assumption is that we have just one identity in the same way as we have only one body (and even that is constantly being renewed, such that every cell in our bodies may be different from what it was a few years before). In fact, it is difficult to identify what remains constant over a lifetime.
Youtube Video, Personal Identity: Crash Course Philosophy #19, CrashCourse, June 2016, 8:32 minutes
But just as we can have multiple roles, we simultaneously maintain multiple identities. You may, for example, find yourself carrying on some internal debate (an inner dialogue) when making a decision. Take a difficult decision, like buying a car, moving house or making a career move. It is as if each ‘self’ argues the case for one option. It adopts a particular position (or set of positions), then loosely keeps track of its prerequisites and implications. It can then engage in dialogue with, and be influenced by, other positions:
A: I want the red sports car
B: It’ll be expensive to run
A: But it would be worth it if …
C: What kind of fool are you wanting a sports car at your age
This suggests not only that you change in response to circumstances, constantly re-configuring your positions to adjust to various pressures and concerns, but also opens the possibility that you are made up of many different ‘selves’, all vying to have their voices heard.
Youtube Video, What Causes The Voice In Your Head?, Thoughty2, August 2015, 6:57 minutes
And even those selves are not simply stored on the shelf waiting to be activated, but are constructed on the fly to meet the needs of a constant stream of changing circumstances and discussion with the other ‘selves’.
BBC Radio 4, The Human Zoo: The Improvising Mind, June 2015, 27:37 minutes
These selves are normally related to each other. They may be constructed from the same ‘materials’ but they can be seen as distinct and to make up the ‘society of minds’ as discussed in the blog posting ‘Policy regulates behaviour’ in the section ‘who is shouting the loudest’.
Maintaining consistency of the self
Normally we seek to be consistent across our positions and this provides some stability. We strive to be internally consistent in our own view of the world and form small systems of belief that are mutually self-supporting.
Some systems of belief are easy to maintain because they are closely tied to our observations about reality. If I believe it’s night-time then I will expect it to be dark outside, the clock to read a certain time, certain people to be present or absent, particular programmes to be on TV or radio, and so on. This belief system is easy to maintain because we can expect all the observable evidence to point in much the same direction.
Beliefs about the self, by contrast, are rather more fragile, but nevertheless still require consistency.
Without consistency there is no stability and without stability there is unpredictability and chaos.
You need to know where you are – your position – in order to function effectively and achieve your intentions (see ‘Knowledge is Power to Control’). Maintaining a consistent model of yourself is, therefore, something of a priority. This is why we spend a good deal of our mental energy spotting inconsistencies and anomalies in the positions we take and finding ways of correcting them.
Much of what drives us as individuals is the mismatch between how we position ourselves and how we believe others, and ourselves, position us in terms of our duties, rights and expectations. We are constantly monitoring and evaluating how our own feelings, thoughts and behaviours align, and how these align with what we believe other people feel, think and behave in relation to us. If somebody unexpectedly slights us (or gives us an unexpected gift) we cannot help looking for an explanation – i.e. aligning our own belief system with what we believe others are doing and thinking. This is not necessarily to say that we are very good at getting it right and there are a whole host of ways in which we achieve alignment on spurious grounds. These are cognitive biases and their discussion underpins much of what is written in these blog postings.
Youtube Video, Identity and Positioning Theory, rx scabin, January 2013, 7:56 minutes
Re-writing the self
The world is not a totally predictable place, and the people we encounter in our lives are even less so. As a consequence, we are constantly creating and re-writing the story-line of our own lives in the light of changes in how we position ourselves with respect to others, and how we want or feel we ought to be positioned (see ‘The Story of Your Life’).
Although inconsistency tends to create tension and a drive to minimise it, this is something of a thankless and never-ending task. However much we work at it, there is always a new interpretation that seems to be more satisfactory if we care to look for it. Either new ‘evidence’ appears that we need to account for or we may see a new way of looking at things that makes more sense than a previous interpretation.
In a classic experiment in psychology (Festinger and Carlsmith, 1959) students were given either $1 or $20 to lie to other students about how interesting a task was. They were then asked about their own attitude to the task. Contrary to what you might expect, the students who were paid only $1 to lie reported a more positive attitude to the task. Festinger explained this in terms of maintaining consistency: without sufficient payment to justify the lie, the students shifted their own attitudes to match what they had said.
Youtube Video, Testing cognitive consistency, PSYC1030x Series – Introduction to Developmental, Social & Clinical Psychology, April 2017, 3:28 minutes
The blog post called ‘It’s like this’ introduced the work of the psychologist George Kelly, who set out the theory of personal constructs. Kelly uses the notion of constructs to explain how people develop the way in which they ‘see’ the world. His personal construct theory provides a more common-sense and accessible way of understanding some of the ideas of ‘constructivism’ than some of the later, more obscure post-modernist accounts based in linguistics. Kelly developed the theory back in 1955, exploring a view of people as ‘personal scientists’, constantly trying to make sense of the world through observation, theory and experiment.
Youtube Video, PCP film 1 Personal Construct Psychology and Qualitative Grids, Harry Procter, September 2014, 28:53 minutes
In a volatile, uncertain, complex and ambiguous (VUCA) world (see: https://www.tothepointatwork.com/article/vuca-world/ ) it is impossible to keep up with aligning one’s own positions with what we experience. This is true on the macro scale (the world at large) and in respect of every small detail (i.e. ‘positioning’ what even one other person thinks of you at any given moment).
Thankfully, to some extent, there is stability. Much of the world stays much the same from moment to moment and even from year to year. Stability means predictability, and where there is predictability we formulate routines and habits of thinking (e.g. Kahneman’s ‘System 1’ thinking). Routines of thought and behaviour require little mental effort to maintain. We are often reluctant to move away from established habits because, apart from providing some degree of security and predictability, it takes effort to change. In the same way, there is inertia to changing one’s position on some issue, and we will tend to defend it even in the face of strong evidence to the contrary.
This is particularly true in relation to the ‘self’. We don’t like to admit we are wrong to others or to ourselves. We don’t like to be accused of being inconsistent. The confirmation bias is particularly strong leading us to seek evidence in support of our view and ignore evidence that does not fit. If something happens that forces us to change our world view – we lose our job, a relationship ends, or we lose a court case – then the consequences can cause a great deal of psychological pain as we are forced to change positions on a wide range of issues, particularly those related to our own self-perception and evaluation of our self-worth.
Youtube Video, Cognitive dissonance (Dissonant & Justified), Brad Wray, April 2011, 4:31 minutes
Where there isn’t stability, we can fortunately live with a great deal of ambiguity and uncertainty. Our tolerance of ambiguity and uncertainty enables us to hold some quite inconsistent, anomalous or hypocritical positions without being unduly concerned, and indeed often without even being particularly aware of it. This is partly because our positions are not necessarily fixed or even known.
We can easily construct new positions, especially when we are trying to justify an emotional reaction to something. This is another example of trying to minimise dissonance and inconsistency but this time the difference we seek to minimise is between our emotional reaction and our reasoning mind. We have similar problems keeping our behaviours consistent with our emotions and thoughts.
We can construct positions ‘on the fly’ to suit circumstances and support the courses of action we want to take. We can post-rationalise to support courses of action we have taken in the past. We can toy with positions and say ‘what would it be like if’ I took such and such a position, just to see how we feel about it. In fact, each one of our multiple selves, so to speak, can construct quite elaborate scenarios in our ‘minds eye(s)’, together with related thoughts, plans and even feelings.
It is in the nature of the human condition that we ‘duck and dive’ and those that are best able to duck and dive tend to be those that can achieve their goals most successfully. Tolerance of ambiguity, and the ability to quickly evaluate and adjust to new circumstances and interpretations, is a great virtue from the point of view of survival.
Furthermore, our realities are socially constructed. We learn from others what to note and what to ignore. Our friends, families, organisations, the media and culture shape what we perceive, how we interpret it, and how we act in response. It is our social environment that largely determines the ‘positions’ that we see as being available and the ones that we choose for ourselves.
YouTube Video, positioning theory in systemic family therapy, CMNtraining, July 2015, 33:10 minutes
Positioning and epistemology
Although we strive for consistency between our beliefs and what we take to be external reality, in fact the relationship can be quite tenuous. You might, for example, take the position that you are a good driver. Then, one day, you have an accident, but instead of changing your belief about yourself, you believe that it was the fault of another driver. A second accident is blamed on the weather. A third is put down to poor lighting on the road, and a fourth to road-works. You have now built up a mini system of beliefs about the factors that lead to accidents, all of which could more easily be explained by your being a poor driver. It is not until, one day, you are charged with careless driving that you finally have to revise this somewhat fragile belief system.
Belief systems can be a bit like bubbles. They can grow larger as more supporting beliefs are brought in to sustain an original position. Then other beliefs must prop up the supporting beliefs and so on. If the original position had no support in reality the whole system will be fragile, eventually become unsustainable and burst leaving nothing in its place.
In contrast, belief systems that have a stronger foundation do not need propping up with other fragile positions. Each position can stand alone and can therefore reinforce the others it is related to. This results in a stable belief system with mutually reinforcing positions, and even if one falls away, the rest of the structure will still stand.
Having said that, no belief system is invulnerable. Even the most ‘solid’ of belief systems, such as Einsteinian physics, came under attack from quantum mechanics. As Kuhn observed in 1962, science progresses not by the gradual accumulation of ‘facts’ but by the eventual replacement of established paradigms with new ones, once the old paradigm becomes completely unsustainable. This model probably also applies to the belief systems of individuals, where established systems are clung to for as long as they can be sustained.
As we move towards the development of artificial intelligence and robots it will be increasingly necessary to understand the logic and mathematics of belief systems.
Noah Friedkin and colleagues (2016) developed a mathematical model of how belief systems change in response to new evidence, and illustrate it with how beliefs changed amongst the US public as new evidence emerged in relation to the Iraq war. http://www.uvm.edu/~cdanfort/csc-reading-group/friedkin-science-2016.pdf
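The flavour of such models can be conveyed with a minimal sketch in the spirit of the Friedkin–Johnsen opinion update, the classic formalisation associated with Friedkin’s work (all the numbers below are invented): each agent’s belief is a weighted blend of the beliefs of those it listens to and its own initial, anchored belief.

```python
# Minimal Friedkin-Johnsen-style sketch (pure Python, invented numbers):
# each agent's belief blends social influence with its own anchored belief.

def fj_step(x, x0, W, s):
    """One update: x_i <- s_i * sum_j W[i][j] * x[j] + (1 - s_i) * x0_i."""
    n = len(x)
    return [s[i] * sum(W[i][j] * x[j] for j in range(n))
            + (1 - s[i]) * x0[i]
            for i in range(n)]

x0 = [1.0, 0.0, 0.5]        # initial beliefs (e.g. confidence in a claim)
W = [[0.0, 0.5, 0.5],       # who listens to whom (each row sums to 1)
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
s = [0.2, 0.9, 0.5]         # susceptibility: how open each agent is to influence

x = x0[:]
for _ in range(100):        # iterate until the beliefs settle
    x = fj_step(x, x0, W, s)
# The stubborn agent (low s) stays near its initial belief, while the
# highly susceptible agent is pulled strongly toward the group.
```

Even this toy version shows the key qualitative behaviour: stubborn agents anchor the group, and the final mix of beliefs depends on both the influence network and each agent’s openness to it.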
How will robots build their belief systems and change them in the light of evidence? This is one of the issues examined at www.robotethics.co.uk .
The naive theory of knowledge is that our knowledge and perceptions are simply a mirror of reality. However, the more we think about it the more we understand that what we take to be reality is very much a function of how we perceive and interpret it.
Some animals, for example, can detect frequencies of sound and light that people cannot detect – they literally live in a different world. In the same way, one person may ‘see’ or ‘interpret’ what they perceive very differently from another person. They are ‘sensitive’ to quite different features of the world. A skilled football player may see a foul or an offside, while a non-player ignorant of the rules just sees players aimlessly kicking a ball around. Also, we are highly selective in what we attend to. When we look at a clock, we see the time, but afterwards often cannot say whether the clock had numbers or marks, let alone the colour of the clock-face. In the post-modern view of the world, reality is ‘constructed’ from the meaning we actively seek and project onto it, rather than what we passively receive or record.
Positioning in relationships
The term ‘relationship’ can include personal relationships, work relationships, relationships between organisations, relationships between the citizen and the state, relationships between countries and many more. It can be easier to think in terms of personal relationships first and then go on to apply the principles to other types of relationship.
Relationships reveal an important characteristic of positioning. If I take one position, I may necessarily force you into another, whether you like it or not. So, if I take the position that you are ‘lazy’, for example, then you either have to agree or challenge that position. One way or another, ‘laziness’ has become a construct within our relationship, and it is quite difficult to dismiss or ignore, especially if you are labelled as ‘lazy’ at every opportunity. It is hard to live with another person’s positioning of you that you yourself do not accept. It is a kind of assault on your own judgement. We not only seek internal consistency but also consistency with others’ perceptions. Any dissonance or discrepancy will create a tension that motivates a desire to resolve it.
There is also a more subtle form of positioning. This is not necessarily positioning with respect to a particular issue but a more general sense in which you relate to a particular person, society in general, a job, or indeed more or less anything you care to think about. Are you close to or distant from it; behind it or ahead of it; on top of it, or is it on top of you? Here positioning is being used as a metaphor for how you generally relate to something.
Youtube Video, Relationship Position – Metaphors with Andrew T. Austin, Andrew T. Austin, July 2012, 13:34 minutes
Another interesting idea, related to spatial positioning in relationships, is Lewin’s force field theory, in which the forces ‘for’ and ‘against’ some change are assessed. If, for example, a relationship contains some attracting forces and some repelling forces (an approach/avoidance conflict), then a party to the relationship may find some optimal distance at which the forces balance. If the other party has a different optimal distance at any point in time, then we leave Lewin’s theory and enter negotiation.
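The balance-of-forces idea can be sketched numerically. The force laws below are invented purely for illustration; the point is that when attraction and repulsion vary differently with distance, there is a ‘comfortable’ distance at which they cancel:

```python
# Numerical sketch of an approach/avoidance balance (invented force laws):
# attraction pulls the parties together, repulsion pushes them apart, and
# the 'comfortable' distance is where the two cancel out.

def net_force(d, attract=1.0, repel=2.0):
    """Positive = net pull closer; negative = net push away."""
    return attract - repel / d   # repulsion dominates at close range

# Scan candidate distances for the point where the net force crosses zero.
balance = min((step / 10 for step in range(1, 101)),
              key=lambda d: abs(net_force(d)))
# With these constants the forces balance at d = repel / attract = 2.0
```

Closer than the balance point, repulsion wins and pushes the party back out; further away, attraction wins and draws them in – which is exactly the stable equilibrium of an approach/avoidance conflict.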
Youtube Video, Lewin, headlessprofessor, November 2015, 5:35 minutes
In a relationship, one way of resolving a discrepancy is to argue the case. So I may argue that I am not lazy and support this by evidence – ‘look at all the things I do…’. Another way is to ‘live up to’ the positioning. So, if you describe me as ‘kind’, I may start to exhibit kinder behaviours, and if you position me as ‘lazy’ I may be more likely to stay on the couch. Both ‘arguing the case’ and ‘living up to’ can be seen as an attempt to resolve a positioning discrepancy – to seek consistency and to simplify.
However, people, being as intelligent as they are, can often predict, or at least guess at, each other’s positions, and can then use this knowledge to alter their own actual or declared positions. A positive use of this would be to act in a way that supports or cooperates with another person’s position. So, if I can guess that you would not want to go out to see some of our friends tonight, then to save time or argument I might say that I don’t want to go out either, even if in fact I do.
Sometimes this will involve negotiation. We may not want the same things but we may well be prepared to ‘trade’. A good trade is where both parties can change their position on something that costs little to them but gives a lot of value to the other. I might say ‘if we go out we can get a take-away’ on the way back, knowing that this will make the proposition more attractive to you.
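The take-away example can be put in numbers. The costs and values below are entirely invented; the point is simply that a good trade is one where each side gives up something cheap to them but valuable to the other, so both come out ahead:

```python
# Sketch of a 'good trade' with invented numbers. Each side gives up
# something that costs them little but is worth a lot to the other.
cost_of_takeaway_to_me   = 1   # a small detour on the way back
value_of_takeaway_to_you = 5   # you love a take-away
cost_of_going_out_to_you = 2   # mild reluctance to go out
value_of_going_out_to_me = 6   # I really want to see our friends

my_gain    = value_of_going_out_to_me - cost_of_takeaway_to_me    # 6 - 1 = 5
your_gain  = value_of_takeaway_to_you - cost_of_going_out_to_you  # 5 - 2 = 3
good_trade = my_gain > 0 and your_gain > 0                        # both profit
```

Honest declarations of cost and value are what make such surpluses easy to find; as the next point notes, hiding them turns the trade into guesswork.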
Alternatively, I might exaggerate the extent to which I think going would be a good thing, knowing that we might then settle on going for a short time (which is what I really want). However, once we get into this type of ‘hidden’ positioning everything starts to get more complicated as we both try to out-guess what the other really wants. Trades are made considerably easier if both parties trust each other to be honest about their own costs and values. Negotiation will start to get difficult as soon as one or both parties hide information about cost and value in an attempt to seek advantage (e.g. by pretending that something is of no value to them when it is).
It is useful to distinguish between one’s ‘position’ and one’s ‘interest’. One’s position in a negotiation is what you say publicly to the other party, whereas one’s interest is often hidden. One’s interest can be thought of as the reason you hold a particular position. This reason may be ‘because I want to get as much for myself out of this transaction as possible’, and we often (sometimes unjustifiably) attribute this interest to the other party. But as often as not the reason may be quite different. It might even be that the other party wants you to gain as much as possible out of the transaction, but your natural suspicion prevents you from seeing their genuine motive. Equally, the other person’s interest might be nothing to do with how much each gets out of it. You may not want to go out because it’s cold outside. If this ‘interest’ is revealed it can open up new solutions – ‘we can take the car’ (when the default assumption was that we would walk).
Youtube video, Interests and Positions in Negotiation – Noam Ebner with Vanessa Seyman, Noam Ebner, February 2015, 15:03 minutes
Although it is possible to hold hidden positions to seek advantage or to manipulate, much of what goes on in negotiation is more to do with understanding our own and another’s interests. A lot of the time we only have vague ideas about where we stand in our interests and positions. We have even less information about where somebody else stands. We need to test possible positions as they apply to particular circumstances before we can make up our minds. I need to say ‘let’s go out tonight’ before I know how either you or I will actually feel about it, and as we debate the various factors that will influence the decision we may both sway from one position to another, also taking the other’s reactions and uncertain interests and position into account, before settling on our own position, let alone a mutual one.
However, through the informal use of positioning theory in everyday life, we can identify and make explicit what the various dimensions are. We can reveal where each party places themselves and the other on these dimensions and where the differences lie. This takes an important step towards arriving at decisions that we can agree on.
Positioning, power and control
The rights and responsibilities that we confer on each other, and accept for ourselves, determine the power relationships between us. Studies of power amongst college students in the US suggest that power is granted to individuals by others rather than grabbed. Certain people are positioned by others to rise in the social hierarchy because they are seen to benefit a social group.
Youtube video, Social Science & Power Dynamics | UC Berkeley Executive Education, berkeleyexeced, May 2016, 3:43 minutes
Donald Trump’s rise to power can be read within this framework as power granted by the manoeuvrings of the Republican Party in its candidate selection process and by the growing group of economically disenfranchised workers in the US. Similarly, the rise to prominence of UKIP in the UK can be read as having followed a similar pattern.
In the power relationships between individuals, often very little is spelt out, and rights and duties between individuals can be in constant flux. In principle it is possible to formalise the positions of the parties in a relationship in a contract. Marriage is a high-level contract, the terms of which have been ‘normalised’ by society, mainly in the interest of maintaining the stability and hence the predictability of the social structure. Many of the detailed terms are left undefined and are themselves a matter for negotiation, as the need arises, such that the structures holding the relationship between two individuals together can flex a great deal. Many of the terms are implied by social convention within the immediate culture and circumstances of the parties. Some terms may be explicitly discussed and negotiated, especially when one party feels there has been a breach on the part of the other party. As people and circumstances change, terms may be re-negotiated. It may take major, repeated or prolonged imbalances or breaches of implied terms to break the ‘contract’.
For example, if your position is that I have a duty to supply you with your dinner when you come home from work, and I accept that you have a right to that position, then we have established a power relationship between us. If I do not cook your dinner one evening then your right has been breached and you may take the position that you have a right of redress. Perhaps it has created an obligation that I should do something that provides an equivalent value to you, and in that sense, I am in your debt. Or perhaps it gives you the right to complain. Alternatively, I may take the position that while you have that right in general, I have the right to a night off once in a while, and then we may be into a negotiation about how much notice I may be expected to give, how often and so on. The rights and duties with respect to cooking dinner will be just one of many terms in the implied contract between us. It may be that I accept your right to be provided with dinner on the basis that you pay for the food. And this is only the start. There may be a long and complicated list of implied terms, understood circumstances of breach and possible remedies to rectify breaches.
Ultimately, to maintain the relationship we must both expect to have a net benefit over the longer term. We may be prepared to concede positions in the short term either by negotiating for something else of immediate value, or the value may be deferred as a form of social ‘debt’ with the confidence and expectation that the books will be balanced one day. However, the precise balancing of the books doesn’t matter much so long as the relationship confers a net benefit.
Coercive, financial and other forms of power
Descriptions of power in terms of rights, duties, laws, and social norms refer to the type of power we are used to in democratic society. In some relationships the flexibility to change or negotiate a change in position is severely limited. An authoritarian state may maintain power using the police and the army. An authoritarian person, narcissist or psychopath will also demonstrate an inflexibility over positioning. The authoritarian state or individual may use coercive power. The narcissist and the psychopath may have difficulty in empathising with another person’s position.
Wealth also confers power. People and organisations can be paid to take up particular positions – both in the sense of jobs and in the sense of attitudes. Pay for a marketing campaign and you can change people’s positions on whether they will buy something or vote some way. In modern market-based societies, wealth is legitimised as an acceptable way of granting power to people and organisations that are seen to confer benefit on society. However, wealth can easily be used, both subtly and coercively, to change people’s positions to align with value systems that are not their own.
Youtube video, How to understand power – Eric Liu, TED-Ed, November 2014, 7:01 minutes
Positioning in organisations
Just as important are the positions taken within an organisation and the various dilemmas and tensions that these reveal. For example, in most organisations there is a constant tension between quality and cost. Some parts of the organisation will be striving to keep costs down while others are striving to maintain quality. Exactly how this plays out, and how it matches the demand in the market, may determine a product’s success or failure. The National Health Service (NHS) in the UK is a classic example of a publicly funded organisation that is in a constant struggle to maintain quality standards within cost constraints.
Different parts of the organisation will take different positions on the importance of various stakeholders. The board may be concerned about shareholder value, the management concerned to satisfy customers and the workers concerned for the welfare of the staff. The R&D department may be more concerned about innovation and the sales force more concerned about the success of the current product lines. Again, by making explicit the positions of each group, it is possible to identify differences, debate the trade-offs and more readily arrive at policies and actions that are agreed to serve their mutual interests. Where tensions cannot be resolved at lower levels in the organisation, they can become the concern of the executive (see ‘The Executive Function’).
Another type of positioning has an important role to play in organisations. A commercial company may spend a lot of effort identifying and maintaining its brand and market positioning. This is its position with respect to its competitors and its customers, and helps define its unique selling points (USPs).
Youtube Video, Marketing: Segmentation – Targeting – Positioning, tutor2u, April 2016, 4:08 minutes
‘Don’t ask for permission, ask for forgiveness’ is a mantra chanted by people and companies that put a premium on innovation. How we each act is not determined only by what we can do. It is a matter of both what we can do and what we are permitted to do. We can be permitted by others, who confer on us the right to do something, and by the rights we confer on ourselves. If we seek forgiveness rather than permission, we are conferring on ourselves the right to take risks and then respond to the errors we make that cross the boundaries of the rights and duties other people confer on us.
We live in a competitive social world where we may have some choice over our trades in rights and duties. If I take the position that employer X is not paying me enough for the job I do, I can potentially go to employer Y instead. However, there are costs and uncertainties in switching that make social systems relatively stable. The distribution of power is therefore constrained to some extent by the ‘free market’ in the trading of rights and duties.
Positioning in language and culture
Positioning can involve ascribing attributes to people (e.g. she is strong, he is kind etc.).
Every time you label something you are taking a position.
Linguistic labels can have a powerful influence within a culture because they can come heavily laden with expectations about rights and responsibilities. Ascribing the attribute ‘disabled’ or ‘migrant’, for example, may confer rights to benefits, and may confer a duty on others to help the vulnerable overcome their difficulties. Ascribing the attribute ‘female’, until relatively recently, assigned different legal rights and duties to the attribute ‘male’. However, the positioning can extend far beyond legal rights and duties to a whole range of less explicit rights and duties that can be instrumental in determining power relationships.
It is not always appreciated how far the labels put on people position them with respect to both explicit and implicit rights and duties, and it is easy to use labels without a full appreciation of the consequences. The labels we put on people are not isolated judgements or positions. Through learned associations, they come in clusters. So to label somebody as ‘intelligent’ is also to imply that they are knowledgeable and reasonable. It even implies that there is a good chance that they will wear glasses. The label brings to mind a whole stereotype that may involve many detailed characteristics.
This is both useful and problematic. It is useful because it prepares us to expect certain things, and that saves us having to work out everything from detailed observation and first principles. It is a problem because no particular instance is likely to conform to the stereotype and there is a good chance that we will misinterpret their actions or intentions in particular situations. Particularly pernicious is when, through stereotyping, we position somebody along the dimensions ‘friend or foe’ (or ‘inferior – superior’) because of the numerous implications for the way in which we infer rights and duties from this, and hence how we behave in relation to them.
Particularly pernicious is when language is used to mislead. This is often the case in the language of politics and the language of advertising.
The terms used to describe a policy or product can create highly misleading expectations.
Youtube video, Language of Politics – Noam Chomsky, Serious Science, September 2014, 12:45 minutes
George Orwell in his book ‘1984’ understood only too well how language can be used to influence and constrain thought.
Youtube Video, George Orwell 1984 Newspeak, alawooz, June 2013, 23:08 minutes
Positions, rights and duties
Much of our conversation concerns categorising things and then either implicitly or explicitly ascribing rights and responsibilities. So we may gossip on the bus about whether a schoolmate is a bully, whether a person is having an affair or if someone is a good neighbour. In so doing we are making evaluations – or, in other words, taking positions.
The bully has no right to act as they do and confers on others the right to punish. Similarly, the person having an affair may be seen as neglecting a duty of fidelity and therefore also relinquishing rights. The good neighbour may be going beyond their duty and attracting the right of respect.
Between two individuals much discussion involves negotiation over rights and duties and what constitutes fair trade-offs, both in principle and in practice. If one person does something for another, an implicit principle of fairness through reciprocation creates an obligation (a duty that can be deferred) to do something of equal value (but not necessarily at equal cost) in return. A perceived failure to perform a duty may create a storyline of victimisation in the mind of one party that the other party may be blissfully unaware of, unless conversation takes place to resolve it.
When you have a duty, it is generally to another person or organisation. Typically, you have a duty when you have the power to overcome another person’s vulnerability. So if a person is too short to reach something on a high shelf, and you are tall enough to reach it, people tend to believe that you have a duty to do so.
To claim a right is to admit a vulnerability and to assert that somebody with the power to address that vulnerability will do so.
The right to a fair trial admits a vulnerability to the rushed judgement of the crowd (or the monarch), and confers a duty on the judicial system to protect you from this. The right to citizenship and healthcare admits to vulnerabilities with regard to security and health and confers a duty on the state to provide it.
Youtube video, What Are Rights? Duty & The Law | Philosophy Tube, Philosophy Tube, January 2016, 6:41 minutes
Positioning, ethics and morality
The psychologist Lawrence Kohlberg, in 1958, developed a test of moral reasoning and proposed a number of stages of development in being able to take moral positions. The higher the stage, the greater the ability to take into account a range of moral positions. A small child may focus on only one aspect of a moral problem. At later stages a person will take into account the positions of different interests – the family, the community, the law and so on. At stage 4 there is an understanding of social order. At stage 6 (a stage that very few people reach) a person is able to reason through a complete range of moral positions. Most adults operate at stages 3 or 4. Kohlberg’s methods have since been questioned and elaborated. One theory is that we act morally because of our emotional reactions to a situation and that moral reasoning is more of a social act used when persuading other people. There are also cultural differences in the importance attributed to moral positions.
BBC Radio 4, Mind Changers: The Heinz Dilemma, September 2008, 27:32 minutes
Positioning in international relations
International relations are nearly always set within the context of multiple parties. Even when considering Arab/Israeli or US/Mexico relations there is a context that involves many other parties, and positions are held in the light of alignments with close ‘allies’. In fact the context can be quite entangled and confusing, as in the case of Syria (involving the Syrian regime, the Syrian people, the Islamic State, the Russians and the US, as well as many other factions, let alone international groupings such as the United Nations and charities). Most importantly, any government or regime may have to square its position on the international stage with its position within its own country. All these factors considerably reduce the flexibility of re-positioning, except when circumstances configure in such a way that there is a window of opportunity.
Examples of international conflicts can be found at:
Often in international relations it is difficult to establish a party’s true costs and true values because parties may hide or exaggerate these to seek a negotiating advantage. It is a matter of working out, for each party, where there is least rigidity on a set of relevant positions, defining small changes from one position (or set of positions) to another, and then working out how to present this change to different parties in terms of their own values, language and objectives.
Youtube video, Negotiations | Model Diplomacy, Council on Foreign Relations, November 2016, 4:57 minutes
It can be important to have a neutral or otherwise acceptable party present propositions or lead negotiations. In terms of how it will be received, the source of a communication can be more influential than the communication itself.
Separating out the underlying reality and logic of the positions from how they are presented and by whom is a first step in resolving conflict. However, throw in unpredictable factors, like a US president failing to follow any previous logic or process, and any such model can break down.
The hidden positions of designed objects and procedures
All artifacts contain embedded positions. So, a door handle embeds the position that it is ok to open the door, and a microphone embeds the position that it is ok to record or amplify sound. More subtle still, merely making one thing more readily available than another can embed a position. So, if there is a piano in a bar or a railway station, its presence automatically raises the possibility that it may be ok to play it.
This characteristic of all objects and artifacts has a specific name within psychology. It is called ‘affordance’. The door handle affords opening and the piano in a public place affords playing.
Youtube video, Universal Design Principles 272 – Affordance, anna gustafsson, October 2014, 2:10 minutes
Looking at things from this point of view may be difficult to grasp, but it has massive implications. These positions are, in one sense, obvious, but in another they are difficult to see. They can be so obvious that they go unquestioned and are effectively hidden from scrutiny. They can easily be used to manipulate and exert power, without people being particularly aware of it.
There are several examples that apply to the design of procedures:
• A corporation, for example, may make it very easy for you to buy a service but difficult for you to discontinue it (e.g. automatically renewing subscriptions that require you to contact the right person with the right subscription information, both of which are hard to find, before you can cancel).
• A government may have you complete a long form and meet many requirements in order to claim benefits, while providing many small reasons for a benefit to be taken away.
• Another classic and more obvious example is how, in the UK, energy companies offer a range of time-limited tariffs and switch you to a more expensive tariff at the end of the period, requiring you to make the effort to switch suppliers or pay substantially more (as much as 50% extra) for energy.
These subtle affordances are often just accepted or overlooked as just being ‘the way things work’, but when all one’s energy is taken up dealing with the trivia of everyday life, they turn out to be a powerful force that ‘keeps you in your place’ (whether or not they are deliberately designed to do so).
Positioning and narrative
An utterance in a conversation can mean entirely different things depending upon the context. So if I ask ‘Did you pass the paper?’ I will mean quite different things depending on whether I am referring to an incident where somebody left a newspaper on a train, to a recent exam we had been discussing, or to a paper being considered by a committee.
The storyline is different in each case, and the position I take in asking the question may also be different. My question may be simple curiosity, the expression of a hope, or a signal of my intention to act in a particular way, again depending on the context or storyline. In fact, my position will probably be unclear unless I explain it. It is more than likely that you will interpret it one way, in accordance with your theory about what’s going on, while I mean it a different way, according to my own. Furthermore, we may never realise it, and may be quite surprised should we compare our accounts of the conversation at a later date.
Youtube Video, Positioning Theory, ScienceEdResearch, July 2017, 6:02 minutes
By contrast, the blog post called ‘It’s like this’ notes how ‘the single story’ (a fixed and commonly held interpretation or position) can trap whole groups of people into a particular way in which others see them and how they themselves see the world. One way or another ‘position’ has a powerful influence.
Positioning theory integrates
This tour around the many applications of ‘positioning theory’ shows how it integrates many of the concepts being put forward in this series of blog postings. It is a powerful tool for understanding the individual, the individual in the context of others, social institutions in relation to each other and institutions in relation to the individual. In its relation to rights and duties it addresses some of the dynamics of power and control. It even transcends the distinction between people and objects, and has profound implications for the development of artificial intelligence.
Post Truth and Trust
The term ‘post truth’ implies that there was once a time when the ‘truth’ was apparent or easy to establish. We can question whether such a time ever existed, and indeed the ‘truth’, even in science, is constantly changing as new discoveries are made. ‘Truth’, ‘Reality’ and ‘History’, it seems, are constantly being re-constructed to meet the needs of the moment. Philosophers have written extensively about the nature of truth; there is an entire branch of philosophy, called ‘epistemology’, devoted to it. Indeed my own series of blogs starts with a posting called ‘It’s Like This’ that considers the foundation of our beliefs.
Nevertheless there is something behind the notion of ‘post truth’. It arises out of the large-scale manufacture and distribution of false news and information made possible by the internet and facilitated by the widespread use of social media. This combines with a disillusionment in relation to almost all types of authority including politicians, media, doctors, pharmaceutical companies, lawyers and the operation of law generally, global corporations, and almost any other centralised institution you care to think of. In a volatile, uncertain, changing and ambiguous world who or what is left that we can trust?
YouTube Video, Astroturf and manipulation of media messages | Sharyl Attkisson | TEDxUniversityofNevada, TEDx Talks, February 2015, 10:26 minutes
All this may have contributed to the populism that has led to Brexit and Trump, and can be said to threaten our systems of democracy. However, to paraphrase Churchill’s famous remark, ‘democracy is the worst form of Government, except for all the others’. But does the new generation of distributed and decentralising technologies provide a new model in which any citizen can transact with any other citizen, on any terms of their choosing, bypassing all systems of state regulation, whether democratic or not? Will democracy become redundant once power is fully devolved to the individual and individuals become fully accountable for their every action?
Trust is the crucial notion that underlies belief. We believe who we trust and we put our trust in the things we believe in. However, in a world where we experience so many differing and conflicting viewpoints, and we no longer unquestioningly accept any one authority, it becomes increasingly difficult to know what to trust and what to believe.
To trust something is to put your faith in it without necessarily having good evidence that it is worthy of trust. If I could be sure that you could deliver on a promise then I would not need to trust you. In religion, you put your trust in God on faith alone. You forsake the need for evidence altogether, or at least, your appeal is not to the sort of evidence that would stand up to scientific scrutiny or in a court of law.
Blockchain to the rescue
Blockchain is a decentralised technology for recording and validating transactions. It relies on computer networks to widely duplicate and cross-validate records. Records are visible to everybody providing total transparency. Like the internet it is highly distributed and resilient. It is a disruptive technology that has the potential to decentralise almost every transactional aspect of everyday life and replace third parties and central authorities.
YouTube Video, Block chain technology, GO-Science, January 2016, 5:14 minutes
Blockchain is often described as a ‘technology of trust’, but its relationship to trust is more subtle than first appears. Whilst Blockchain promises to solve the problem of trust, in a twist of irony it does this by creating a kind of guarantee. Because of that guarantee, you no longer have to be concerned about trusting the other party to a transaction: what you trust instead is the Blockchain record of what you agreed. You can trust this record because, once you understand how it works, it becomes apparent that the record is secure and cannot be changed, corrupted, denied or mis-represented.
Youtube Video, Blockchain 101 – A Visual Demo, Anders Brownworth, November 2016, 17:49 minutes
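The tamper-evidence described above can be illustrated with a toy hash chain. This is only a sketch of the core idea behind the visual demo linked above, not a real blockchain (there is no network, no consensus, and the block structure and function names here are invented for the example): each block stores the hash of its predecessor, so changing any historical record invalidates every link after it.

```python
import hashlib
import json

def block_hash(data, prev_hash):
    # Hash the block's contents together with the previous block's hash,
    # so any change to history ripples forward through the chain.
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev,
                  "hash": block_hash(data, prev)})
    return chain

def verify(chain):
    # Re-compute every hash and check each block points at its predecessor.
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        if block_hash(block["data"], block["prev_hash"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(verify(chain))                     # True: the record is consistent
chain[0]["data"] = "Alice pays Bob 500"  # tamper with history
print(verify(chain))                     # False: every later link now fails
```

In a real blockchain the same principle is combined with wide duplication across a network, so a forger would have to rewrite not one copy of the chain but most of them simultaneously.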
It has been argued that Blockchain is the next revolution in the internet, and indeed is what the internet should have been based on all along. If, for example, we could trace the provenance of every posting on Facebook, then, in principle, we would be able to determine its true source. There would no longer be doubt about whether or not the Russians hacked into the Democratic party computer systems, because all access would be held in a publicly available, widely distributed, indelible record.
However, the words ‘in principle’ are crucial and gloss over the reality that Blockchain is just one of many building-blocks towards the guarantee of trustworthiness. What if the Russians paid a third party in untraceable cash to hack into records or to create false news stories? What if A and B carry out a transaction but, unknown to A, B has stolen C’s identity? What if there are some transactions that are off the Blockchain record (e.g. the subsequent sale of an asset) – how do they get reconciled with what is on the record? What if somebody one day creates a method of bringing all computers to a halt or erasing all electronic records? What if somebody creates a method by which the provenance captured in a Blockchain record were so convoluted, complex and circular that it was impossible to resolve however much computing power was thrown at it?
I am not saying that Blockchain is no good. It seems to be an essential underlying component in the complicated world of trusting relationships. It can form the basis on which almost every aspect of life from communication, to finance, to law and to production can be distributed, potentially creating a fairer and more equitable world.
YouTube Video, The four pillars of a decentralized society | Johann Gevers | TEDxZug, TEDx Talks, July 2014, 16:12 minutes
Also, many organisations are working hard to try and validate what politicians and others say in public. These are worthy organisations and deserve our support. Here are just a couple:
Full Fact is an independent charity that, for example, checks the facts behind what politicians and others say on TV programmes like BBC Question Time. See: https://fullfact.org. You can donate to the charity at: https://fullfact.org/donate/
More or Less is a BBC Radio programme (over 300 episodes) that checks behind purported facts of all sorts (from political claims to ‘facts’ that we all take for granted without questioning them). http://www.bbc.co.uk/programmes/p00msxfl/episodes/player
However, even if ‘the facts’ can be reasonably established, there are two perspectives that undermine what may seem like a definitive answer to the question of trust. These are the perspectives of constructivism and intent.
Constructivism, intent, and the question of trust
From a constructivist perspective it is impossible to put a definitive meaning on any data. Meaning will always be an interpretation. You only need to look at what happens in a court of law to understand this. Whatever the evidence, however robust it is, it is always possible to argue that it can be interpreted in a different way. There is always another ‘take’ on it. The prosecution and the defence may present entirely different interpretations of much the same evidence. As Tony Benn once said, ‘one man’s terrorist is another man’s freedom fighter’. It all depends on the perspective you take. Even a financial transaction can be read in different ways. While its existence may not be in dispute, it may be claimed that it took place as a result of coercion or error rather than being freely entered into. The meaning of the data is not an attribute of the data itself. It is, at least in part, an attribute of the perceiver.
Furthermore, whatever is recorded in the data, it is impossible to be sure of the intent of the parties. Intent is subjective. It is sealed in the minds of the actors and inevitably has to be taken on trust. I may transfer the ownership of something to you knowing that it will harm you (for example a house or a car that, unknown to you, is unsafe or has unsustainable running costs). On the face of it the act may look benevolent whereas, in fact, the intent is to do harm (or vice versa).
Whilst for the most part we can take transactions at their face value, and it hardly makes sense to do anything else, the trust between the parties extends beyond the raw existence of the record of the transaction, and always will. This is not necessarily any different when an authority or intermediary is involved, although the presence of a third-party may have subtle effects on the nature of the trust between the parties.
Lastly, there is the pragmatic matter of adjudication and enforcement in the case of breaches to a contract. For instantaneous financial transactions there may be little possibility of breach in terms of delivery (i.e. the electronic payments are effected immediately and irrevocably). For other forms of contract though, the situation is not very different from non-Blockchain transactions. Although we may be able to put anything we like in a Blockchain contract – we could, for example, appoint a mutual friend as the adjudicator over a relationship contract, and empower family members to enforce it, we will still need the system of appeals and an enforcer of last resort.
I am not saying that Blockchain is unnecessary or unworkable, but I am saying that it is not the whole story and we need to maintain a healthy scepticism about everything. Nothing is certain.
Psychological experiments in trust suggest that trust is more situational than we normally think. Whether we trust somebody often depends on situational cues such as appearance and mannerisms. Some cues are to do with how similar one person feels to another. Cues can even be used to ascribe moral intent to robots and other artificial agents.
YouTube Video, David DeSteno: “The Truth About Trust” | Talks at Google, Talks at Google, February 2014, 54:36 minutes
Trust is a dynamic process involving vulnerability and forgiveness and sometimes needs to be re-built.
YouTube Video, The Psychology of Trust | Anne Böckler-Raettig | TEDxFrankfurt, TEDx Talks, January 2017, 14:26 minutes
More than half the world lives in societies that document identity, financial transactions and asset ownership, but about 3 billion people do not have the advantages that the ability to prove identity and asset ownership confers. Blockchain and other distributed technologies can provide mechanisms that can directly service the documentation, reputational, transactional and contractual needs of everybody, without the intervention of nation states or other third parties.
YouTube Video, The future will be decentralized | Charles Hoskinson | TEDxBermuda, TEDx Talks, December 2014, 13:35 minutes