How do we know what we know?
This article considers:
(1) the ways we come to believe what we think we know
(2) the many issues with the validation of our beliefs
(3) the implications for building artificial intelligence and robots based on the human operating system.
I recently came across a video (on the site http://www.theoryofknowledge.net) that identified the following ‘ways of knowing’:
- Sensory perception
- Memory
- Intuition
- Reason
- Emotion
- Imagination
- Faith
- Language
This list is mainly about mechanisms or processes by which an individual acquires knowledge. It could be supplemented by other processes, for example ‘meditation’, ‘science’ or ‘history’, each of which provides its own set of approaches to generating new knowledge for both the individual and society as a whole. There are many different ways in which we come to formulate beliefs and understand the world.
Youtube Video, Theory of Knowledge: Ways of Knowing, New College of Humanities, December 2014, 9:32 minutes
In the spirit of working towards a description of the ‘human operating system’, it is interesting to consider how a robot or other Artificial Intelligence (AI), that was ‘running’ the human operating system, would draw on its knowledge and beliefs in order to solve a problem (e.g. resolve some inconsistency in its beliefs). This forces us to operationalize the process and define the control mechanism more precisely. I will work through the above list of ‘ways of knowing’ and illustrate how each might be used.
Let’s say that the robot is about to go and do some work outside and, for a variety of reasons, needs to know what the weather is like (e.g. when deciding whether to wear protective clothing, or how suitable the ground is for sowing seeds or for digging during construction work).
First it might consult its senses. It might attend to its visual input, note the patterns of light and dark, compare these to known states and conclude that it is sunny. The absence of the familiar sound patterns (and smell) of rain might provide confirmation. The whole process of matching the pattern of data it is receiving through its multiple senses against its store of known patterns can be regarded as ‘intuitive’, because it is not a reasoning process as such. In Kahneman’s sense of ‘system 1’ thinking, the robot just knows, without having to perform any reasoning task.
Youtube Video, System 1 and System 2, Stoic Academy, February 2017, 1:26 minutes
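As a rough sketch of this kind of non-reasoned recognition (all feature names and prototype values here are invented for illustration), ‘system 1’ can be modelled as nearest-prototype matching: no chain of inference, just a comparison of the current sensory pattern against stored patterns.

```python
def match_weather(sensed, prototypes):
    """Return the stored label whose prototype lies closest to the sensed input."""
    def distance(a, b):
        # Squared Euclidean distance over the shared feature names.
        return sum((a[k] - b[k]) ** 2 for k in a)
    # 'System 1': no reasoning, just pick the closest known pattern.
    return min(prototypes, key=lambda label: distance(sensed, prototypes[label]))

# Stored sensory patterns, features normalised to the 0..1 range.
prototypes = {
    "sunny": {"brightness": 0.9, "rain_sound": 0.0},
    "rainy": {"brightness": 0.3, "rain_sound": 0.8},
    "night": {"brightness": 0.05, "rain_sound": 0.0},
}

# Bright light, near-silence: the robot 'just knows' it is sunny.
judgement = match_weather({"brightness": 0.85, "rain_sound": 0.05}, prototypes)
```

In this sketch the confirmation across senses falls out for free: the sound feature simply contributes to the same distance calculation as the visual one.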
The knowledge obtained from matching perception to memory can nevertheless be supplemented by reasoning, or other forms of knowledge that confirm or question the intuitively-reached conclusion. If we introduce some conflicting knowledge, e.g. that the robot thinks it’s the middle of the night in its current location, we then create a circumstance in which there is dissonance between two sources of knowledge – the perception of sunlight and the time of day. This assumes the robot has elaborated knowledge about where and when the sun is above the horizon and can potentially shine (e.g. through language – see below).
In people the dissonance triggers the emotional state of ‘surprise’ and the accompanying motivation to account for the contradiction.
Youtube Video, Cognitive Dissonance, B2Bwhiteboard, February 2012, 1:37 minutes
Likewise, we might label the process that causes the search for an explanation in the robot as ‘surprise’. An attempt may be made to resolve this dissonance through Kahneman’s slower, more reasoned, system 2 thinking. Either the perception is somehow faulty, or the knowledge about the time of day is inaccurate. Maybe the robot has mistaken the visual and audio input as coming from its local senses when in fact the input has originated from the other side of the world. (Fortunately, people do not have to confront the contradictions caused by having distributed sensory systems).
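The dissonance-detection step itself can be sketched very simply (the source and claim names here are illustrative): scan beliefs from different sources and flag any claim to which two sources assign opposite truth values.

```python
def detect_dissonance(beliefs):
    """Flag any claim to which two sources assign opposite truth values."""
    conflicts, seen = [], {}
    for source, claim, value in beliefs:
        if claim in seen and seen[claim][1] != value:
            conflicts.append((seen[claim][0], source, claim))
        seen[claim] = (source, value)
    return conflicts

beliefs = [
    ("vision", "sun_is_shining", True),   # perception: bright light outside
    ("clock",  "sun_is_shining", False),  # time-of-day model: it is night here
]
# A non-empty conflict list is the trigger we labelled 'surprise'.
surprised = bool(detect_dissonance(beliefs))
```

The conflict list does not resolve anything; it only identifies what system 2 thinking now has to account for.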
Probably in the course of reasoning about how to reconcile the conflicting inputs, the robot will have had to run through some alternative possible scenarios that could account for the discrepancy. These may have been generated by working through other memories associated with either the perceptual inputs or other factors that have frequently led to misinterpretations in the past. Sometimes it may be necessary to construct unique possible explanations out of component explanations. Sometimes an explanation may emerge through the effect of numerous ideas being ‘primed’ through the spreading activation of associated memories. Under these circumstances, you might easily say that the robot was using its imagination in searching for a solution that had not previously been encountered.
Youtube Video, TEDxCarletonU 2010 – Jim Davies – The Science of Imagination, TEDx Talks, September 2010, 12:56 minutes
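The priming mechanism mentioned above can be sketched as spreading activation over an associative network (the network below is entirely invented for illustration): activation flows outward from the anomaly, and the most strongly activated ideas surface as candidate explanations.

```python
def spread_activation(graph, seeds, decay=0.5, rounds=2):
    """Propagate activation outward from seed ideas through associative links."""
    activation = {node: 0.0 for node in graph}
    for seed in seeds:
        activation[seed] = 1.0
    for _ in range(rounds):
        spread = dict(activation)
        for node, neighbours in graph.items():
            for n in neighbours:
                # A neighbour is primed in proportion to its source's activation.
                spread[n] = max(spread[n], activation[node] * decay)
        activation = spread
    return activation

# An invented associative memory linking the anomaly to possible explanations.
graph = {
    "unexpected_sunlight": ["remote_sensor_feed", "faulty_clock"],
    "remote_sensor_feed": ["input_from_other_side_of_world"],
    "faulty_clock": [],
    "input_from_other_side_of_world": [],
}
activation = spread_activation(graph, ["unexpected_sunlight"])
candidates = sorted((n for n in graph if n != "unexpected_sunlight"),
                    key=lambda n: activation[n], reverse=True)
```

Note that ‘input from the other side of the world’ only becomes available as an explanation via an intermediate association – the kind of multi-step priming that can look, from the outside, like imagination.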
Lastly, to faith and language as sources of knowledge. Faith is different because, unlike all the other sources, it does not rely on evidence or proof. If the robot believed, on faith, that the sun was shining, any contradictory evidence would be discounted, perhaps either as being in error or as being irrelevant. Faith is often sustained by the faith of others, which could itself be regarded as a form of evidence, but in general faith, or trust, fills the gap between a belief and the direct evidence for it.
Here is a religious account of faith that identifies it with trust in the reliability of God to deliver, where the main delivery is eternal life.
Youtube video, What is Faith – Matt Morton – The Essence of Faith – Grace 360 conference 2015, Grace Bible Church, September 2015, 12:15 minutes
Language as a source of evidence is a catch-all for the knowledge that comes second hand from the teachings and reports of others. This is indirect knowledge, much of which we take on trust (i.e. faith), and some of which is validated by direct evidence or other indirect evidence. Most of us take on trust that the solar system exists, that the sun is at the centre, and that the earth is in the third orbit. We have gained this knowledge through teachers, friends, family, tv, radio, books and other sources that in their turn may have relied on astronomers and other scientists who have arrived at these conclusions through observation and reason. Few of us have made the necessary direct observations and reasoned inferences to have arrived at the conclusion directly. If our robot were to consult databases of known ‘facts’, put together by people and other robots, then it would be relying on knowledge through this source.
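A minimal sketch of this kind of indirect knowledge (the facts, sources and trust levels below are invented assumptions, not measured quantities): each second-hand ‘fact’ carries its provenance, and acceptance depends on how much the consulting robot trusts that provenance.

```python
# A hypothetical fact store: each second-hand 'fact' carries its provenance.
facts = {
    "sun_at_centre_of_solar_system": {"value": True, "source": "astronomy_texts"},
    "earth_in_third_orbit": {"value": True, "source": "astronomy_texts"},
}
# Trust levels are illustrative assumptions.
trust = {"astronomy_texts": 0.95, "anonymous_forum": 0.2}

def believe(claim, facts, trust, threshold=0.5):
    """Accept an indirect claim only if its source is trusted enough."""
    fact = facts.get(claim)
    if fact is None:
        return None  # no indirect knowledge either way
    return fact["value"] if trust.get(fact["source"], 0.0) >= threshold else None
```

The point of the sketch is that the ‘fact’ itself is never checked against direct observation – only the reputation of where it came from, which is exactly the trust relationship described above.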
People like to think that their own beliefs are ‘true’ and that these beliefs provide a solid basis for their behaviour. However, the more we find out about the psychology of human belief systems the more we discover the difficulties in constructing consistent and coherent beliefs, and the shortcomings in our abilities to construct accurate models of ‘reality’. This creates all kinds of difficulties amongst people in their agreements about what beliefs are true and therefore how we should relate to each other in peaceful and productive ways.
If we are now going to construct artificial intelligences and robots that interact with us and whose behaviours impact the world, we want to be reasonably sure that the beliefs a robot develops provide a sound and understandable basis for its behaviour.
Unfortunately, every one of the ‘ways of knowing’ is subject to error. We can again go through them one by one and look at the pitfalls.
Sensory perception: We only have to look at the vast body of research on visual illusion (e.g. see ‘Representations of Reality – Part 1’) to appreciate that our senses are often fooled. Here are some examples related to colour vision:
Youtube Video, Optical illusions show how we see | Beau Lotto, TED, October 2009, 18:59 minutes
Furthermore, our perceptions are heavily guided by what we pay attention to, meaning that we can miss all sorts of significant and even life-threatening information in our environment. Would a robot be similarly misled by its sensory inputs? It’s difficult to predict whether a robot would be subject to sensory illusions, and this might depend on the precise engineering of the input devices, but almost certainly a robot would have to be selective in what input it attended to. Like people, there could be a massive volume of raw sensory input, and every stage of processing from there on would contain an element of selection and interpretation. Even differences in what input devices are available (for vision, sound, touch or even super-human senses like perception of non-visual parts of the electromagnetic spectrum) will create a sensory environment (referred to as the ‘umwelt’ or ‘merkwelt’ in ethology) that could be quite at variance with human perceptions of the world.
YouTube Video, What is MERKWELT? What does MERKWELT mean? MERKWELT meaning, definition & explanation, The Audiopedia, July 2017, 1:38 minutes
Memory: The fallibility of human memory is well documented. See, for example, ‘The Story of Your Life’, especially the work done by Elizabeth Loftus on the reliability of memory. A robot, however, could in principle, given sufficient storage capacity, maintain a perfect and stable record of all its inputs. This is at variance with the human experience, but could potentially mean that memory per se was more accurate, albeit subject to variance in what input was stored and in the mechanisms of retrieval and processing.
Intuition and reason: This is the area where some of the greatest gains (and surprises) in understanding have been made in recent years. Much of this progress is reported in the work of Daniel Kahneman that is cited many times in these writings. Errors and biases in both intuition (system 1 thinking) and reason (system 2 thinking) are now very well documented. A long list of cognitive biases can be found at:
Would a robot be subject to the same types of biases? It is already established that many algorithms used in business and political campaigning routinely build in biases, either deliberately or inadvertently. If a robot’s processes of recognition and pattern matching are based on machine learning algorithms that have been trained on large historical datasets, then bias is virtually guaranteed to be built into its most basic operations. We need to treat with great caution any decision-making based on machine learning and pattern matching.
Youtube Video, Cathy O’Neil | Weapons of Math Destruction, PdF YouTube, June 2015, 12:15 minutes
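The mechanism is easy to demonstrate with a deliberately tiny, invented dataset: a learner trained on historically biased decisions reproduces the bias, even though group membership only enters through a proxy feature (here, an imaginary postcode).

```python
# Invented hiring history: postcode acts as a proxy for group membership.
history = [
    ((0.9, "A"), True), ((0.8, "A"), True),    # group A candidates hired
    ((0.9, "B"), False), ((0.8, "B"), False),  # equally scored group B rejected
]

def train(history):
    """Memorise the historical hire rate per postcode."""
    rates = {}
    for (_score, postcode), hired in history:
        rates.setdefault(postcode, []).append(hired)
    return {p: sum(outcomes) / len(outcomes) for p, outcomes in rates.items()}

def predict(model, score, postcode):
    # The proxy feature dominates: the qualification score never changes the outcome.
    return model.get(postcode, 0.5) > 0.5

model = train(history)
decision_a = predict(model, 0.9, "A")  # hired
decision_b = predict(model, 0.9, "B")  # rejected, despite identical qualification
```

No one wrote ‘discriminate by group’ anywhere in the code; the bias arrives entirely through the training data, which is the point made above about machine learning on historical datasets.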
As for reasoning, there is some hope that the robustness of proofs that can be achieved computationally may save the artificial intelligence or robot from at least some of the biases of system 2 thinking.
Emotion: Biases in people due to emotional reactions are commonplace. See, for example:
Youtube Video, Unconscious Emotional Influences on Decision Making, The Rational Channel, February 2017, 8:56 minutes
However, it is also the case that emotions are crucial in decision-making. Emotions often provide the criteria and motivation on which decisions are made and without them, people can be severely impaired in effective decision-making. Also, emotions provide at least one mechanism for approaching the subject of ethics in decision-making.
Youtube Video, When Emotions Make Better Decisions – Antonio Damasio, FORA.tv, August 2009, 3:22 minutes
Can robots have emotions? Will robots need emotions to make effective decisions? Will emotions bias or impair a robot’s decision-making? These are big questions and are only touched on here, but briefly: there is no reason why emotions cannot be simulated computationally, although we can never know whether an artificial computational device will have the subjective experience of emotion (or thought). Probably some simulation of emotion will be necessary for robot decision-making to align with human values (e.g. empathy) and, yes, a side-effect of this may well be to introduce bias into decision-making.
For a selection of BBC programmes on emotions see:
Imagination: While it doesn’t make much sense to talk about ‘error’ when it comes to imagination, we might easily make value-judgments about what types of imagination should be encouraged and what should be discouraged. Leaving aside debates about how, say, extensive exposure to violent video games might affect imagination in people, we can at least speculate as to what might or should go on in the imagination of a robot as it searches through or creates new models to help predict the impacts of its own and others’ behaviours.
A big issue has arisen as to how an artificial intelligence can explain its decision-making to people. While AI based on symbolic reasoning can potentially offer a trace describing the steps it took to arrive at a conclusion, AIs based on machine learning would be able to say little more than ‘I recognized the pattern as corresponding to so and so’, which to a person is not very explanatory. It turns out that even human experts are often unable to provide coherent accounts of their decision-making, even when their decisions are accurate.
Having an AI or robot account for its decision-making in a way understandable to people is a problem that I will address in later analysis of the human operating system and, I hope, provide a mechanism that bridges between machine learning and more symbolic approaches.
Faith: It is often said that discussing faith and religion is one of the easiest ways to lose friends. Any belief based on faith is regarded as true by definition, and any attempt to bring evidence to refute it stands a good chance of being regarded as an insult. Yet people have different beliefs based on faith, and they cannot all be right. This not only creates a problem for people, who will fight wars over it, but is also a significant problem for the design of AIs and robots. Do we plug in the Muslim or the Christian ethics module, or leave it out altogether? How do we build values and ethical principles into robots anyway, or will they be an emergent property of their deep learning algorithms? Whatever the answer, it is apparent that quite a lot can go badly wrong if we do not understand how to endow computational devices with this ‘way of knowing’.
Language: As observed above, this is a catch-all for all indirect ‘ways of knowing’ communicated to people through media, teaching, books or any other form of communication. We only have to consider world wars and other genocides to appreciate that not everything communicated by other people is believable or ethical. People (and organizations) communicate erroneous information and can deliberately lie, mislead and deceive.
We strongly tend to believe information that comes from the people around us, our friends and associates, those people that form part of our sub-culture or in-group. We trust these sources for no other reason than we are familiar with them. These social systems often form a mutually supporting belief system, whether or not it is grounded in any direct evidence.
Youtube Video, The Psychology of Facts: How Do Humans (mis)Trust Information?, YaleCampus, January 2017
Taking on trust the beliefs of others that form part of our mutually supporting social bubble is a ‘way of knowing’ that is highly error prone. This is especially the case when combined with other ‘ways of knowing’, such as faith, that by their nature cannot be validated. Will robot communities, able to talk to each other instantaneously and ‘telepathically’ over wireless connections, also be prone to the bias of groupthink?
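One classical way to sketch such a mutually supporting bubble (a DeGroot-style averaging model – my choice of formalism, not anything from the research cited above) is to let each agent repeatedly revise its belief towards a trust-weighted average of its in-group:

```python
def update_beliefs(beliefs, trust_matrix):
    """One step: each agent adopts a trust-weighted average of everyone's belief."""
    n = len(beliefs)
    return [sum(trust_matrix[i][j] * beliefs[j] for j in range(n)) for i in range(n)]

# Three agents in a closed 'bubble' who trust only each other, equally.
beliefs = [0.9, 0.8, 0.1]                          # initial confidence in some claim
trust = [[1 / 3, 1 / 3, 1 / 3] for _ in range(3)]  # no weight on outside evidence
for _ in range(20):
    beliefs = update_beliefs(beliefs, trust)
# All three converge on the group average, with no reference to the world at all.
```

The dissenter at 0.1 is simply averaged away: the group reaches confident consensus without any of its members consulting direct evidence, which is exactly the groupthink risk for high-bandwidth robot communities.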
The validation of beliefs
So, there are multiple ways in which we come to know or believe things. As Descartes argued, no knowledge is certain (see ‘It’s Like This’). There are only beliefs, albeit that we can be more sure of some than others, normally by virtue of their consistency with other beliefs. Also, we note that our beliefs are highly vulnerable to error. Any robot operating system that mimics humans will also need to draw on the many different ‘ways of knowing’, including a basic set of assumptions that it takes to be true without necessarily any supporting evidence (its ‘faith’, if you like). There will also need to be many precautions against AIs and robots developing erroneous or otherwise unacceptable beliefs and basing their behaviours on these.
There is a mechanism by which we try to reconcile differences between knowledge coming from different sources, or contradictory knowledge coming from the same source. Most people seem to be able to tolerate a fair degree of contradiction or ambiguity about all sorts of things, including the fundamental questions of life.
Youtube Video, Defining Ambiguity, Corey Anton, October 2009, 9:52 minutes
We can hold and work with knowledge that is inconsistent for long periods of time, but nevertheless there is a drive to seek consistency.
In the description of the human operating system, it would seem that there are many ways in which we establish what we believe and what beliefs we will recruit to the solving of any particular problem. Also, the many sources of knowledge may be inconsistent or contradictory. When we see inconsistencies in others we take this as evidence that we should doubt them and trust them less.
Youtube Video, Why Everyone (Else) is a Hypocrite, The RSA, April 2011, 17:13 minutes
However, there is, at least, a strong tendency in most people, to establish consistency between beliefs (or between beliefs and behaviours), and to account for inconsistencies. The only problem is that we are often prone to achieve consistency by changing sound evidence-based beliefs in preference to the strongly held beliefs based on faith or our need to protect our sense of self-worth.
Youtube Video, Cognitive dissonance (Dissonant & Justified), Brad Wray, April 2011. 4:31 minutes
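A minimal sketch of that resolution rule (the ‘commitment’ weights are invented for illustration): when two beliefs contradict, discard the one held with less commitment – which, as noted above, may well be the evidence-based one.

```python
def resolve(beliefs):
    """Keep, for each claim, the value held with the greatest commitment."""
    kept = {}
    for claim, value, commitment in beliefs:
        if claim not in kept or commitment > kept[claim][1]:
            kept[claim] = (value, commitment)
    return {claim: value for claim, (value, _) in kept.items()}

beliefs = [
    ("i_am_a_safe_driver", True, 0.9),   # protects self-worth: high commitment
    ("i_am_a_safe_driver", False, 0.4),  # based on near-miss evidence: lower
]
resolved = resolve(beliefs)  # the comfortable belief wins
```

Consistency is restored, but at the price of accuracy: nothing in the rule privileges evidence over self-worth.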
From this analysis we can see that building AIs and robots is fraught with problems. The human operating system has evolved to survive, not to be rational or hold high ethical values. If we just blunder into building AIs and robots based on the human operating system we can potentially make all sorts of mistakes and give artificial agents power and autonomy without understanding how their beliefs will develop and the consequences that might have for people.
Fortunately there are some precautions we can take. There are ways of thinking that have been developed to counter the many biases that people have by default. Science is one method that aims to establish the best explanations based on current knowledge and the principle of simplicity. Also, critical thinking has been taught since Aristotle and fortunately many courses have been developed to spread knowledge about how to assess claims and their supporting arguments.
Youtube Video, Critical Thinking: Issues, Claims, Arguments, fayettevillestatenc, January 2011
Sensory perception – The robot’s ‘umwelt’ (what it can sense) may well differ from that of people, even to the extent that the robot can have super-human senses such as infra-red / x-ray vision, super-sensitive hearing and smell etc. We may not even know what its perceptual world is like. It may perceive things we cannot and miss things we find obvious.
Memory – human memory is remarkably fallible. It is not so much a recording, as a reconstruction based on clues, and influenced by previously encountered patterns and current intentions. Given sufficient storage capacity, robots may be able to maintain memories as accurate recording of the states of their sensory inputs. However, they may be subject to similar constraints and biases as people in the way that memories are retrieved and used to drive decision-making and behaviour.
Intuition – if the robot’s pattern-matching capabilities are based on the machine learning of historical training sets then bias will be built into its basic processes. Alternatively, if the robot is left to develop from its own experience then, as with people, great care has to be taken to ensure its early experience will not lead to maladaptive behaviours (i.e. behaviours not acceptable to the people around it).
Reason – through the use of mathematical and logical proofs, robots may well have the capacity to reason with far greater ability than people. They can potentially spot (and resolve) inconsistencies arising out of different ‘ways of knowing’ with far greater adeptness than people. This may create a quite different balance between how robots make decisions and how people do using emotion and reason in tandem.
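As an illustration of what ‘spotting inconsistencies’ could mean computationally (the clause encoding is my own sketch, not a claim about how any real robot works), here is a brute-force satisfiability check over propositional beliefs: a belief set is inconsistent when no possible world satisfies all of it.

```python
from itertools import product

def consistent(clauses, variables):
    """Return True if some assignment of truth values satisfies every clause."""
    for values in product([False, True], repeat=len(variables)):
        world = dict(zip(variables, values))
        if all(any(world[var] == sign for var, sign in clause) for clause in clauses):
            return True
    return False

variables = ["sunny", "night"]
# 'It is sunny', 'it is night', 'not both sunny and night' - no world fits.
clauses = [
    [("sunny", True)],
    [("night", True)],
    [("sunny", False), ("night", False)],  # at least one must be false
]
ok = consistent(clauses, variables)  # False: the belief set is inconsistent
```

Brute force only scales to small belief sets, but the point stands: unlike people, a machine can mechanically certify that a set of beliefs has no consistent interpretation at all.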
Emotion – human emotions are general states that arise in response to both internal and external events and provide both the motivation and the criteria on which decisions are made. In a robot, emerging global states could also potentially act to control decision-making. Both people, and potentially robots, can develop the capacity to explicitly recognize and control these global states (e.g. as when suppressing anger). This ability to reflect, and to cause changes in perspective and behaviour, is a kind of feedback loop that is inherently unpredictable. Not having sufficient understanding to predict how either people or robots will react under particular circumstances creates significant uncertainty.
Imagination – much the same argument about predictability can be made about imagination. Who knows where either a person’s or a robot’s imagination may take them? Chess computers out-performed human players because of their capacity to reason in depth about the outcomes of every move, not because they used pattern-matching based on machine learning (although it seems likely that this approach will have been tried and succeeded by now). Robots can far exceed human capacities to reason through and model future states. A combination of brute force computing and heuristics to guide search, may have far-reaching consequences for a robot’s ability to model the world and predict future outcomes, and may far exceed that of people.
Faith – faith is axiomatic for people and might also be for robots. People can change their faith (especially in a religious, political or ethical sense) but more likely, when confronted with contradictory evidence or sufficient need (i.e. to align with a partner’s faith), people will either ignore the evidence or find reasons to discount it. This can lead to multiple interpretations of the same basic axioms, in the same way as there are many religious denominations and many interpretations of key texts within these. In robots, Asimov’s three laws of robotics would equate to their faith. However, if robots used similar mechanisms to people (e.g. cognitive dissonance) to resolve conflicting beliefs, then in the same way as God’s will can be used to justify any behaviour, a robot may be able to construct a rationale for any behaviour whatever its axioms. There would be no guarantee that a robot would obey its own axiomatic laws.
Communication – The term ‘language’ is better labelled ‘communication’, to make it more apparent that it extends to all methods by which we ‘come to know’ from sources outside ourselves. Since communication of knowledge from others is not direct experience, it is effectively taken on trust. In one sense it is a matter of faith. However, the degree of consistency across external sources (i.e. that a teacher or TV will reinforce what a parent has said etc.), and between what is communicated and what is directly observed (for example, that a person does what he says he will do), will reveal some sources as more believable than others. Also, we appeal to motive as a method of assessing degree of trust. People are notoriously influenced by the norms, opinions and behaviours of their own reference groups. Robots, with their potential for high-bandwidth communication, could in principle behave with the same psychology of the crowd as humans, only much more rapidly and ‘single-mindedly’. It is not difficult to see how the Star Trek image of the Borg, acting as one consciousness, could come about.
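That consistency-based assessment of sources can be sketched as a simple trust update (the learning rate, neutral prior and source names are invented): nudge trust in a source up when its report matches direct observation, and down when it does not.

```python
def update_trust(trust, source, said, observed, rate=0.2):
    """Move trust towards 1 on a confirmed report, towards 0 on a contradicted one."""
    prior = trust.get(source, 0.5)            # start from a neutral prior
    match = 1.0 if said == observed else 0.0
    trust[source] = prior + rate * (match - prior)
    return trust[source]

trust = {}
update_trust(trust, "teacher", said="sunny", observed="sunny")     # confirmed
update_trust(trust, "stranger", said="raining", observed="sunny")  # contradicted
# Over repeated exchanges the teacher becomes the more believable source.
```

Note what the rule cannot do: if a claim is never directly observable (faith, again), trust in its source can only be calibrated indirectly, through other claims that are.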
Other Ways of Knowing
It is worth considering just a few of the many other ‘ways of knowing’ not considered above, partly because some of these might help mitigate some of the risks of human ‘ways of knowing’.
Science – Science has evolved methods that are deliberately designed to create impartial, robust and consistent models and explanations of the world. If we want robots to create accurate models, then an appeal to scientific method is one approach. In science, patterns are observed, hypotheses are formulated to account for these patterns, and the hypotheses are then tested as impartially as possible. Science also seeks consistency by reconciling disparate findings into coherent overall theories. While we may want robots to use scientific methods in their reasoning, we may want to ensure that robots do not perform experiments in the real world simply for the sake of making their own discoveries. An image of concentration camp scientists comes to mind. Nevertheless, in many small ways robots will need to be empirical rather than theoretical in order to operate at all.
Argument – Just like people, robots of any complexity will encounter ambiguity and inconsistencies. These will be inconsistencies between expectation and actuality, between data from one way of knowing and another (e.g. between reason and faith, or between perception and imagination etc.), or between a current state and a goal state. The mechanisms by which these inconsistencies are resolved will be crucial. The formulation of claims; the identification, gathering and marshalling of evidence; the assessment of the relevance of evidence; and the weighing of the evidence, are all processes akin to science but can cut across many ‘ways of knowing’ as an aid to decision making. Also, this approach may help provide explanations of a robot’s behaviour that would be understandable to people and thereby help bridge the gap between opaque mechanisms, such as pattern matching, and what people will accept as valid explanations.
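The marshalling and weighing of evidence described above can be given a minimal computational form (the strength and relevance numbers are invented placeholders, not a proposed standard): each item of evidence contributes for or against a claim in proportion to its strength and its assessed relevance.

```python
def weigh_claim(evidence):
    """Combine relevance-weighted evidence for and against a claim."""
    score = sum(strength * relevance * (1 if supports else -1)
                for strength, relevance, supports in evidence)
    return ("accept" if score > 0 else "reject"), score

# Invented evidence items: (strength, relevance, supports-the-claim?)
evidence = [
    (0.9, 1.0, True),   # strong, directly relevant, supporting
    (0.6, 0.5, False),  # moderate, only partly relevant, opposing
]
verdict, score = weigh_claim(evidence)
```

A structure like this is also inspectable: the itemised evidence list is exactly the kind of trace that could bridge between opaque pattern matching and an explanation people will accept.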
Meditation – Meditation is a place-holder for the many ways in which altered states of consciousness can lead to new knowledge. Dreaming, for example, is another altered state that may lead to new hypotheses and models based on novel combinations of elements that would not otherwise have been brought together. People certainly have these altered states of consciousness. Could there be an equivalent in the robot, and would we want robots to indulge in such extreme imaginative states when we would have no idea what they might consist of? This is not necessarily to attribute consciousness to robots, which is a separate, and probably metaphysical, question.
Theory of mind – For any autonomous agent with its own beliefs and intentions, including a robot, it is crucial to its survival to have some notion of the intentions of other autonomous agents, especially when they might be a direct threat to survival. People have sophisticated but highly biased and error-prone mechanisms for modelling the intentions of others. These mechanisms are particularly alert for any sign of threat and, as a proven mechanism, tend to assume threat even when none is present. The people that did not do this died out. Work in robotics already recognizes that, to be useful, robots have to cooperate with people and this requires some modelling of their intentions. As this last video illustrates, the modelling of others’ intentions is inherently complex because it is recursive.
YouTube Video, Comprehending Orders of Intentionality (for R. D. Laing), Corey Anton, September 2014, 31:31 minutes
If there is a conclusion to this analysis of ‘ways of knowing’ it is that creating intelligent, autonomous mechanisms, such as robots and AIs, will have inherently unpredictable consequences, and that, because the human operating system is so highly error-prone and subject to bias, we should not necessarily build them in our own image.
How we manage the demands on us has been a preoccupation of mine since the day I realised that a lot of what runs through my own mind can be explained in terms of what psychologists call the management of ‘cognitive load’ or ‘mental workload’. We all, to some extent, ‘manage’ what we think about, but we rarely reflect on exactly how we do it.
Sometimes there are so many things that need to be thought about (and acted upon) that it is overwhelming, and some management of attention is needed, just to get through the day and maintain performance. If you need convincing that workload can affect performance then consider the research on distractions when driving. (A more comprehensive analysis on ‘the distracted mind’ can be found at the end of this posting).
YouTube Video, The distracted mind, TEDPartners, December 2013, 1:39 minutes
At other times you find yourself twiddling your thumbs, as if waiting for something to happen, or a thought to occur that will trigger action. We sometimes cease to be in the grip of circumstances and our minds can run free.
If you keep asking the question ‘why?’ about anything that you do, you eventually arrive at a small number of answers. If we leave aside metaphysical answers like ‘because it is the will of God’ for the moment, these are generally ‘to keep safe’ or ‘to be efficient’. On the way to these fundamentals, and intimately related to them, is ‘to optimize cognitive load’. Not to do so compromises both safety and efficiency.
To be overwhelmed with the need to act and, therefore, the thinking this necessitates in the evaluation of choices that are the precursors of action, leads to anxiety and anxiety interferes with the capacity to make good choices. To be under-whelmed leads to boredom and lethargy, a lack of caring about choice and the tendency to procrastinate.
It seems that to perform well we need an optimal level of arousal or stimulation.
Youtube Video, Performance and Arousal – Part 1of 3: Inverted U Hypothesis, HumberEDU, January 2015, 5:05 minutes
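The inverted-U relationship can be sketched as a simple quadratic (the curve shape and the 0..1 scales are illustrative, not an empirical fit to any data):

```python
def performance(arousal, optimum=0.5):
    """Inverted-U sketch: performance peaks at moderate arousal (0..1 scale)."""
    return max(0.0, 1.0 - ((arousal - optimum) / optimum) ** 2)

bored = performance(0.1)      # underwhelmed: poor performance
engaged = performance(0.5)    # optimal arousal: peak performance
panicked = performance(0.95)  # overwhelmed: performance collapses again
```

Both tails of the curve matter for what follows: being persistently stuck at either end, not just the overloaded one, is what causes trouble over time.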
In the longer term, to be ‘psychologically healthy’ we need optimal levels of arousal ‘on average’ over a period of time.
Being constantly overwhelmed leads from stress to anxiety and onwards to depression. It can even lead to an early death.
TED Video, ‘The science of cells that never get old’ – Elizabeth Blackburn, TED, April 2017, 18:46 minutes
Being constantly underwhelmed also leads to depression via a different route. How much load we can take depends on our resources – both cognitive and otherwise. We can draw on reserves of energy and a stock of strategies, such as prioritizing, for managing mental workload. If the demands on us are too great and we have some external resources, like somebody else that can provide advice or direction, or the money to pay for it, then we can use those to lessen the load. Whenever we draw on our own capacities and resources we can both enhance and deplete them. Like exercising a muscle, regular and moderate use can strengthen but prolonged and heavy use will tire or deplete them. When we draw on external resources, like money or favours, their use tends to deplete them.
Measurement of Load
So how can we measure the amount of load a person is carrying? This is going to be tricky, as some people have more resource and capacity (both internal and external) than others, so observing their activity may not be a very accurate measure of load. If you are very practised or skilled at something, it is much easier to do than if you are learning it for the first time. Also, some people are simply less bothered than others about whether they achieve what they have to do (or want to do). Even the same person can ‘re-calibrate’; for example, if pressure of work is causing stress, they can re-assess how much it matters that they get the job done. Some capacities replenish with rest, so something may be easy at one time but harder, say, at the end of a long day.
In fact, there are so many factors, some interacting, that any measure of, say, stress – whether from chemicals in the blood or the amount of sweating on the skin – is difficult to attribute to a particular cause.
The capacity of our thinking processes is limited. We can really only focus on one difficult task at a time. We even stop doing whatever we were doing (even an automatic task like walking) when formulating the response to a difficult question.
BBC Radio 4, The Human Zoo, Series 1 Episode 1, First Broadcast about 2014, 28 minutes
We can use our thinking capacity to further our intentions but we so often get caught up in the distractions of everyday life that none is left for addressing the important issues.
The Personal Agenda
Another way of looking at it is to consider each person’s agenda and how they deal with it. It is as if you asked a person to write down a ‘to do’ list with everything they could think of on it. We all do this from time to time, especially when there is too much going on in our heads and we need to set everything out and see what is important.
I will construct such a list for myself now:
- Continue with what I am writing
- Get ready for my friend who is coming for coffee
- Figure out how to pay my bills this month
- Check with my son that he has chosen his GCSE options
- Tell my other friends what time I will meet them tonight
- Check that everything is OK with my house at home (as I am away at the moment)
Each of these agenda items is a demand on my attention. It is as if each intention competes with the others to get my focus. They each shout their demands, and whichever is shouting loudest at the time, wins. Maybe not for long. If I realise that I can put something off until later, it can quickly be dismissed and slip back down the agenda.
But the above list is of a particular type, concerned only with a few short-term goals – the things that are on my mind today. I could add:
- Progress my project to landscape the garden
- Think through how to handle a difficult relationship
Or some even longer-term, more aspirational and less well-defined goals:
- Work out how I will help starving children in Africa
- Maintain and enhance my wellbeing
The extended agenda still misses out a whole host of things that ‘go without saying’ such as looking after my children, activities that are defined during the course of going to work, making sure I eat and sleep regularly, and all tasks that are performed on ‘autopilot’ such as changing gear when driving. It also misses out things that I would do ‘if the opportunity arose’ but which I do not explicitly set out to do.
What characterizes the items that form the agenda? They are all intentions of one sort or another but they can be classified in various ways. Many concern obligations – either to family, friends, employers or society more generally. Some are entirely self-motivated. Some have significant consequences if they are not acted upon, especially the obligations, whereas others matter less. Some need immediate attention while others are not so time critical. Some are easy to implement while others require some considerable training, preparation or the sustained execution of a detailed plan. Some are one-offs while others are recurring, either regularly or in response to circumstances. This variation tends to mask the common characteristic that they are all drivers of thought and behaviour.
Intentions bridge between and include both motives and goals. Generally we can think of motives as the inputs and goals as the outputs (although either can be either). Both the motives and the goals of an intention can be vague. In fact, an intention can exist without you knowing either why or what it is to achieve. You can copy somebody else’s intention in ignorance of motive and goal. In the sense of intention as only a pre-disposition to act, you need not be aware of an intention. Often you don’t know how you will act until the occasion demands.
Given that there are perhaps hundreds or even thousands of intentions, large or small, all subsisting in the same individual, what determines what a person does at any particular point in time? It all depends on priority and circumstance. Priority pushes items towards the top of the list, and circumstance often determines when and how they drive thought and behaviour.
Priority itself is not a simple idea. There are many factors affecting priority including emotion, certainty of outcome and timing. These factors tend to interact. I may feel strongly that I must help starving children in Africa and although I know that every moment I delay may mean a life lost, I cannot be certain that my actions will make a difference or that I may think of a more effective plan at a later time. When I have just seen a programme on TV about Africa I may be highly motivated, but a day later, I may have become distracted by the need to deal with what now appear to be more urgent issues where I can be more certain of the outcome.
Priority and Emotion
It is as if my emotional reaction to the current content of my experience is constantly jiggling the priorities on my agenda of intentions. As the time approaches for my friend to arrive I start to feel increasingly uncomfortable that I have not cleared up. Events may occur that ‘grab’ my attention and shoot priorities to the top of the agenda. If I am surprised by something I turn my attention to it. Similarly, if I feel threatened. Whereas, when I am relaxed my mind can wander to matters inside my head – perhaps my personal agenda. If I am depressed my overall capacity to attend to and progress intentions is reduced.
Emotions steer our attention and so determine priority; attention is focused on the highest-priority item. Emotion, priority and attention are intimately related. Changing emotions continuously wash across intentions, reordering their priority – they modulate the priorities of the intentions of the now.
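This continuous jiggling of priorities can be sketched in a few lines. The agenda items and weights below are illustrative assumptions, chosen to echo the examples earlier in the post:

```python
# Emotions act as weights that continuously re-order an agenda of
# intentions; attention goes to whichever currently ranks highest.

agenda = {
    "tidy up before friend arrives": 0.4,
    "continue writing": 0.6,
    "pay the bills": 0.3,
}

def modulate(agenda, target, boost):
    """An emotional reaction (e.g. rising discomfort as the visit
    approaches) boosts the priority of the intention it attaches to."""
    updated = dict(agenda)
    updated[target] += boost
    return updated

def attend(agenda):
    # Attention is focused on the highest-priority item.
    return max(agenda, key=agenda.get)

print(attend(agenda))  # continue writing
agenda = modulate(agenda, "tidy up before friend arrives", 0.5)
print(attend(agenda))  # tidy up before friend arrives
```

Nothing is ever removed from the agenda here; an item that loses its emotional boost simply slips back down, just as described above.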
Emotion provides the motive force that drives attention to whatever it is that you are attending to. If you are working out something complicated in your head, it is the emotion associated with wanting to know the answer that provides the motive force to turn the cogs. This applies even when the intention is to think through something rationally. When in the flow of rational thought (say in doing a mental arithmetic problem) it is emotion that motivates it.
There is a host of literature on emotional memory (i.e. how emotions, especially traumatic ones, are laid down in memory). There is also a large literature on how memories may be re-constructed, often inaccurately, rather than retrieved. The following illustrates both emotional memory of traumatic events and the frequent inaccuracies of re-construction:
TED Video, Emotional Memory: Shawn Hayes at TEDxSacramento 2012, TEDx Talks, March 2013, 8:10 minutes
It is well established that the context in which a memory is laid down affects the circumstances in which the memory is retrieved. For example, being in a particular place, or experiencing a particular smell or taste, may trigger the retrieval of memories specific to that place or smell. The context supplies the cue or key to ‘unlocking’ the memory. However, there is comparatively little literature on how emotions trigger memories, although there has been research on ‘mood-dependent memory’ (MDM), e.g.
Eric Eich, Dawn Macaulay and Lee Ryan (1994), ‘Mood Dependent Memory for Events of the Personal Past’, Journal of Experimental Psychology: General, Vol. 123, No. 2, 201–215
It seems plausible that emotions act as keys or triggers that prime particular memories, thoughts and intentions. In fact, the research indicates that mood dependent memory is more salient in relation to internal phenomena (e.g. thoughts) than external ones (such as place). Sadness steers my attention to sad things and the intentions I associate with the object(s) of my sadness. Indifference will steer my attention away from whatever I am indifferent about and release attention for something more emotionally charged. Love and hate might equally steer attention to its objects. Injustice will steer attention to ascertaining blame. The task of identifying who or what to blame can be as much an intention as any other.
Priority and Time – The Significance of Now
Intentions formulated and executable in ‘the now’, assume greater priority than those formulated in the past, or those that may only have consequences in the future.
The now is of special significance because that is where attention is focused. Past intentions slip down the list like old messages in an email inbox. You focus on the latest delivery – the now.
The special significance of ‘the now’ is increasingly recognised, not just as a fact of life but as something to become more conscious of and to savour.
Youtube Video, The Enjoyment of Being with Eckhart Tolle author of THE POWER OF NOW, New World Library, July 2013, 4:35 minutes
Indeed the whole movement of mindfulness, with its focus on ‘the now’ and conscious experience, has grown up as an approach to the management of stress and the development of mental strategies.
Youtube Video, The Science of Mindfulness, Professor Mark Williams, OxfordMindfulness, December 2011, 3:34 minutes
Priority and Time in Agenda Management
If I am angry now then my propensity will be high to act on that anger now if I am able to. Tomorrow I will have cooled off and other intentions will have assumed priority. Tomorrow I may not have ready access to the object of my anger. On the other hand, if tomorrow an opportunity arises by chance (without me having created it), then perhaps I will seize it and act on the anger then. As in crime, we are driven by the motive, the means and the opportunity.
Many intentions recur – the intentions to eat, drink, sleep and seek social interaction all have a cyclical pattern and act to maintain a steady state, or a state that falls within certain boundaries (homeostasis). You may also need to revive an old intention whether or not it recurs cyclically. Revival of an intention pushes it back up the list (towards the now), and when some homeostatic system (like hunger and eating) gets out of balance, the corresponding recurring intention is pushed back up the list.
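A homeostatic drive of this kind can be sketched as follows. The drive name, growth rate, set point and band are all illustrative assumptions:

```python
# Illustrative homeostatic sketch: a recurring drive (here "hunger")
# grows over time; once it drifts outside its comfortable band it
# pushes the matching intention back up the agenda.

def update_hunger(hunger, hours_since_meal, set_point=0.2, band=0.3):
    hunger = hunger + 0.1 * hours_since_meal
    # The intention to eat becomes urgent only when the drive leaves
    # the band around its set point.
    urgent = abs(hunger - set_point) > band
    return hunger, urgent

hunger, urgent = update_hunger(0.2, hours_since_meal=1)
print(urgent)  # False: still within the comfortable band
hunger, urgent = update_hunger(hunger, hours_since_meal=4)
print(urgent)  # True: the intention to eat is revived
```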
Intentions that impact the near future also take priority over intentions that affect the far future. So it is easier to make a cup of tea than to sit down and write your will (except when death is in the near future). We discount the future steeply: subjectively, 1 minute, 1 hour, 1 day, 1 week, 1 month, 1 season and 1 year can feel equally far apart, as if spaced on a logarithmic scale, and what happens in the next minute can matter as much as what will happen over the next year.
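The steep discounting of the future can be made concrete with a sketch. This assumes a simple power-law (hyperbolic-style) discount with an arbitrary illustrative rate, not a measured one:

```python
def discounted_weight(delay_hours, rate=0.5):
    """Power-law discount: the weight of an outcome falls off steeply
    with delay, so log-spaced horizons (a minute, an hour, a day, ...)
    end up feeling roughly equally far apart."""
    return (1 + delay_hours) ** -rate

# Weights shrink by roughly similar ratios at each log-spaced horizon.
for label, hours in [("1 minute", 1 / 60), ("1 hour", 1),
                     ("1 day", 24), ("1 week", 24 * 7),
                     ("1 year", 24 * 365)]:
    print(f"{label:>8}: {discounted_weight(hours):.3f}")
```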
However, from the point of view of establishing the principles of a control mechanism that determines our actions at any point there are other complications and considerations. Often our intentions are incompatible or compete with each other. I cannot vent my anger and fulfil an intention not to hurt anybody. I cannot eat and stay thin. I cannot both go to work tomorrow and stay home to look after my sick child. Therefore, some intentions inhibit others leading to a further jiggling of the priorities.
Prioritising what is Easy
A major determinant of what we actually do is what is easiest to do. So actions that are well learned or matters of habit get done without a second thought but intentions that are complicated or difficult to achieve are constantly pushed down the stack, however important they are. Easy actions consume less resource. If they are sufficiently difficult and also sufficiently important we become pre-occupied by thinking about them but are unable to act.
Daniel Kahneman, in his book ‘Thinking, Fast and Slow’, sets out much of the experimental evidence showing how in thought we tend towards the easy options.
Youtube Video, Cognitive ease, confirmation bias, endowment effect – Thinking, Fast and Slow (Part 2), Fight Mediocrity, June 2015, 5:50 minutes
How often do we get up in the morning with the firm resolve to do a particular thing and then become distracted during the day by what seem to be more immediate demands or attractive alternatives? It is as if our intentions are being constantly pushed around by circumstances and our reactions to them and all that gets done are the easy things – where by chance the motive, the means, and the opportunity all fortuitously concur in time.
Staying on task is difficult. It requires a single-minded focus of attention and a resistance to distraction. It is sometimes said that ‘focus’ is what differentiates successful people from others, and while that may be true in the achievement of a particular goal, it is at the expense of paying attention to other competing intentions.
Implications for The Human Operating System
The above account shows how, as people, we interleave multiple tasks in real time, partly in response to what is going on around us and partly in response to our internal agenda. We do this with limited resources, depleting and restoring capacities as we go. What differentiates us from conventional computers is the way in which priorities are continuously and globally changing, so that attention is re-directed in real time to high-priority items (such as threats and the unexpected). Part of this depends on our ability to retrieve relevant memories cued by the external world and our internal states, to reflect on (and inhibit) our own thinking processes, and to run through and evaluate mental simulations of possible futures.
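Pulling the threads together, one scheduling cycle of such a control mechanism might look like the hypothetical sketch below. The intentions, priorities, costs and capacity values are invented for illustration; the point is that an item needs motive (priority), means (capacity) and opportunity before it wins attention:

```python
def step(intentions, capacity):
    """One scheduling cycle: pick the highest-priority intention whose
    opportunity is present and whose cost fits remaining capacity."""
    feasible = [i for i in intentions
                if i["opportunity"] and i["cost"] <= capacity]
    if not feasible:
        return None, capacity  # preoccupied, but unable to act
    chosen = max(feasible, key=lambda i: i["priority"])
    # Acting depletes capacity, as discussed earlier.
    return chosen["name"], capacity - chosen["cost"]

intentions = [
    {"name": "write will",   "priority": 0.9, "cost": 0.8, "opportunity": False},
    {"name": "make tea",     "priority": 0.2, "cost": 0.1, "opportunity": True},
    {"name": "answer email", "priority": 0.5, "cost": 0.3, "opportunity": True},
]

name, capacity = step(intentions, capacity=0.5)
print(name)  # answer email
```

Note how ‘write will’ never gets done despite having the highest priority: without the opportunity, the easier, feasible items win, just as the ‘Prioritising what is Easy’ section describes.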
- In order to perform effectively we need to manage the demands on us
- Having too much, or too little, to do and think about can lead to stress in the short term and, if it goes on for too long, to depression.
- We have limited resources and capacities which can become depleted but that can also be restored (e.g. with rest)
- Measuring the amount of load a person is under is not simple as people have different resources, abilities and capacities
- Whether or not we write it down or say it, we all have an implicit list of intentions
- We prioritise the items on this list in a variety of ways
- Circumstances, our emotional reactions and timing are all crucial factors in determining priority
- We also tend to prioritise things that are easy to do (i.e. do not use up effort, time or other resources)
- Being able to manage priorities and interleave our intentions in response to circumstances and opportunity is a key aspect of the human operating system
This Blog Post: ‘Human Operating System 2 – Managing Demands’ introduces how we deal with the complex web of intentions (our own and those externally imposed) that form part of our complex daily lives
Next Up: ‘Policy Regulates Behaviour’ shows that not all intentions are equal. Some intentions regulate others, in both the individual and society.
Youtube Video, The Distracted Mind, UCI Open, April 2013, 1:12:37 hours