

Representations of reality 1

What is the relationship between the world and our mental representation of it? What is the representation that we use to model the world, and to run through alternative futures in our minds, enabling us to anticipate and predict what might happen? How do we ‘mind the gap’ between our expectations and our experience and work out how to fill our unmet needs? Things are not always what we expect.

YouTube Video, 10 Amazing Illusions – Richard Wiseman, Quirkology, November 2012, 2:36 minutes

Previous blogs considered how being oriented, and having purpose, formed the basis for having control, and how, when needs go unmet and control is lacking, wellbeing suffers. Orientation was seen as a mental map or model that allows us to navigate around our knowledge and thoughts, to know where we are going and to plan the necessary steps on the way.

Representation is Crucial

I want to know whether it is shorter to go from B to D via A or C. I am told that A is 80 miles west of B. B is 33 miles south of C. C is 95 miles south east of D. D is 83 miles north of A. A is 103 miles south west of C. What’s the answer?

It is very difficult to figure this out without drawing a map or diagram. With a map the answer is visually obvious. Even knowing that A is Swindon, B is London, C is Stevenage, and D is Birmingham doesn’t help much unless you have a good knowledge of UK geography and can see the problem in your ‘mind’s eye’.
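The point about representation can be made concrete. The sketch below (a simplification: it treats the compass bearings as exact vectors on a flat plane and uses a consistent subset of the stated distances, which, being real-world approximations, do not all agree exactly) places the four towns as coordinates and compares the two routes:

```python
import math

# Place the towns on a flat plane from the stated bearings.
# x grows east, y grows north; units are miles.
B = (0.0, 0.0)
A = (-80.0, 0.0)           # A is 80 miles west of B
C = (0.0, 33.0)            # B is 33 miles south of C
d = 95 / math.sqrt(2)      # C is 95 miles south-east of D,
D = (C[0] - d, C[1] + d)   # so D lies 95 miles north-west of C

def dist(p, q):
    """Straight-line distance between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

via_A = dist(B, A) + dist(A, D)
via_C = dist(B, C) + dist(C, D)
print(f"B to D via A: {via_A:.0f} miles; via C: {via_C:.0f} miles")
```

Once the verbal puzzle is re-represented as coordinates, the comparison becomes a trivial calculation (on this construction the route via C comes out shorter), which is exactly the ‘drawing a map’ move described above.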

But even problems like ‘will I be happier taking a boring but highly paid job at the bank or a more challenging teaching job?’ are difficult to think about without employing some spatial reasoning, perhaps because they can involve some degree of quantitative comparison (across several dimensions – happiness, financial reward, degree of challenge etc.).
How you represent a problem is crucial to whether it is easy or difficult to solve.
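The same spatial-quantitative move can be sketched for the job dilemma. The scores and weights below are entirely hypothetical, chosen only to illustrate how the comparison becomes tractable once each option is represented as a point in a small number of dimensions:

```python
# Hypothetical scores (0-10) for each option on each dimension,
# and hypothetical weights reflecting how much each dimension matters.
weights = {"happiness": 0.5, "pay": 0.3, "challenge": 0.2}

jobs = {
    "bank":     {"happiness": 3, "pay": 9, "challenge": 2},
    "teaching": {"happiness": 7, "pay": 4, "challenge": 8},
}

def score(job):
    """Weighted sum across the dimensions of comparison."""
    return sum(weights[dim] * jobs[job][dim] for dim in weights)

for job in jobs:
    print(job, round(score(job), 1))
```

The numbers themselves are not the point; the point is that an otherwise fuzzy verbal question has been given a representation in which the options can be directly compared.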

The ‘framing’ of a problem, and the mindset you bring to it, considerably influence which kinds of solutions are easy to find and which are near impossible. If we think the sun goes round the earth then we will have considerably more difficulty predicting the positions of the planets than if we think the earth goes round the sun. If we think somebody is driven by a depressive disease when in fact their circumstances are appalling, we may give them medication rather than practical help. Having a suitable representation and mindset is crucial to enabling control.

The wonderful thing is that people can re-invent representations and make difficult problems easy. However, this often takes effort and because we are lazy, for the most part we do not bother and continue to do things in the same old way – until, that is, we get a surprise or shock that makes us think again.

Language and Thought

So familiar and ingrained is the notion of orientation and navigation that spatial metaphors are rife in language – ‘I don’t know which way to turn’, ‘she’s a distant relative but a close friend’, ‘house prices are climbing’, ‘I take a different position’ etc. However, language may only be a symptom or product of our thoughts and not the mental representation itself.

Philosophers and linguists have long speculated on the relationship between language and thought. Is it possible to think about certain things without the aid of linguistic hooks to hang the thoughts on?

Steven Pinker considers language as a window on how we think. Our choice and use of different linguistic constructions reveal much of the subtlety and nuance of our thoughts and intentions. How we phrase a sentence is as much to do with allowing space for interpretation, negotiation and the management of social roles as it is to do with the ‘face value’ communication of information.

TED Video, Steven Pinker: What our Language Habits reveal, TED, September 2007, 17:41 minutes

Pinker also differentiates thought and language, demonstrating that it is possible to have thought without language and that we think first and then put language to the thoughts in order to communicate. For example, babies and animals are able to make sense of the world without being able to put it into language. We translate between different languages by reference to underlying meaning. Pinker uses the term ‘mentalese’ for the ‘language’ of thought. We often think perceptually – in images, sounds and probably our other senses too. We can also think non-linguistically in terms of propositions and abstract notions. This is not to say that language and thought are not intimately bound up – what one person says influences what another person thinks. However, the fact that words can be invented to convey new concepts suggests that the thought can come first and the language is created as a tool to capture and convey it.

TED Video, Stephen Pinker: Language and Consciousness, Part 1 Complete: Thinking Allowed w/ J. Mishlove , ThinkingAllowedTV, October 2012, 27:17 minutes

But just as language reflects and may constrain thought, it also facilitates it and allows us to see things from different perspectives without very much effort. In general, metaphor allows us to think of one concept in terms of another. In so doing it provides an opportunity to compare the metaphor to the characteristics of the thing we are referring to – ‘shall I compare thee to a summer’s day?’. A summer’s day is bright, care-free, timeless and so forth. Metaphor opens up the possibility of attributing new characteristics that were not at first considered. It releases us from literal thought and takes us into the realm of possibility and new perspectives.

TED Video, James Geary, Metaphorically Speaking, TED, December 2009, 10:44 minutes


Mental Models

Despite the importance of language as a mechanism for both capturing and shaping thought, it is not the only way that thought is represented. In fact it is a comparatively high-level and symbolic form of representation. Thoughts, for example, can be driven by perception, and to illustrate this it is useful to consider perceptual illusions. The following video shows a strong visual illusion that people would describe one way in language when, in fact, it can be revealed to be something else.

YouTube Video, Illusion and Mental Models, What are the odds, March 2014, 2:36 minutes

This video also illustrates the interaction between prior knowledge and the interpretation of what you perceive. It also mentions the tendency, when information is ambiguous or difficult to deal with, to ignore it or to settle for the easiest (most available) explanation.

Mental representations are often referred to as mental models. Here’s one take on what they are:

Youtube Video, Mental Models, kfw., March 2011, 3:59 minutes

It turns out that much of the most advanced work on mental models has been in the applied area of user interface design. Understanding how a user thinks or models some aspect of the world is the key to the difference between producing a slick, usable design and a design that is unfathomable, frustrating and leads to making slips and mistakes.

YouTube Video, Lecture 4.2: Mental Models, OpenCourseOnline, June 2012, 15:28 minutes

Mental models apply to people’s behaviour (output) in much the same way as they apply to sensory input.

Youtube Video, Visualization – A Mental Skill to learn, Wally Kozak, May 2010, 4:05 minutes

In the same way that an expert learns to ‘see’ patterns quickly and easily (e.g. in recognising a disease), they also learn skilled behaviours (e.g. how to perform an examination or play a game of tennis) by developing an appropriate mental representation. It is possible to apply expert knowledge in, for example, diagnosis or decision making without recourse to language or deliberate reasoning. Once we have attained a high degree of expertise in some subject, much ‘problem solving’ becomes recognition rather than reasoning.

YouTube Video, How do Medical Experts Think?, MjSylvesterMD, June 2013, 4:44 minutes

So mental representations apply at the level of senses and behaviours as well as at the higher levels of problem solving. We can distinguish between ‘automatic’, relatively effort-free thinking (system 1 thinking in Kahneman’s terms) and conscious problem solving thought (system 2 thinking).

System 1 thinking is intuitive and can be the product of sustained practice and mastery. Most perceptual and motor skills are learned in infancy and practised to the point of mastery without our explicitly realising it. In language, a child’s intuitive understanding of grammar (e.g. that you add an s to make a plural) is automatic. System 1 thinking can apply to seemingly simple skills, like catching a ball, or to something complex, like diagnosing a patient’s illness. A skilled general practitioner often does not have to think about a diagnosis. It is so familiar that it is a kind of pattern recognition. With the automated mechanisms of system 1 thinking you just know how to do it or just see it. It requires no effort.

System 2 thinking, by contrast, requires effort and resource. It is the type of thinking that requires conscious navigation across the territory of one’s knowledge and beliefs. Because this consumes limited resources, it involves avoiding the pitfalls, locating the easier downhill slopes and climbing only when absolutely necessary on the way to the destination. It is as if some sort of central cognitive control is needed to allocate attention to the most productive paths.

Computational Approaches

Although, to my knowledge, Daniel Kahneman does not reference it, the mechanism whereby effortful system 2 problem solving becomes automatic system 1 thinking was described, and then thoroughly modelled, back in the 1970s and 80s. The process, called ‘universal sub-goaling and chunking’, accounts well for empirical data on how skills are learned and improve with practice.

http://www.springer.com/computer/ai/book/978-0-89838-213-6

This theoretical model gave rise to the development of Artificial Intelligence (AI) software called ‘Soar’ to model a general problem solving mechanism.

http://en.wikipedia.org/wiki/Soar_(cognitive_architecture)

According to this mechanism, when confronted with a problem, a search is performed of the ‘problem space’ for a solution. If a solution is not found then the problem is broken down into sub-tasks and a variety of standard methods are used to manage the search for solutions to these. If solutions to sub-goals cannot be found then deeper level sub-goals can be spawned. Once a solution, or path to a solution, is found (at any level in the goal hierarchy) it is stored (or chunked) so that when confronted with the same problem next time it is available without the need for further problem solving or search.

In this way, novel problems can be tackled, and as solutions are found they effectively become automated and easy to access using minimal resource.
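The mechanism can be sketched in a few lines of code. This is a toy illustration of the sub-goaling-plus-chunking idea, not the real Soar implementation: search decomposes into sub-goals, and any solved sub-goal is ‘chunked’ (cached) so that next time it is recognised rather than re-derived.

```python
chunks = {}  # (state, goal) -> solution path learned from earlier search

def solve(state, goal, operators, visited=None):
    """Depth-first search with sub-goaling; solved (sub-)goals are chunked."""
    if (state, goal) in chunks:            # recognition: no search needed
        return chunks[(state, goal)]
    if state == goal:
        return [state]
    visited = (visited or set()) | {state}
    for nxt in operators.get(state, []):   # try each applicable operator
        if nxt in visited:
            continue
        sub = solve(nxt, goal, operators, visited)  # spawn a sub-goal
        if sub is not None:
            path = [state] + sub
            chunks[(state, goal)] = path   # chunk the result for next time
            return path
    return None                            # impasse: no solution found
```

The first call for a given problem does genuine search; afterwards both the problem and every sub-goal solved along the way sit in the chunk store, so repeat encounters are answered by lookup alone – the toy analogue of system 2 reasoning becoming system 1 recognition.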

The ambitions of the Soar project, which continues at the University of Michigan, are to ‘support all the capabilities of an intelligent agent’. Project funding comes from a variety of sources, including the US Department of Defense (DARPA).

http://soar.eecs.umich.edu

The Soar architecture is covered in the following Open Courseware Module from MIT.

Youtube Video, 19. Architectures: GPS, SOAR, Subsumption, Society of Mind, MIT OpenCourseWare, January 2014, 40:05 minutes

Whatever the state of the implementation, the Soar cognitive architecture is in close alignment with much else that is described here. It provided insight into the following:


  • How system 1 and system 2 type thinking can be integrated into a single framework
  • How ‘navigation’ around what is currently believed or known might be managed
  • How learning occurs and an explanation for the ‘power law of practice’ (the well established and consistent relationship between practice and skill development over a wide range of tasks)
  • How it is possible to create solutions out of fragmentary and incomplete knowledge
  • How the ‘availability principle’ described by Kahneman can operate to perform quick fixes and conserve resources
  • What a top-down central cognitive control mechanism might look like
  • The possible ways in which disruption to the normal operation of this high level control mechanism might help explain conditions such as autism and dementia
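For reference, the ‘power law of practice’ mentioned in the list above is commonly written in the following standard form (a general formulation from the skill-acquisition literature, not specific to Soar):

```latex
% Time to perform a task after N practice trials:
%   A     - asymptotic (best attainable) time
%   B     - initial slowness above that floor
%   alpha - learning rate
T(N) = A + B \, N^{-\alpha}
```

The striking empirical finding is that this single relationship between practice and speed holds across a very wide range of tasks, which is what any general learning mechanism such as chunking needs to reproduce.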


This blog, ‘The Representation of Reality Enables Control – Part 1’, looked at language and thought, mental models and computational approaches to how the mind represents what it knows about the world (and itself).

Part 2 contrasts thinking in words with thinking in pictures, looking first at how evidence from brain studies informs the debate, and then concluding that all these approaches – linguistic, psychological, computational, neurophysiological and phenomenological – are addressing much the same set of phenomena from different perspectives. Can freedom be defined in terms of our ability to reflect on our own perceptions and thoughts?



