
Category Archives: Design


– Next Stop, Biological AI

This truly startling talk by Professor Michael Levin, from the Allen Discovery Center at Tufts University, has implications for everything – not just regenerative medicine.

It is no exaggeration to describe the work done in Levin’s lab as Frankensteinian. This is not a criticism, just an inevitable observation.

Levin describes biochemical interventions that can affect electrical transmission at the inter-cellular level in a range of organisms. These interventions change the parameters for regeneration of body parts and reveal that a non-neural regenerative memory can exist throughout an organism. From the earliest evolution of ‘primitive’ life forms, anatomical decision-making has been taking place in every cell and at every level of body structure.

Levin gives a highly informed factual account of findings in bioelectrical computation. Although he only touches on the implications, these techniques potentially lead to a technology that can design new life-forms and biologically-based computation devices.

It seems incredible that research results like these are possible now. It may be years or decades before this work translates into medical interventions for humans, or is applied to creating biologically based artificial intelligence, but the vision is clear.

To me, more frightening than the content of this talk is the Facebook logo hanging over Levin’s head (no doubt just promotion, but still!).

YouTube Video, What Bodies Think About: Bioelectric Computation Outside the Nervous System – NeurIPS 2018, Artificial Intelligence Channel, December 2018, 52:06 minutes

– Ethics of Eavesdropping

It has been recently reported (e.g. see Bloomberg News) that the likes of Amazon, Google and Apple employ people to listen to sample recordings made by the Amazon Echo, Google Home and Siri, respectively. They do this to improve the speech recognition capabilities of these devices.

Ethical Issues

What are the ethical issues here? The problem is not with these companies using people to help train machine-learning algorithms in order to improve the capabilities of the devices. However, there are issues with the following:


  • While information like names and addresses may not accompany the speech clips being listened to, it seems quite possible that other identifying details could enable tracing back to this information. This seems unnecessary for the purpose of training the speech recognition algorithms.

  • It has been reported that employees performing this function in some companies have been required to sign agreements that they will not disclose what they are doing. To my mind this seems wrong. If the function is necessary and innocent then companies should be open about it.

  • These companies do not always make it clear to purchasers of devices that they may be recorded, and listened to, by people. This should be clear to users in all advertising and documentation.

  • The most contentious ethical issue is what to do if an employee of one of these companies hears a crime being committed or planned. Another situation arises if an employee overhears something that is clearly private, like bank details, or information that, although legal, could be used for blackmail. In the first situation, are these companies to be regarded as having the same status as a priest in a confessional, or as any other person who might hear sensitive information? A possible approach is that whatever law applies to human individuals should also apply to the employees and to companies like Amazon, Google and Apple. In the UK, for example, some workers (such as social workers and teachers) who are likely to occasionally hear sensitive information relating to potential harm to minors are required to report it. In the second case, companies could be legally liable for losses arising from the information being revealed or used against the user.

It seems likely that the reason these companies are reluctant to admit publicly that interactions with these devices may be listened to by people is that it might affect sales. That does not seem a good enough reason.

– Making Algorithms Trustworthy

Algorithms can determine whether you get a loan, predict what diseases you might get and even assess how long you might live.  It’s kind of important we can trust them!

David Spiegelhalter is the Winton Professor for the Public Understanding of Risk in the Statistical Laboratory, Centre for Mathematical Sciences at the University of Cambridge. As part of the Cambridge Science Festival, he spoke on 21st March 2019 on the subject of making algorithms trustworthy.

I’ve heard David speak on many occasions and he is always informative and entertaining. This was no exception.



Algorithms now regularly advise on book and film recommendations. They work out the routes on your satnav. They control how much you pay for a plane ticket and, annoyingly, they show you advertisements that seem to know far too much about you.

But more importantly they can affect life and death situations. The results of an algorithmic assessment of what disease you might have could be highly influential, affecting your treatment, your well-being and your future behaviour.

David is a fan of Onora O’Neill who suggests that organisations should not be aiming to increase trust but should aim to demonstrate trustworthiness. False claims about the accuracy of algorithms are as bad as defects in the algorithms themselves.


The pharmaceutical industry has long used a phased approach to assessing the effectiveness, safety and side-effects of drugs. This includes the use of randomised controlled trials, and long-term surveillance after a drug comes onto the market to spot rare side-effects.

The same sorts of procedures should be applied to algorithms. However, currently only the first phase, testing on new data, is common. Sometimes algorithms are tested against the decisions that human experts make. Rarely are randomised controlled trials conducted, or the algorithm in use subjected to long-term monitoring.


As an aside, David entertained us by reporting on how the machine learning community has become obsessed with training algorithms to predict who did or did not survive the Titanic. Unsurprisingly, being a woman or a child helped a lot. David used this example to present a statistically derived decision tree. The point he was making was that the decision tree could (at least sometimes) be used as an explanation, whereas machine learning algorithms are generally black boxes (i.e. you can't inspect the algorithm itself).
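To make the contrast concrete, here is a minimal sketch (my own illustration, not David's actual example) of fitting a small decision tree with scikit-learn and printing its rules. The tiny Titanic-style dataset, labels and column names are invented purely for demonstration.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Tiny made-up dataset mimicking Titanic-style features:
# sex (0 = male, 1 = female), age in years, passenger class (1-3)
X = [
    [1, 29, 1], [1,  4, 2], [0, 35, 3], [0, 54, 1],
    [1, 22, 3], [0,  8, 2], [0, 40, 3], [1, 63, 1],
]
y = [1, 1, 0, 0, 1, 1, 0, 1]  # 1 = survived, 0 = did not (invented labels)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Unlike a black-box model, the fitted tree can be printed as if/else rules,
# which can (at least sometimes) serve as an explanation of its decisions.
print(export_text(tree, feature_names=["sex", "age", "pclass"]))
```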

Algorithms should be transparent. They should be able to explain their decisions as well as provide them. But transparency is not enough. O’Neill uses the term ‘intelligent openness’ to describe what is required. Explanations need to be accessible, intelligible, usable, and assessable.

Algorithms need to be both globally and locally explainable. Global explainability relates to the validity of the algorithm in general, while local explainability relates to how the algorithm arrived at a particular decision. One important way of testing an algorithm, even when it is a black box, is to experiment with different input parameters and observe the results.
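As a rough illustration of that kind of probing, the sketch below sweeps one input of a hypothetical black-box scoring function while holding the others fixed, and prints how the output responds. The function and its input names are invented for the example; any opaque model's prediction routine could take its place.

```python
import math

def black_box_score(age: float, income: float, existing_debt: float) -> float:
    """Stand-in for an opaque model's prediction; returns a score in (0, 1)."""
    raw = 0.02 * income - 0.03 * existing_debt - 0.01 * (age - 40) ** 2
    return 1.0 / (1.0 + math.exp(-raw))

# 'What if' query: vary age only, keeping the other inputs fixed,
# and observe how the score changes.
for age in range(20, 71, 10):
    score = black_box_score(age=age, income=30.0, existing_debt=5.0)
    print(f"age={age:2d}  ->  score={score:.3f}")
```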

DeepMind (owned by Google) is looking at how explanations can be generated from intermediate stages of the operation of machine learning algorithms.

Explanation can be provided at many levels. At the top level this might be a simple verbal summary. At the next level it might be access to a range of graphical and numerical representations with the ability to run 'what if' queries. At a deeper level, text and tables might show the procedures that the algorithm used. Deeper still would be the mathematics underlying the algorithm. Lastly, the code that runs the algorithm should be inspectable. I would say that a good explanation depends on understanding what the user wants to know – in other words, it is not just a function of the decision-making process but also a function of the user’s actual and desired state of knowledge.


Without these types of explanation, algorithms such as COMPAS, used in the US to predict rates of recidivism, are difficult to trust.

It is easy to feel that an algorithm is unfair or can’t be trusted. If it cannot provide sufficiently  good explanations, and claims about it are not scientifically substantiated, then it is right to be sceptical about its decisions. 

Most of David’s points apply more broadly than to artificial intelligence and robots. They are general principles applying to the transparency, accountability and user acceptance of any system. Trust and trustworthiness are everything.

See more of David’s work on his personal webpage at http://www.statslab.cam.ac.uk/Dept/People/Spiegelhalter/davids.html, and read his new book “The Art of Statistics: Learning from Data”, available shortly.

David Spiegelhalter

– Ethical Themes in Artificial Intelligence and Robotics

Useful categorisation of ethical themes

I was at a seminar the other day where I was fortunate enough to encounter Josephine Young from www.methods.co.uk (who mainly do public sector work in the UK).


Josie recently carried out an analysis of the main themes relating to ethics and AI that she found across a variety of sources. I have reported these themes below with a few comments.
Many thanks, Josie, for this really useful and interesting work.



THEMES

(Numbers in brackets reflect the number of times this issue was identified).

Data

Data treatment

Data treatment, focus on bias identification (10)
Interrogate the data (9)

Data collection / Use of personal data

Keep data secure (3)
Personal privacy – access, manage and control of personal data (1, 5, 6)
Use data and tools which have the minimum intrusion necessary – privacy (3)
Transparency of data/meta data collection and usage (8)
Self-disclosure and changing the algorithm’s assumptions (10)

Data models

Awareness of bias in data and models (8)
Create robust data science models – quality, representation of demographics (3)
Practice understanding of accuracy – transparency (8)

robotethics.co.uk comment on data: Trying to structure this a little, the themes might be categorised into [1] data ownership and collection (who can collect what data, when and for what purpose), [2] data storage and security (how the data is securely stored and controlled without loss or any unpermitted access), [3] data processing (what are the permitted operations on the data and the unbiased/reasonable inferences and models that can be derived from it) and [4] data usage (what applications and processes can use the data or any inferences made from it).


Impact

Safety – verifiable (1)
Anticipate the impacts that might arise – economic, social, environmental etc. (4)
Evaluate impact of algorithms in decision-making and publish the results (2)
Algorithms are rated on a risk scale based on impact on individual (2)
Act using these Responsible Innovation processes to influence the direction and trajectory of research (4)

robotethics.co.uk comment on impact: Impact is about assessing the positive and negative effects of AI in the future, whether in the short, medium or long term. There is also the question of who is affected, since any particular AI product or service might impact one group of people positively and another negatively. Therefore a framework of effect x timescale x affected persons/group might make a start on providing some structure for assessing impact.
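Purely as an illustration of that structure (my own sketch, not part of Josie's analysis), impact-assessment entries could be recorded as effect x timescale x affected group. All field names and example entries below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ImpactEntry:
    effect: str          # description of the positive or negative effect
    valence: str         # "positive" or "negative"
    timescale: str       # "short", "medium" or "long" term
    affected_group: str  # who is impacted

# Invented example entries for a hypothetical medical-imaging AI.
assessment = [
    ImpactEntry("Faster screening of scans", "positive", "short", "patients"),
    ImpactEntry("Displacement of reporting roles", "negative", "medium", "clinical staff"),
]

for entry in assessment:
    print(f"{entry.valence:8s} | {entry.timescale:6s} | {entry.affected_group:14s} | {entry.effect}")
```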


Purpose

Non-subversion – power conferred to AI should respect and improve social and civic processes (1)
Reflect on the purpose, motivations, implications and uncertainties this research may bring (4)
Ensure augmented – not just artificial – AI (8)
Purpose and ecology for the AI system (10)
Human control – choose how and whether to delegate decisions to AI (1)
Backwards compatibility and versioning (8)

robotethics.co.uk comment on purpose: Clearly the intent behind any AI development should be to confer a net benefit on the individual and/or the society generally. The intent should never be to cause harm – even drone warfare is, in principle, justified in terms of conferring a clear net benefit. But this again raises the question of net benefit to whom exactly, how large that benefit is when compared to any downside, and how certain it is that the benefit will materialise (without any unanticipated harmful consequences). It is a matter of how strong and certain the argument is for justifying the intent behind building or deploying a particular AI product or service.


Transparency

Transparency for how AI systems make decisions (7)
Be as open and accountable as possible – provide explanations, recourse, accountability (3)
Failure transparency (1)
Responsibility and accountability for explaining how AI systems work (7)
Awareness and plan for audit trail (8)
Publish details describing the data used to train AI, with assumptions and risk assessment – including bias (2)
A list of inputs used by an algorithm to make a decision should be published (2)
Every algorithm should be accompanied with a description of function, objective and intended impact (2)
Every algorithm should have an identical sandbox version for auditing (2)

robotethics.co.uk comment on transparency: Transparency and accountability are closely related but can be separated out. Transparency is about determining how or why (e.g. how or why an AI made a certain decision) whereas accountability is about determining who is responsible. Having transparency may well help in establishing accountability but they are different. The problem for AI is that, by normal human standards, responsibility resides with the autonomous decision-making agent so long as they are regarded as having ‘capacity’ (e.g. they are not a child or insane) and even then, there can be mitigating circumstances (provocation, self-defence etc.). We are a long way from regarding AIs as having ‘capacity’ in the sense of being able to make their own ethical judgements, so in the short to medium term, the accountability must be traceable to a human, or other corporate, agent. The issue of accountability is further complicated in cases where people and AIs are cooperatively engaged in the same task, since there is human involvement in both the design of the AI and its operational use.


Civic rights

A named member of staff is formally responsible for the algorithm’s actions and decisions (2)
Judicial transparency – auditable by humans (1)
3rd parties that run algorithms on behalf of public sector should be subject to same principles as government algorithms (2)
Intelligibility and fairness (6)
Dedicated insurance scheme, to provide compensation if negative impact (2)
Citizens must be informed when their treatment has been decided/informed by an algorithm (2)
Liberty and privacy – use of personal data should not, or not be perceived to, curtail personal liberties (1)
Mitigate risks and negative impacts as AI/AS evolve as socio-technical systems (7)

robotethics.co.uk comment on civic rights: It seems clear that an AI should have no more license to contravene a person’s civil liberties or human rights than another person or corporate entity would. Definitions of human rights are not always clear-cut and differ from place to place. In human society this is dealt with by defaulting to local laws and cultural norms. It seems likely that a care robot made in Japan but operating in, say, the UK would have to operate according to the local laws, as would apply to any other person, product or service.


Highest purpose of AI

Shared prosperity – economic prosperity shared broadly to benefit all of humanity (1)
Flourishing alongside AI (6)
Prioritise the maximum benefit to humanity and the natural environment (7)
Shared benefit – technology should benefit and empower as many people as possible (1)
Purpose of AI should be human flourishing (1)
AI should be developed for the common good (6)
Beneficial intelligence (1)
Compatible with human dignity, rights, freedoms and cultural diversity (1, 5)
Align values and goals with human values (1)
AI will prevent harm (5)
Start with clear user need and public benefit (3)
Embody highest ideals of human rights (7)

robotethics.co.uk comment on the higher purpose of AI: This seems to address themes of human flourishing, equality, values and again touches on rights. It focuses mainly on, and details, the potential benefits and how these are distributed. These can be slotted into the frameworks already set out above.


Negative consequences / Crossing the ‘line’

An AI arms race should be avoided (1)
Identify and address cybersecurity risks (8)
Confronting the power to destroy (6)

robotethics.co.uk comment on the negative consequences of AI: The main threats set out relate to weapons, cyber-security and the existential risks posed by AIs that cease to be controlled by human agency. There are also many more subtle and shorter-term risks, such as bias in models and decision-making, addressed elsewhere. As with benefits, these can be slotted into the frameworks already set out above.


User

Consider the marginal user (9)
Collaborate with humans – rather than displace them (5)
Marginal user and participation (10)
Address job displacement implications (8)

robotethics.co.uk comment on user: This is mainly about the social implications of AI and the risks to individuals in relation to jobs and becoming marginalised. These implications seem likely to arise in the short to medium term and, given their potential scale, there seems a comparative paucity of attention being paid to them by governments, especially in the UK where Brexit dominates the political agenda. Little attempt seems to be made to consider the significance of AI in relation to the more habitual political concerns of migration and trade.


AI Industry

AI researchers <-> policymakers (1)
Establish industry partnerships (9)
Responsibility of designers and builders for moral implications (1, 5)
Culture of trust and transparency between researchers and developers (1)
Resist the ‘race’ – no more ‘move fast and break things’ mentality (1)

robotethics.co.uk comment on AI industry: The industry players that are building AI products and services have a pivotal role to play in their ethical development and deployment. In addition to design and manufacture, this affects education and training, regulation and monitoring of the development of AI systems, their marketing and constraints on their use. AI is likely to be used throughout the supply chain of other products and services, and AI components will become increasingly integrated with each other into more and more powerful systems. Given the pace of technological development, the need to create policy and to regulate, certify, train and license the industry creating AI products and services is becoming urgent.


Public dialogue

Engage – opening up such work to broader deliberation in an inclusive way (4)
Education and awareness of public (7)
Be alert to public perceptions (3)

robotethics.co.uk comment on public dialogue: At present, public debate on AI is often focussed on the activities of the big players and their high-profile products such as Amazon Echo, Google Home, and Apple’s Siri. These give clues as to some of the ethical issues that require public attention, but there is a lot more AI development going on in the background. Given the potentially large and fast-arriving societal impacts of AI, there needs to be greater public awareness and debate, not least so that society can be prepared and adjust other systems (such as taxation, benefits, universal income etc.) to absorb the impacts.


Interface design

Representation of AI system, user interface design (10)

robotethics.co.uk comment on interface design: AIs capable of machine learning develop knowledge and skills in ways similar to people, and just like people, they often cannot explain how they do things or how they arrive at a judgement or decision. The ways in which people and AIs will interface and interact is as complex a topic as how people interact with each other. Can we ever know what another person is really thinking, or whether the image they present of themselves is accurate? If AIs become even half as complex as people – able to integrate knowledge and skills from many different sources, able to express (if not actually feel) emotions, able to reason with super-human logic, able to communicate instantaneously with other AIs – there is no knowing how people and AIs will ‘interface’. Just as computers have become both tools for people to use and constraints on human activity (‘I’m sorry but the computer will not let me do that’), the relationships will be complex, especially as computer components become implanted in the human body rather than just carried on the wrist. It seems more likely that the relationship will be cooperative rather than competitive or one in which AIs come to dominate.


The original source material from Josie (who gave me permission to reference it) can be found at:

https://docs.google.com/document/d/1LrBk-LOEu4LwnyUg8i5oN3ZKjl55aDpL6l1BxVcHIi8/edit


See other work by Josie Young: https://methods.co.uk/blog/different-ai-terms-actually-mean/

– IEEE Consultation on Ethically Aligned Design

A Response Submitted for robotethics.co.uk

A summary of the IEEE document Ethically Aligned Design (Version 2) can be found below. Responses to this document were invited by 7th May 2018.


Response to Ethically Aligned Design Version 2 (EADv2)
Rod Rivers, Socio-Technical Systems, Cambridge, UK
March 2018 (rod.rivers@ieee.org)

I take a perspective from philosophy, phenomenology and psychology and attempt to inject thoughts from these disciplines.

Social Sciences: EADv2 would benefit from more input from the social sciences. Many of the concepts discussed (e.g. norms, rights, obligations, wellbeing, values, affect, responsibility) have been extensively investigated and analysed within the social sciences (psychology, social psychology, sociology, anthropology, economics etc.). This knowledge could be more fully integrated into EAD. For example, the meaning of ‘development’ to refer to ‘child development’ or ‘moral development’ is not in the glossary.

Human Operating System: The first sentence in EADv2 establishes a perspective looking forward from the present, as use and impact of A/ISs ‘become pervasive’. An additional tack would be to look in more depth at human capability and human ethical self-regulation, and then ‘work backwards’ to fill the gap between current artificial A/IS capability and that of people. I refer to this as the ‘Human Operating System’ (HOS) approach, and suggest that EAD makes explicit, and endorses, exploration of the HOS approach to better appreciate the complexity (and deficiencies) of human cognitive, emotional, physiological and behavioural functions.

Phenomenology: A/ISs can be distinguished from other artefacts because they have the potential to reflect and reason, not just on their own computational processes, but also on the behaviours and cognitive processes of people. This is what psychologists refer to as ‘theory of mind’ – the capability to reason and speculate on the states of knowledge and intentions of others. Theory of mind can be addressed using a phenomenological approach that attempts to describe, understand and explain from the fully integrated subjective perspective of the agent. Traditional engineering and scientific approaches tend to objectify, separate out elements into component parts, and understand parts in isolation before addressing their integration. I suggest that EAD includes and endorses exploration of a phenomenological approach to complement the engineering approach.

Ontology, epistemology and belief: EADv2 includes the statement “We can assume that lying and deception will be prohibited actions in many contexts” (EADv2 p.45). This example may indicate the danger of slipping into an absolutist approach to the concept of ‘truth’. For example, it is easy to assume that there is only one truth and that the sensory representations, data and results of information processing by an A/IS necessarily constitute an objective ‘truth’. Post-modern constructivist thinking sees ‘truth’ as an attribute of the agent (albeit constrained by an objective reality) rather than as an attribute of states of the world. The validity of a proposition is often re-defined in real time as the intentions of agents change. It is important to establish some clarity over these types of epistemological issues, not least in the realm of ethical judgments. I suggest that EAD note and encourage greater consideration of these epistemological issues.

Embodiment, empathy and vulnerability: It has been argued that ethical judgements are rooted in physiological states (e.g. emotional reactions to events), empathy and the experience of vulnerability (i.e. exposure to pain and suffering). EADv2 does not currently explicitly set out how ethical judgements can be made by an A/IS in the absence of these human subjective states. Although EAD mentions emotions and affective computing (and an affective computing committee) this is almost always in relation to human emotions. The more philosophical question of judgement without physical embodiment, physiological states, emotions, and a subjective understanding of vulnerability is not addressed.

Terminology / Language / Glossary: In considering ethics we are moving from an amoral, mechanistic understanding of cause and effect to value-laden, intention-driven notions of causality. This requires the inclusion of more mentalistic terminology. The glossary should reflect this and could form the basis of a language for the expression of ideas that transcend both artificial and human intelligent systems (i.e. that is substrate independent). In a fuller response, I discuss terms already used in EADv2 (e.g. autonomous, intelligent, system, ethics, intention formation, independent reasoning, learning, decision-making, principles, norms etc.), and terms that are either not used or might be elaborated (e.g. umwelt, ontology, epistemology, similarity, truth-value, belief, decision, intention, justification, mind, power, the will).



IEEE Ethically Aligned Design: A Vision For Prioritizing Wellbeing With Artificial Intelligence And Autonomous Systems

For Public Discussion – By 7th May 2018 (consultation now closed)

Version 2 of this report is available by registering at:
http://standards.ieee.org/develop/indconn/ec/auto_sys_form.html

Public comment on version 1 of this document was invited by March 2017. The document encourages technologists to prioritize ethical considerations in the creation of autonomous and intelligent technologies. It was created by committees of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, comprising over one hundred global thought leaders and experts in artificial intelligence, ethics, and related issues.

Version 2 presents the following principles/recommendations:

Candidate Recommendation 1 – Human Rights
To best honor human rights, society must assure the safety and security of A/IS so that they are designed and operated in a way that benefits humans:
1. Governance frameworks, including standards and regulatory bodies, should be established to oversee processes assuring that the use of A/IS does not infringe upon human rights, freedoms, dignity, and privacy, and of traceability to contribute to the building of public trust in A/IS.
2. A way to translate existing and forthcoming legal obligations into informed policy and technical considerations is needed. Such a method should allow for differing cultural norms as well as legal and regulatory frameworks.
3. For the foreseeable future, A/IS should not be granted rights and privileges equal to human rights: A/IS should always be subordinate to human judgment and control.

Candidate Recommendation 2 – Prioritizing Wellbeing
A/IS should prioritize human well-being as an outcome in all system designs, using the best available, and widely accepted, well-being metrics as their reference point.

Candidate Recommendation 3 – Accountability
To best address issues of responsibility and accountability:
1. Legislatures/courts should clarify issues of responsibility, culpability, liability, and accountability for A/IS where possible during development and deployment (so that manufacturers and users understand their rights and obligations).
2. Designers and developers of A/IS should remain aware of, and take into account when relevant, the diversity of existing cultural norms among the groups of users of these A/IS.
3. Multi-stakeholder ecosystems should be developed to help create norms (which can mature to best practices and laws) where they do not exist because A/IS-oriented technology and their impacts are too new (including representatives of civil society, law enforcement, insurers, manufacturers, engineers, lawyers, etc.).
4. Systems for registration and record-keeping should be created so that it is always possible to find out who is legally responsible for a particular A/IS. Manufacturers/operators/owners of A/IS should register key, high-level parameters, including:

• Intended use
• Training data/training environment (if applicable)
• Sensors/real world data sources
• Algorithms
• Process graphs
• Model features (at various levels)
• User interfaces
• Actuators/outputs
• Optimization goal/loss function/reward function

Standard Reference for Version 2
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2. IEEE, 2017. http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html

Report: The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2. IEEE, December 2017, 136 pages

http://standards.ieee.org/develop/indconn/ec/auto_sys_form.html