
– Ethical themes in artificial intelligence and robotics

Useful categorisation of ethical themes

I was at a seminar the other day where I was fortunate enough to meet Josephine Young from www.methods.co.uk (a company that mainly does public sector work in the UK).


Josie recently carried out an analysis of the main themes relating to ethics and AI that she found across a variety of sources on this topic. I have reported these themes below with a few comments.
Many thanks, Josie, for this really useful and interesting work.



THEMES

(Numbers in brackets reflect the number of times this issue was identified).

Data

Data treatment

Data treatment, focus on bias identification (10)
Interrogate the data (9)

Data collection / Use of personal data

Keep data secure (3)
Personal privacy – access to, management and control of personal data (1, 5, 6)
Use data and tools which have the minimum intrusion necessary – privacy (3)
Transparency of data/meta data collection and usage (8)
Self-disclosure and changing the algorithm’s assumptions (10)

Data models

Awareness of bias in data and models (8)
Create robust data science models – quality, representation of demographics (3)
Practice understanding of accuracy – transparency (8)

robotethics.co.uk comment on data: Trying to structure this a little, the themes might be categorised into (1) data ownership and collection (who can collect what data, when and for what purpose), (2) data storage and security (how the data is securely stored and controlled without loss or unpermitted access), (3) data processing (what operations on the data are permitted, and what unbiased/reasonable inferences and models can be derived from it) and (4) data usage (what applications and processes can use the data or any inferences made from it).
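As a rough illustration, the four categories might be captured in code as a simple enumeration used to tag each theme. This is a minimal sketch; all names are illustrative assumptions of mine, not part of Josie's analysis.

```python
# A minimal sketch of the four-part data categorisation above.
# All names here are illustrative assumptions.
from enum import Enum

class DataEthicsCategory(Enum):
    OWNERSHIP_AND_COLLECTION = "who can collect what data, when and for what purpose"
    STORAGE_AND_SECURITY = "how the data is stored and protected against loss or unpermitted access"
    PROCESSING = "what operations, inferences and models may be derived from the data"
    USAGE = "what applications and processes may use the data or any inferences made from it"

# Example: tagging some of the themes listed above with a category.
THEME_CATEGORIES = {
    "Keep data secure": DataEthicsCategory.STORAGE_AND_SECURITY,
    "Transparency of data/meta data collection and usage": DataEthicsCategory.OWNERSHIP_AND_COLLECTION,
    "Awareness of bias in data and models": DataEthicsCategory.PROCESSING,
}

for theme, category in THEME_CATEGORIES.items():
    print(f"{theme!r} -> {category.name}")
```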


Impact

Safety – verifiable (1)
Anticipate the impacts that might arise – economic, social, environmental etc. (4)
Evaluate impact of algorithms in decision-making and publish the results (2)
Algorithms are rated on a risk scale based on impact on individual (2)
Act using these Responsible Innovation processes to influence the direction and trajectory of research (4)

robotethics.co.uk comment on impact: Impact is about assessing the positive and negative effects of AI in the future, whether in the short, medium or long term. There is also the question of who is impacted: it is quite possible that a particular AI product or service will affect one group of people positively and another negatively. A framework of effect × timescale × affected person/group might therefore make a start on providing some structure for assessing impact.
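To make the suggested framework concrete, here is a minimal sketch of an impact-assessment record along the three dimensions of effect, timescale and affected group. The class and field names, and the example values, are illustrative assumptions rather than anything proposed in the sources.

```python
# A minimal sketch of the effect x timescale x affected-group framework.
# All names and example values are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class Effect(Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"

class Timescale(Enum):
    SHORT = "short term"
    MEDIUM = "medium term"
    LONG = "long term"

@dataclass
class ImpactAssessment:
    effect: Effect
    timescale: Timescale
    affected_group: str
    description: str

# Example: the same AI service can impact different groups differently.
assessments = [
    ImpactAssessment(Effect.POSITIVE, Timescale.SHORT, "service users",
                     "faster decisions on applications"),
    ImpactAssessment(Effect.NEGATIVE, Timescale.MEDIUM, "case workers",
                     "displacement of routine assessment work"),
]

for a in assessments:
    print(f"{a.affected_group}: {a.effect.value} impact, {a.timescale.value} ({a.description})")
```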


Purpose

Non-subversion – power conferred to AI should respect and improve social and civic processes (1)
Reflect on the purpose, motivations, implications and uncertainties this research may bring (4)
Ensure augmented – not just artificial – AI (8)
Purpose and ecology for the AI system (10)
Human control – choose how and whether to delegate decisions to AI (1)
Backwards compatibility and versioning (8)

robotethics.co.uk comment on purpose: Clearly the intent behind any AI development should be to confer a net benefit on the individual and/or society generally. The intent should never be to cause harm – even drone warfare is, in principle, justified in terms of conferring a clear net benefit. But this again raises the question of net benefit to whom exactly, how large that benefit is when compared to any downside, and how certain it is that the benefit will materialise (without any unanticipated harmful consequences). It is a matter of how strong and certain the argument is for justifying the intent behind building or deploying a particular AI product or service.
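The weighing described above can be made concrete as a simple expected-value calculation: scale the benefit and the downside by how certain each is to materialise. The function and numbers below are an illustrative sketch of mine, not a method proposed in any of the sources.

```python
# A minimal sketch of weighing net benefit against downside, each scaled
# by the likelihood that it materialises. Illustrative assumption only.
def expected_net_benefit(benefit: float, p_benefit: float,
                         harm: float, p_harm: float) -> float:
    """A positive result suggests the intent is more easily justified."""
    return benefit * p_benefit - harm * p_harm

# Example: a large but uncertain benefit against a smaller, likely downside.
print(expected_net_benefit(benefit=10.0, p_benefit=0.5,
                           harm=2.0, p_harm=0.9))  # 3.2
```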


Transparency

Transparency for how AI systems make decisions (7)
Be as open and accountable as possible – provide explanations, recourse, accountability (3)
Failure transparency (1)
Responsibility and accountability for explaining how AI systems work (7)
Awareness and plan for audit trail (8)
Publish details describing the data used to train AI, with assumptions and risk assessment – including bias (2)
A list of inputs used by an algorithm to make a decision should be published (2)
Every algorithm should be accompanied with a description of function, objective and intended impact (2)
Every algorithm should have an identical sandbox version for auditing (2)

robotethics.co.uk comment on transparency: Transparency and accountability are closely related but can be separated out. Transparency is about determining how or why (e.g. how or why an AI made a certain decision), whereas accountability is about determining who is responsible. Having transparency may well help in establishing accountability, but they are different. The problem for AI is that, by normal human standards, responsibility resides with the autonomous decision-making agent so long as they are regarded as having ‘capacity’ (e.g. they are not a child or insane), and even then there can be mitigating circumstances (provocation, self-defence etc.). We are a long way from regarding AIs as having ‘capacity’ in the sense of being able to make their own ethical judgements, so in the short to medium term accountability must be traceable to a human or other corporate agent. The issue of accountability is further complicated in cases where people and AIs are cooperatively engaged in the same task, since there is human involvement in both the design of the AI and its operational use.


Civic rights

A named member of staff is formally responsible for the algorithm’s actions and decisions (2)
Judicial transparency – auditable by humans (1)
3rd parties that run algorithms on behalf of the public sector should be subject to the same principles as government algorithms (2)
Intelligibility and fairness (6)
Dedicated insurance scheme, to provide compensation if negative impact (2)
Citizens must be informed when their treatment has been decided/informed by an algorithm (2)
Liberty and privacy – use of personal data should not, nor be perceived to, curtail personal liberties (1)
Mitigate risks and negative impacts as AI/AS evolve as socio-technical systems (7)

robotethics.co.uk comment on civic rights: It seems clear that an AI should have no more licence to contravene a person’s civil liberties or human rights than another person or corporate entity would. Definitions of human rights are not always clear-cut and differ from place to place. In human society this is dealt with by defaulting to local laws and cultural norms. It seems likely that a care robot made in Japan but operating in, say, the UK would have to operate according to local laws, as would any other person, product or service.


Highest purpose of AI

Shared prosperity – economic prosperity shared broadly to benefit all of humanity (1)
Flourishing alongside AI (6)
Prioritise the maximum benefit to humanity and the natural environment (7)
Shared benefit – technology should benefit and empower as many people as possible (1)
Purpose of AI should be human flourishing (1)
AI should be developed for the common good (6)
Beneficial intelligence (1)
Compatible with human dignity, rights, freedoms and cultural diversity (1, 5)
Align values and goals with human values (1)
AI will prevent harm (5)
Start with clear user need and public benefit (3)
Embody highest ideals of human rights (7)

robotethics.co.uk comment on the highest purpose of AI: This addresses themes of human flourishing, equality and values, and again touches on rights. It focuses mainly on, and details, the potential benefits and how these are distributed. These can be slotted into the frameworks already set out above.


Negative consequences / Crossing the ‘line’

An AI arms race should be avoided (1)
Identify and address cybersecurity risks (8)
Confronting the power to destroy (6)

robotethics.co.uk comment on the negative consequences of AI: The main threats set out relate to weapons, cyber-security and the existential risks posed by AIs that cease to be controlled by human agency. There are also many more subtle and shorter-term risks, such as the bias in models and decision-making addressed elsewhere. As with the benefits, these can be slotted into the frameworks already set out above.


User

Consider the marginal user (9)
Collaborate with humans – rather than displace them (5)
Marginal user and participation (10)
Address job displacement implications (8)

robotethics.co.uk comment on user: This is mainly about the social implications of AI and the risks to individuals of job loss and marginalisation. These implications seem likely to arise in the short to medium term and, given their potential scale, comparatively little attention is being paid to them by governments, especially in the UK where Brexit dominates the political agenda. Little attempt seems to be made to consider the significance of AI in relation to the more habitual political concerns of migration and trade.


AI Industry

AI researchers <-> policymakers (1)
Establish industry partnerships (9)
Responsibility of designers and builders for moral implications (1, 5)
Culture of trust and transparency between researchers and developers (1)
Resist the ‘race’ – no more ‘move fast and break things’ mentality (1)

robotethics.co.uk comment on AI industry: The industry players building AI products and services have a pivotal role to play in their ethical development and deployment. Beyond design and manufacture, this affects education and training, the regulation and monitoring of the development of AI systems, their marketing, and constraints on their use. AI is likely to be used throughout the supply chains of other products and services, and AI components will become increasingly integrated with each other into more and more powerful systems. Given the pace of technological development, the need to create policy and to regulate, certify, train and licence the industry creating AI products and services must be addressed with greater urgency.


Public dialogue

Engage – opening up such work to broader deliberation in an inclusive way (4)
Education and awareness of public (7)
Be alert to public perceptions (3)

robotethics.co.uk comment on public dialogue: At present, public debate on AI is often focussed on the activities of the big players and their high-profile products such as Amazon Echo, Google Home and Apple’s Siri. These give clues as to some of the ethical issues that require public attention, but there is a lot more AI development going on in the background. Given the potential scale and pace of the societal impacts of AI, there needs to be greater public awareness and debate, not least so that society can be prepared and can adjust other systems (such as taxation, benefits, universal income etc.) to absorb the impacts.


Interface design

Representation of AI system, user interface design (10)

robotethics.co.uk comment on interface design: AIs capable of machine learning develop knowledge and skills in ways similar to people, and, just like people, they often cannot explain how they do things or how they arrive at some judgement or decision. The ways in which people and AIs will interface and interact is as complex a topic as how people interact with each other. Can we ever know what another person is really thinking, or whether the image they present of themselves is accurate? If AIs become even half as complex as people – able to integrate knowledge and skills from many different sources, able to express (if not actually feel) emotions, able to reason with super-human logic, able to communicate instantaneously with other AIs – there is no knowing how people and AIs will ‘interface’. Just as computers have become both tools for people to use and constraints on human activity (‘I’m sorry, but the computer will not let me do that’), the relationships will be complex, especially as computer components become implanted in the human body and not just carried on the wrist. It seems more likely that the relationship will be cooperative rather than competitive, or one in which AIs come to dominate.


The original source material from Josie (who gave me permission to reference this material) can be found at:

https://docs.google.com/document/d/1LrBk-LOEu4LwnyUg8i5oN3ZKjl55aDpL6l1BxVcHIi8/edit


See other work by Josie Young: https://methods.co.uk/blog/different-ai-terms-actually-mean/

– IEEE Consultation on Ethically Aligned Design

A Response Submitted for robotethics.co.uk

A summary of the IEEE document Ethically Aligned Design (Version 2) can be found below. Responses to this document were invited by 7th May 2018.


Response to Ethically Aligned Design Version 2 (EADv2)
Rod Rivers, Socio-Technical Systems, Cambridge, UK
March 2018 (rod.rivers@ieee.org)

I take a perspective from philosophy, phenomenology and psychology and attempt to inject thoughts from these disciplines.

Social Sciences: EADv2 would benefit from more input from the social sciences. Many of the concepts discussed (e.g. norms, rights, obligations, wellbeing, values, affect, responsibility) have been extensively investigated and analysed within the social sciences (psychology, social psychology, sociology, anthropology, economics etc.). This knowledge could be more fully integrated into EAD. For example, the sense of ‘development’ that refers to ‘child development’ or ‘moral development’ is not in the glossary.

Human Operating System: The first sentence in EADv2 establishes a perspective looking forward from the present, as use and impact of A/ISs ‘become pervasive’. An additional tack would be to look in more depth at human capability and human ethical self-regulation, and then ‘work backwards’ to fill the gap between current artificial A/IS capability and that of people. I refer to this as the ‘Human Operating System’ (HOS) approach, and suggest that EAD makes explicit, and endorses, exploration of the HOS approach to better appreciate the complexity (and deficiencies) of human cognitive, emotional, physiological and behavioural functions.

Phenomenology: A/ISs can be distinguished from other artefacts because they have the potential to reflect and reason, not just on their own computational processes, but also on the behaviours and cognitive processes of people. This is what psychologists refer to as ‘theory of mind’ – the capability to reason and speculate on the states of knowledge and intentions of others. Theory of mind can be addressed using a phenomenological approach that attempts to describe, understand and explain from the fully integrated subjective perspective of the agent. Traditional engineering and scientific approaches tend to objectify, separate out elements into component parts, and understand parts in isolation before addressing their integration. I suggest that EAD includes and endorses exploration of a phenomenological approach to complement the engineering approach.

Ontology, epistemology and belief: EADv2 includes the statement “We can assume that lying and deception will be prohibited actions in many contexts” (EADv2 p.45). This example may indicate the danger of slipping into an absolutist approach to the concept of ‘truth’. For example, it is easy to assume that there is only one truth and that the sensory representations, data and results of information processing by an A/IS necessarily constitute an objective ‘truth’. Post-modern constructivist thinking sees ‘truth’ as an attribute of the agent (albeit constrained by an objective reality) rather than as an attribute of states of the world. The validity of a proposition is often re-defined in real time as the intentions of agents change. It is important to establish some clarity over these types of epistemological issues, not least in the realm of ethical judgements. I suggest that EAD note and encourage greater consideration of these epistemological issues.

Embodiment, empathy and vulnerability: It has been argued that ethical judgements are rooted in physiological states (e.g. emotional reactions to events), empathy and the experience of vulnerability (i.e. exposure to pain and suffering). EADv2 does not currently explicitly set out how ethical judgements can be made by an A/IS in the absence of these human subjective states. Although EAD mentions emotions and affective computing (and an affective computing committee) this is almost always in relation to human emotions. The more philosophical question of judgement without physical embodiment, physiological states, emotions, and a subjective understanding of vulnerability is not addressed.

Terminology / Language / Glossary: In considering ethics we are moving from an amoral, mechanistic understanding of cause and effect to value-laden, intention-driven notions of causality. This requires the inclusion of more mentalistic terminology. The glossary should reflect this and could form the basis of a language for the expression of ideas that transcends both artificial and human intelligent systems (i.e. that is substrate independent). In a fuller response, I discuss terms already used in EADv2 (e.g. autonomous, intelligent, system, ethics, intention formation, independent reasoning, learning, decision-making, principles, norms etc.), and terms that are either not used or might be elaborated (e.g. umwelt, ontology, epistemology, similarity, truth-value, belief, decision, intention, justification, mind, power, the will).



IEEE Ethically Aligned Design: A Vision For Prioritizing Wellbeing With Artificial Intelligence And Autonomous Systems

For Public Discussion – By 7th May 2018 (consultation now closed)

Version 2 of this report is available by registering at:
http://standards.ieee.org/develop/indconn/ec/auto_sys_form.html

Public comment on version 1 of this document was invited by March 2017. The document encourages technologists to prioritize ethical considerations in the creation of autonomous and intelligent technologies. It was created by committees of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, comprising over one hundred global thought leaders and experts in artificial intelligence, ethics, and related issues.

Version 2 presents the following principles/recommendations:

Candidate Recommendation 1 – Human Rights
To best honor human rights, society must assure the safety and security of A/IS so that they are designed and operated in a way that benefits humans:
1. Governance frameworks, including standards and regulatory bodies, should be established to oversee processes assuring that the use of A/IS does not infringe upon human rights, freedoms, dignity, and privacy, and of traceability to contribute to the building of public trust in A/IS.
2. A way to translate existing and forthcoming legal obligations into informed policy and technical considerations is needed. Such a method should allow for differing cultural norms as well as legal and regulatory frameworks.
3. For the foreseeable future, A/IS should not be granted rights and privileges equal to human rights: A/IS should always be subordinate to human judgment and control.

Candidate Recommendation 2 – Prioritizing Wellbeing
A/IS should prioritize human well-being as an outcome in all system designs, using the best available, and widely accepted, well-being metrics as their reference point.

Candidate Recommendation 3 – Accountability
To best address issues of responsibility and accountability:
1. Legislatures/courts should clarify issues of responsibility, culpability, liability, and accountability for A/IS where possible during development and deployment (so that manufacturers and users understand their rights and obligations).
2. Designers and developers of A/IS should remain aware of, and take into account when relevant, the diversity of existing cultural norms among the groups of users of these A/IS.
3. Multi-stakeholder ecosystems should be developed to help create norms (which can mature to best practices and laws) where they do not exist because A/IS-oriented technology and their impacts are too new (including representatives of civil society, law enforcement, insurers, manufacturers, engineers, lawyers, etc.).
4. Systems for registration and record-keeping should be created so that it is always possible to find out who is legally responsible for a particular A/IS. Manufacturers/operators/owners of A/IS should register key, high-level parameters (a sketch of such a record is given after this list), including:

• Intended use
• Training data/training environment (if applicable)
• Sensors/real world data sources
• Algorithms
• Process graphs
• Model features (at various levels)
• User interfaces
• Actuators/outputs
• Optimization goal/loss function/reward function
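As an illustration, the key parameters above might be captured in a simple registration record. This is a minimal sketch under the assumption that a flat record suffices; EADv2 does not prescribe a schema, and all field names are my own.

```python
# A minimal sketch of a registration record for the parameters listed in
# Candidate Recommendation 3. Field names are illustrative assumptions;
# EADv2 does not prescribe a schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AISRegistration:
    responsible_party: str                 # who is legally responsible
    intended_use: str
    training_data: Optional[str] = None    # training data/environment, if applicable
    sensors: List[str] = field(default_factory=list)          # real-world data sources
    algorithms: List[str] = field(default_factory=list)
    process_graphs: List[str] = field(default_factory=list)
    model_features: List[str] = field(default_factory=list)   # at various levels
    user_interfaces: List[str] = field(default_factory=list)
    actuators_outputs: List[str] = field(default_factory=list)
    optimisation_goal: str = ""             # loss function/reward function

# Example record for a hypothetical system.
record = AISRegistration(
    responsible_party="Example Corp Ltd",
    intended_use="triage of customer support requests",
    training_data="historical support tickets, 2015-2017",
    sensors=["web form input"],
    algorithms=["gradient-boosted decision trees"],
    optimisation_goal="minimise misrouting rate",
)
print(record.responsible_party, "->", record.intended_use)
```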

Standard Reference for Version 2
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2. IEEE, December 2017, 136 pages. http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html
