Ethical AI: A Framework Based on Fundamental Rights and Human Values

Gurkan Coskun

Ethical discussions often use the terms “morality” and “ethics”, and the two are worth distinguishing. “Morality” refers to the concrete, factual patterns of behavior, customs and conventions found in particular cultures, groups or individuals at a given time. “Ethics” refers to the systematic, academic evaluation of such concrete actions and behaviors. Ethics is generally concerned with questions such as “What is good action?”, “What is the value of human life?”, “What is justice?” or “What is the good life?”.

Academic ethics has four main areas of research. Meta-ethics is mostly concerned with the meaning and reference of normative sentences and with how their truth-values, if any, can be determined. Normative ethics examines standards of right and wrong action and assigns value to particular actions in order to determine a moral course of action. Descriptive ethics is the empirical study of people’s moral behavior and beliefs. Applied ethics, finally, is concerned with what we are obliged to do in a particular (often historically novel) situation or in a particular (often historically unprecedented) domain of action possibilities. AI ethics is usually treated as a branch of applied ethics: it focuses on the normative issues raised by the design, development and use of AI.

1. Main Approach: Fundamental Rights

The European Union’s approach to AI ethics is grounded in the fundamental rights enshrined in the EU Treaties, the EU Charter and international human rights law. Respect for fundamental rights, within the framework of democracy and the rule of law, provides the foundation needed to identify abstract ethical principles and values that can be operationalized in the context of AI.

The point at which fundamental rights converge is respect for human dignity. Essentially, fundamental rights rest on a “human-centered approach”, which recognizes that the human person has a unique and inalienable moral primacy in the civil, political, economic and social spheres. By ensuring that fundamental rights are respected, the human-centered approach seeks to embed human values throughout the AI life cycle. This also requires a sustainable approach that takes into account nature and the other living beings that are part of the human ecosystem, as well as the development of future generations.

The OECD’s principles on trustworthy AI likewise embrace a combination of human-centered values and fairness. Accordingly, AI actors should establish effective mechanisms to respect human rights and democratic values throughout the AI life cycle, including freedom, dignity, autonomy, privacy, non-discrimination, fairness and social justice, diversity and fundamental labor rights.

2. Ethical Principles for Artificial Intelligence

Without imposing a hierarchy, the principles are listed below in an order that mirrors the order in which the fundamental rights on which they are based appear in the EU Charter. These are the principles of:

  • Respect for human autonomy
  • Prevention of harm
  • Fairness
  • Explicability

Respect for human autonomy is strongly associated with the rights to human dignity and liberty (reflected in Articles 1 and 6 of the Charter). The prevention of harm is strongly linked to the protection of physical and mental integrity (reflected in Article 3). Fairness is closely linked to the rights to non-discrimination, solidarity and justice (reflected in Articles 21 and following). Explicability and responsibility are closely linked to the rights relating to justice (as reflected in Article 47).

a. Respect for Human Autonomy

People interacting with AI systems should be able to retain full and effective self-determination and to make decisions about themselves. AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans. Instead, they should be designed to augment, complement and strengthen human cognitive, social and cultural skills. When functions are shared between humans and AI systems, human-centered design principles should be followed and meaningful opportunity for human choice should be preserved. This means securing human oversight over the work processes of AI systems. AI systems may also fundamentally change the sphere of work; they should support humans in the working environment and aim at the creation of meaningful work.

b. Prevention of Harm

AI systems must not harm or otherwise adversely affect humans. This includes protecting human dignity as well as human mental and physical integrity. In this respect, AI systems and the environments in which they operate must be secure and safe. They must be technically robust and not open to malicious use. Vulnerable people, such as children, should be given more attention and consideration in the development, deployment and use of AI systems. Consideration should also be given to situations where AI systems may cause adverse impacts due to power or information asymmetries, such as between employers and employees, businesses and consumers, governments and citizens. The principle of prevention of harm applies to nature and all living beings, not just humans.

c. Fairness

Justice and equity are complex and difficult to define, because what counts as fair differs across cultures and can change with practice over time. In this context, determining appropriate ways to use digital technology is deeply shaped by the lawmaker’s own understanding of social justice.

The substantive dimension of fairness refers to the equitable and fair distribution of benefits and costs, ensuring that individuals and communities are not subject to unfair prejudice, discrimination or stigmatization. In this way, equality of opportunity in terms of access to education, goods, services and technology can also be realized. Likewise, the use of AI systems should not lead to people being deceived or to their freedom of choice being unfairly restricted. In addition, the principle of proportionality between means and ends must be respected and a fair balance must be struck between conflicting interests: measures taken to achieve an end (e.g. data extraction measures applied to perform an AI optimization function) should be limited to what is strictly necessary, and when more than one measure could achieve the objective, the one least contrary to fundamental rights and ethical norms should be preferred.
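
To make the shape of that proportionality test concrete, here is a minimal Python sketch. Everything in it (the Measure type, the rights_impact score and the decision rule) is a hypothetical illustration, not a procedure taken from the Charter or any guideline:

```python
from dataclasses import dataclass

@dataclass
class Measure:
    """A hypothetical candidate measure for achieving a given objective."""
    name: str
    achieves_objective: bool   # is the measure actually sufficient for the stated end?
    rights_impact: float       # assumed scale: 0.0 (no impact) to 1.0 (severe impact)

def choose_proportionate_measure(candidates):
    """Illustrative proportionality rule: among the measures that achieve the
    objective, prefer the one that is least intrusive on fundamental rights."""
    sufficient = [m for m in candidates if m.achieves_objective]
    if not sufficient:
        return None  # nothing achieves the end, so collecting data anyway fails the necessity test
    return min(sufficient, key=lambda m: m.rights_impact)

options = [
    Measure("full browsing-history extraction", True, 0.9),
    Measure("aggregated, pseudonymized usage statistics", True, 0.3),
    Measure("no data collection at all", False, 0.0),
]
print(choose_proportionate_measure(options).name)
# -> aggregated, pseudonymized usage statistics
```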

The procedural dimension of fairness requires the ability to challenge, and to seek effective redress against, decisions made by AI systems and by the humans who operate them. For this to be possible, the person or organization responsible for the decision must be identifiable and the decision-making process must be explainable.

d. Explicability

Explicability means that processes should be transparent, that the capabilities and purpose of AI systems should be openly communicated to those concerned, and that decisions made by the system should, to the extent possible, be explainable to those directly and indirectly affected by them.

The extent to which explicability is needed often depends on how severe the consequences would be if the output generated by the system were wrong or inaccurate. For example, erroneous shopping advice from an AI recommendation system is far less consequential than an erroneous assessment by an AI system of whether a convict should be released on probation.
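
One minimal way to picture this proportionate approach is a mapping from the severity of a decision to the depth of explanation owed to those affected. The tiers and duties in the sketch below are illustrative assumptions, not requirements drawn from any regulation or guideline:

```python
from enum import Enum

class Severity(Enum):
    LOW = 1     # e.g. shopping recommendations
    MEDIUM = 2  # e.g. adjusting a customer's credit limit
    HIGH = 3    # e.g. probation or medical decisions

# Hypothetical mapping: the more severe the consequences of an error,
# the richer the explanation owed to the people affected.
EXPLANATION_DUTIES = {
    Severity.LOW: [
        "disclose that an AI system is involved",
    ],
    Severity.MEDIUM: [
        "disclose that an AI system is involved",
        "describe the main factors behind the decision",
    ],
    Severity.HIGH: [
        "disclose that an AI system is involved",
        "describe the main factors behind the decision",
        "provide an individual, human-reviewable justification",
        "offer a route to challenge the decision",
    ],
}

def required_explanations(severity):
    """Return the illustrative explanation duties for a given decision tier."""
    return EXPLANATION_DUTIES[severity]

print(required_explanations(Severity.HIGH))
```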

e. Conflicts between Principles

There may be conflicts between the above principles. For example, “predictive policing” can help reduce crime, but it requires surveillance activities that intrude on people’s liberty and privacy; here the principle of prevention of harm conflicts with the principle of respect for human autonomy. Because these principles are abstract ethical prescriptions, it can be very difficult for AI practitioners to derive the right solution from them alone. To overcome such conflicts, it is important to determine which principle takes precedence in a given situation, and trade-offs and balancing mechanisms can be used to resolve the resulting ethical dilemmas. The question is how the trade-offs should be made. Whether one of the conflicting principles is prioritized over or balanced against the other depends on the context and on the values that prevail in that context, and the problem has to be solved in a transparent, justifiable and evidence-based manner, in accordance with the rules of logic. However, some fundamental rights and the principles tied to them, such as human dignity, are absolute and should not be subject to this kind of trade-off and balancing.
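
The structure of such a balancing exercise can be sketched in a few lines of code. The principle names, numeric scores and context weights below are illustrative assumptions; the only element taken from the text above is that dignity-type principles act as hard constraints rather than weights to be traded off:

```python
# Illustrative sketch: absolute principles act as hard constraints that can
# never be traded off, while the remaining principles are balanced with
# context-dependent weights that must be justified transparently.
ABSOLUTE = {"human_dignity"}

def acceptable(option):
    """Reject any option that violates an absolute principle."""
    return not any(p in ABSOLUTE and violated
                   for p, violated in option["violations"].items())

def balance(options, weights):
    """Among acceptable options, pick the one with the best weighted score."""
    admissible = [o for o in options if acceptable(o)]
    if not admissible:
        return None
    return max(admissible,
               key=lambda o: sum(weights[p] * s for p, s in o["scores"].items()))

# Hypothetical predictive-policing example: the weights express which
# principle prevails in this particular context.
weights = {"prevention_of_harm": 0.6, "human_autonomy": 0.4}
options = [
    {"name": "blanket surveillance of a neighborhood",
     "violations": {"human_dignity": True},
     "scores": {"prevention_of_harm": 0.9, "human_autonomy": 0.1}},
    {"name": "targeted, court-authorized monitoring",
     "violations": {"human_dignity": False},
     "scores": {"prevention_of_harm": 0.7, "human_autonomy": 0.6}},
]
print(balance(options, weights)["name"])
# -> targeted, court-authorized monitoring
```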

Written by Gurkan Coskun

Lawyer, PhD. Writing about Digital Law and AI.
