Ethical systems for Artificial Intelligence

When judging a technological system for its ethics, it is fundamental to revisit the process of its conception. What shapes the active behavior of the system is a set of premises and pre-established rules, combined with the culture of its environment and an ongoing routine of control, updates, and support.

This fusion of components, if observed carefully, helps us realize that the same goes for a social system composed of humans. In the early moments of life, an individual is exposed to a set of family values; this “original setting” is then constantly updated by the ethical premises and values of the school they attend, the neighborhood they live in, their friends, and other social networks.

Artificial Intelligence (AI) is basically a repository of information and norms created by a group of people, which makes it a kind of autonomous system strictly bound to pre-established protocols and information. As a concept, autonomy has long been one of the main topics of moral debate and is usually defined as the ability to make decisions based on available information. The premise of autonomy does not prevent certain historical traits from being carried over into AI devices; after all, when it comes to ethics, their design is deeply embedded in the environment that produced them. Perhaps the greatest difference between AI and us is that humans can change certain behaviors, adjusting them toward some version of wellbeing. The machine, on its own, cannot.
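To make this concrete, here is a deliberately minimal sketch (every name and threshold in it is hypothetical, not drawn from any real system): the function decides “autonomously” with the information it receives, yet every decision is bounded by premises its creators fixed in advance, and nothing in the code can revise them.

```python
# Minimal hypothetical sketch: an "autonomous" decision function whose
# behavior is entirely fixed by pre-established rules chosen by its creators.

LOAN_RULES = {
    "min_income": 50_000,  # premise set at design time
    "min_score": 650,      # premise set at design time
}

def decide(applicant: dict) -> str:
    """Decides 'autonomously' from the available information, but only
    within the rules above; the function cannot revise its own premises."""
    if (applicant["income"] >= LOAN_RULES["min_income"]
            and applicant["score"] >= LOAN_RULES["min_score"]):
        return "approve"
    return "deny"

print(decide({"income": 72_000, "score": 700}))  # approve
print(decide({"income": 40_000, "score": 700}))  # deny, by design
```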

If we apply this logic to corporate business, it becomes mandatory to observe the group of individuals who make up the workplace's social system, which is largely shaped by people's behavior. From this, we can parameterize the ethics of the environment. When this analysis is not carried out, we run the risk of ethical incongruences being transferred into the system and turned into automated routines. To exemplify this, think of a social system that was not originally diverse and is later updated with input meant to make it more diversified: this can generate a series of problems, since the system did not have diversity embedded in its original design and is now being fed diverging traits. The result is systemic failure - of the original design, not of the input.
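A minimal sketch of this failure mode, with a hypothetical registration rule standing in for the original design (the pattern, names, and messages are illustrative only):

```python
import re

# Hypothetical sketch: the original design assumed a narrow population.
# The rule below was adequate for the data its creators had in mind, but
# it hard-codes that narrowness as a constraint on every future input.
NAME_PATTERN = re.compile(r"^[A-Za-z ]+$")  # ASCII-only assumption

def register(name: str) -> str:
    if not NAME_PATTERN.match(name):
        # The failure surfaces here, but its cause is the original design.
        raise ValueError(f"invalid name: {name!r}")
    return f"registered {name}"

print(register("Alice Smith"))  # fits the original assumption
try:
    print(register("José da Silva"))  # a valid, more diverse input
except ValueError as err:
    print(err)  # the design fails, not the input
```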

To act on this, we can form committees tasked with understanding behavioral patterns, establishing premises, and properly implementing them. This is fundamental at a time when humans and machines are in constant interaction. These committees will test hypotheses, try to predict malfunctions, and monitor the evolution of the device while it is active in the system. A system of governance is mandatory to ensure equal participation in how these systems work and how they are designed and built. Technological systems are built on repeated tasks, whether those abilities come from the original design or from tasks already executed.
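As a rough illustration of what such monitoring might automate (the premise, the threshold, and the data below are all hypothetical), a committee could compare a live device's decisions against a pre-established premise and flag drift for human review:

```python
from collections import defaultdict

# Hedged sketch of committee monitoring: the premise here is that approval
# rates across groups should not diverge by more than 10 percentage points.
MAX_RATE_GAP = 0.10  # premise established by the committee

def audit(decisions: list[tuple[str, str]]) -> list[str]:
    """decisions: (group, outcome) pairs observed while the device is active."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        approvals[group] += outcome == "approve"
    rates = {g: approvals[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    if gap > MAX_RATE_GAP:
        return [f"flag for committee review: approval-rate gap {gap:.0%}"]
    return []

log = [("A", "approve")] * 8 + [("A", "deny")] * 2 \
    + [("B", "approve")] * 4 + [("B", "deny")] * 6
print(audit(log))  # ['flag for committee review: approval-rate gap 40%']
```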

Ethical systems, technological or social, are composed of similar elements: premises, rules, environment, culture, control, updates, and support. When these components are not properly integrated and supervised, we face the risk of behavioral malfunctions. So, to build an AI device, we have to be acutely aware of its ethical aspects and of the influence the environment may exert on it while it functions. It is the creator's responsibility.

Originally published in MIT Technology Review Brasil on January 6th, 2022.

Ricardo Cappra