The critical thinking principle in algorithmic rationality

My educational background in data processing, dating back to the late 1990s, led me on a journey entirely dedicated to information technology, a field belonging to technoscience, and it has influenced my professional choices as well as the businesses and social causes I have been involved with up to this point. But what has been attracting me lately is something beyond this field: the relevance of critical thinking in the Age of Information.

With the exponential growth in the volume, variety and speed of data (the Big Data phenomenon), it became obvious that this rapid technological advance was restricted to a few groups. This is understandable, because it is a field that requires mastering specific terminologies and technical skills. But this segmentation ended up limiting the scope of the advances made: it limited both the number of people and the diversity of social profiles involved in these projects. As a result, the thinking behind the implementation and execution of information technology in social and business environments also ends up restricted (or limited). This creates a paradox, because in the times we are living in, intelligent machines are one with us and they function analytically, but we still don't.

An artificial intelligence system rests on two fundamental components. The first is technical: the code and technological resources that make the automatic and systematic reproduction of tasks possible. The second is a set of predefined rules and tasks that establish how activities are performed by those technological resources. When the rules are inadequately elaborated, whether through failures in data selection, modeling issues, inconsistency in systemic tasks or incomplete instructions, the risk of negative results increases. Failures in technical resources are easily traceable, mainly because they usually relate to code or mechanical issues; so when an artificial intelligence system fails, it is possible to trace the malfunction when its nature is technical or related to manufacturing. When the malfunction has to do with programming, a poorly executed procedure or an inadequate configuration, the puzzle becomes harder to solve. After all, many people get involved in the creation of a specific rule that, sometimes, lacks a proper record of its own creation. In cases like this, algorithmic rationality is compromised at its very conception.
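The separation between the technical layer and the rule layer, and the value of recording a rule's provenance, can be illustrated with a minimal sketch. Everything here (the credit-decision scenario, the rule names, the analysts) is a hypothetical example invented for illustration, not something taken from the text:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A predefined rule, carrying a record of its own creation."""
    name: str
    author: str          # who defined the rule
    rationale: str       # why it was created
    condition: Callable[[dict], bool]

def decide(case: dict, rules: list[Rule]) -> tuple[bool, list[str]]:
    """Technical layer: mechanically applies the rules, keeping an audit trail."""
    trail = []
    for rule in rules:
        passed = rule.condition(case)
        trail.append(f"{rule.name} (by {rule.author}): {'pass' if passed else 'fail'}")
        if not passed:
            return False, trail
    return True, trail

# Hypothetical rule set for an illustrative credit decision
rules = [
    Rule("min_income", "analyst_a", "reduce default risk",
         lambda c: c["income"] >= 2000),
    Rule("max_debt_ratio", "analyst_b", "avoid over-indebtedness",
         lambda c: c["debt"] / c["income"] <= 0.4),
]

approved, trail = decide({"income": 3000, "debt": 600}, rules)
```

When a decision is contested, the audit trail points back to the specific rule and its author, which is exactly the record whose absence makes real malfunctions so hard to trace.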

In translating human rationality into code to be replicated, we use the logic of algorithms. With algorithmic rationality, systems take center stage in decision making. We find examples of this in our everyday lives all the time: when a recommendation pops up on our social media accounts, or when our demands are met by a virtual assistant installed in our homes or cars. On this, researcher Fernanda Bruno remarks:

…as it is now, we judge machines to be more efficient than humans not only at executing certain tasks, but even at processing and creating information and at deciding for us what is reliable and what is not.

Nevertheless, we have to remember that these intelligent algorithms are not fully responsible for their actions. We are only transferring some of our tasks to them. The ones who should answer for such actions are their creators. Fernanda Bruno continues:

Our ideals of objectivity originate in the idea of mechanization and automation, especially since the 19th century. The ascendance of the algorithm as a perfect model for decision making is related to an epistemological change that marks the transition from a rational model based on critical thinking to a model where rationality is based on algorithmic patterns. One thing is clear: this transition is not yet complete; in reality, these two models are juxtaposed.

During the pandemic, this paradox was part of many people's lives, whether they liked it or not. Leaders and governors established rules to ensure the proper functioning of the social ecosystem. In their decision-making process, they used individual premises or based their actions on the decisions of small groups (which may serve as an example of algorithmic rules). From their choices, several technological resources were put to work using algorithmic automation to ensure things would function properly. In several parts of the world, these intelligent systems are still active, tracking the spread of biological viruses and demanding that individuals constantly update health-related information in order to move freely from one place to another. The paradox here relates to establishing parameters, i.e., defining rules: when adequately managed, this kind of system can help protect people's wellbeing; however, it can also be used as a surveillance device or a tool of social control by those in power. The algorithms in this case, even if considered intelligent, can be controlling and invasive.

Decision making in the Age of Information, when technoscience prevails for its reach and potential, is easy when compared to defining the proper rules to be encoded in these devices. If rules establish the parameters of a system, we have to be aware of the ethical issues involved. After all, when automatically reproduced, these rules will affect a huge number of people simultaneously.

Responsibility for this algorithmic rationality has to be traceable and attributed entirely to the creator, for its effects can be far-reaching, even uncontrollable. Critical thinking has a fundamental role here, and it should be one of the main concerns in the design of an artificial intelligence system. In times when humans and machines are almost one, without critical thinking applied to machine making we are at risk. When installed in intelligent systems, the critical thinking principle will serve as a proper guide for action. It may even allow for constant updating and upgrading and, if necessary, it might be able to shut the system down completely. The fact is: without a rational thinking device installed, these systems will inevitably fail.
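One way to picture a "critical thinking principle installed in the system" is a guard that monitors an automated process and halts it when outcomes drift outside acceptable bounds. The sketch below is a hypothetical illustration of that idea (the class name, the error-rate threshold and the scenario are all assumptions, not from the text):

```python
class CriticalGuard:
    """Monitors an automated system's decisions and can shut it down."""

    def __init__(self, max_error_rate: float):
        self.max_error_rate = max_error_rate
        self.decisions = 0
        self.errors = 0
        self.active = True

    def record(self, was_error: bool) -> None:
        """Log one decision; deactivate the system if the observed
        error rate exceeds the acceptable bound (after a minimum sample)."""
        self.decisions += 1
        if was_error:
            self.errors += 1
        if self.decisions >= 10 and self.errors / self.decisions > self.max_error_rate:
            self.active = False  # the "shutdown" the essay alludes to

guard = CriticalGuard(max_error_rate=0.2)
# Simulated stream of outcomes: True marks an erroneous decision
for outcome in [False] * 8 + [True] * 4:
    if not guard.active:
        break  # the guard has halted the system
    guard.record(outcome)
```

The design choice worth noting is that the guard is separate from the decision logic itself: the criterion for stopping is stated explicitly, so it remains traceable to whoever defined it.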

References

Fernanda Bruno. Interview: Tecnopolítica, racionalidade algorítmica e mundo como laboratório. Available at: http://www.ihu.unisinos.br/78-noticias/594012-tecnopolitica-racionalidade-algoritmica-e-mundo-como-laboratorio-entrevista-com-fernanda-bruno

Edgar Morin. O método. Ética. Sulina, 2017.

Ricardo Cappra. Talk: Algoritmos: você está no controle? Available at: https://youtu.be/wBz-xWPo1Fc

Ricardo Cappra. Rastreável: redes, vírus, dados e tecnologias para proteger e vigiar a sociedade. Actual, 2021.

Ricardo Cappra