The Relationship Between Human and Artificial Intelligence
Like any new technology, artificial intelligence is the subject of both hopes and fears, and what it covers today raises great challenges (Villani et al., 2018). It also poses deep questions about our own humanity. Will the machine exceed the intelligence of the humans who designed it? What is the relationship between what is called artificial intelligence and our human intelligence?
In recent work (2017), Jean-Gabriel Ganascia answers the first question: he shows, quite simply, that an algorithmic intelligence called artificial (he speaks of “technical artificial intelligence”) is developing, whose performance is genuinely disrupting our society, because we live in the age of algorithms (Abiteboul and Dowek, 2017). However, the idea of a strong artificial intelligence that surpasses human intelligence is neither true nor false; it is a belief, because it is not supported by scientific arguments. It turns out to be in the interest of those who dominate the digital marketplace to make us believe it, and in the interest of audience-seeking media to relay this belief.
Critical thinking and creativity
So before worrying about our skills in this world of supposed artificial intelligence (let us rather say that artificial intelligence will disrupt the economy, and name the established scientific facts differently), we must sharpen our key human skills: critical thinking and creativity. First, to exercise critical thinking, we should start by questioning the expression “AI” itself.
Is the term “intelligence” relevant for designating computer applications based, in particular, on machine learning? These algorithms aim to develop systems capable of capturing, processing, and reacting to (massive amounts of) information, through mechanisms that adapt to the context or to the data in order to maximize the chances of achieving the objectives defined for the system.
This behavior, which may seem “intelligent”, was created by humans and has limits linked, on the one hand, to the current human capacity to design effective machine learning systems and, on the other hand, to the availability of the massive data these systems need in order to adapt. The observation is that these systems are more efficient than humans on very specific tasks, such as the recognition of sounds or images or, recently, reading tests like the Stanford Question Answering Dataset (SQuAD).
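To make concrete what “adapting to data to maximize an objective” means, here is a minimal sketch in Python. It is an illustrative toy (a linear model fitted by gradient descent), not any of the systems discussed above: the behavior that can seem “intelligent” is nothing more than a loop adjusting parameters to reduce an error defined by its human designers.

```python
# Toy sketch of "machine learning" as adaptation to data: a system that
# adjusts its parameters to minimize prediction error on examples.
# The model, data, and names are illustrative assumptions, not a real system.

def train(examples, steps=2000, lr=0.01):
    """Fit y ~ w * x + b by gradient descent on the mean squared error."""
    w, b = 0.0, 0.0
    n = len(examples)
    for _ in range(steps):
        grad_w = grad_b = 0.0
        for x, y in examples:
            err = (w * x + b) - y       # how wrong the current guess is
            grad_w += 2 * err * x / n   # gradient of the error w.r.t. w
            grad_b += 2 * err / n       # gradient of the error w.r.t. b
        w -= lr * grad_w                # adapt the parameters to the data
        b -= lr * grad_b
    return w, b

# Data generated by the rule y = 2x + 1; the system "learns" to approximate it.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)
```

The "intelligence" here is entirely contained in the objective and the update rule chosen by the human who wrote them; fed different data, the same loop would adapt to a different rule, within the limits its designer fixed.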
Does performing better on a reading test mean being able to understand, in the human and intelligent sense of the term, the text that is read? The statistical capacity to identify responses may seem intelligent, but there is no evidence that it is intelligence in the critical and creative sense of humans.
What should we learn today? The skills of the 21st century take into account the pervasiveness of digital technology and the need to strengthen human development, both in terms of attitudes (tolerance of ambiguity, tolerance of error, and risk-taking) and in our relationship to knowledge and technology.
When Turing, in the 1950s, proposed a test based on a purely linguistic confrontation between a human and another agent, which could be a machine or another human, he was not targeting the intelligence of the machine, but the intelligence we might attribute to it.
If the human believes they are interacting with a human agent rather than a machine, the artificial intelligence test is considered passed. But can we be content with a good ability to sustain a human conversation to consider that a machine is intelligent?
Defining human intelligence
If we consider intelligence as the ability to learn (Beckmann, 2006) and learning as an adaptation to the context (Piaget, 1953), it would be possible to consider systems capable of improving their adaptation to the context through the collection and processing of data as intelligent. However, if we consider intelligence as the “capacity to solve problems or create solutions that have value in a given socio-cultural context” (Gardner and Hatch, 1989, p. 5), under this diversified and dynamic approach, it is more difficult to consider that a system, however adaptive and however massively fed with data, can make a metacognitive judgment on its processes and its products in relation to a given socio-cultural context.
Gardner and Hatch’s definition of human intelligence is very similar to that of creativity as a process for designing a solution deemed new, innovative, and relevant in connection with the precise context of the problem situation (Romero, Lille, and Patino, 2017).
Intelligence is therefore not the ability to perform according to pre-established or predictable rules (even with adaptation or machine-learning mechanisms on data), but rather the ability to create something new by demonstrating sensitivity and adaptation to the socio-cultural context, and empathy, at both intra- and inter-psychological levels, towards the different actors. This involves understanding human and socio-historical nature in order to judge one’s own processes and creations independently.
If we take this second, critical and creative approach to intelligence, we should be careful about using the term AI for solutions that “merely” adapt according to pre-established mechanisms and cannot generate a self-reflexive value judgment or a socio-cultural perspective.
Machine learning systems that are labeled AI can be very successful based on very sophisticated models fed with massive data, but they are not “intelligent” in the critical and creative way of humans.
So my phone can learn to recognize the words I dictate by voice, even with my accent, which it will handle all the better the more I use the system. But attributing real intelligence to it in the face of my voice dictation stems from a subjective projection, that is to say, from a belief.
Developing critical thinking
We can also question “the intelligence of AI” in relation to the critical thinking that characterizes human intelligence. As part of the #CoCreaTIC project, we define critical thinking as the capacity for independent critique, which allows the analysis of ideas, knowledge, and processes in relation to one’s own system of values and judgments.
It is a responsible form of thought, based on criteria and sensitive to the context and to others. If, on the other hand, we think of algorithmic learning systems, and of the politically incorrect results they have produced when confronted with images and textual responses that can be classified as discriminatory, we must neither fear, condemn, nor accept this output, because it has no moral value. The most likely explanation is that by “learning” from human data, the mechanism reproduces racist and sexist elements, with no clear value system. This so-called AI has no responsible thinking of its own; it exacerbates certain drifts that humans are capable of producing, but also of limiting and correcting through their criteria and their sensitivity to others.
The French national education system has also proposed its own attempt, itself open to criticism, to define the critical spirit, notably in the form of an educational resource.
In the #Villani Report, critical thinking is mentioned in relation to these technologies, both in its ethical aspects and in connection with the need to develop “critical thinking” about these subjects in education.
The report also highlights the importance of creativity in education as a way of preparing citizens for the challenges opened up by these algorithms. Digital education, particularly through critical, creative, and participatory approaches, can also help develop a relationship with computing that allows citizens to demystify AI, develop an ethical standard, and adopt an enlightened attitude (acceptance, or not, of these technologies in their personal, social, or professional activities).
For these reasons, the development of computational thinking skills is also an asset that complements the need to develop critical and creative thinking in the digital world.
The lever of computational thinking
In 2006, Jeannette Wing coined the term “computational thinking” for the ability to use computer science processes to solve problems in any domain. Wing presents computational thinking as a set of universally applicable attitudes and skills that go beyond the use of machines.
To develop it, learners (from kindergarten on, and of all ages) can combine learning the computing concepts and processes that make up “digital literacy” (object, attribute, method, design manager, etc.) with a creative problem-solving approach that uses those concepts and processes (Romero, Lepage, and Lille, 2017).
Projects like Class’Code in France or CoCreaTIC in Quebec have developed resources and a community to support this approach, in which the point is not to learn “coding” (in the sense of writing code in a computer language) step by step, but to solve problems in a creative and context-sensitive way.
In other words, going beyond coding anchors learners in a broader approach of creative programming. This engages them because it is a critical and creative problem-solving process that draws on computing concepts and processes.
It is not about coding for coding’s sake, or writing lines of code one after the other, but about developing an approach to solving complex problems that involves a reflexive and empathetic analysis of the situation, its representation, and the operationalization of a solution, drawing on the metacognitive strategies linked to computational thinking.
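As a toy illustration of this decompose-represent-solve approach (the problem, function names, and steps below are our own assumptions, not material from Class’Code or CoCreaTIC), even a small program can make the stages of computational thinking visible:

```python
# Toy illustration of computational thinking applied to a small problem:
# "which word appears most often in a text?" Each function is one stage
# of the decomposition; the names and problem are illustrative only.

def normalize(text):
    """Abstraction: reduce messy input to the features that matter here."""
    return [word.strip(".,;!?").lower() for word in text.split()]

def count(words):
    """Decomposition: solve one sub-problem, counting occurrences."""
    freq = {}
    for word in words:
        freq[word] = freq.get(word, 0) + 1
    return freq

def most_frequent(text):
    """Composition: assemble the sub-solutions into an answer."""
    freq = count(normalize(text))
    return max(freq, key=freq.get)

result = most_frequent("To code, or not to code?")
```

The value of the exercise is not in the lines of code themselves but in the analysis they force: choosing what to abstract away (punctuation, case), how to represent the situation (a word-frequency table), and how to operationalize a solution from those choices.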
The development of a critical and creative approach to digital technology through computational thinking allows learners to go beyond the posture of users who might perceive AI as a black box full of mysteries, dangers, or unlimited hopes.
Understanding the challenges of problem analysis, in connection with problem situations anchored in specific socio-cultural contexts (for example, migration issues), is a way of seeing computer science as both a science and a technology: one that, within the limits and constraints of our modeling of a problem, tries to provide answers fed by ever more massive data, without those answers being considered relevant or valuable without the commitment of human judgment.
For an education that prepares us to live in the digital age
As the #Villani Report points out, we need more critical and creative education in the face of the emergence of AI.
But we also need a digital culture better informed by computer science, so that citizens (young and old) can understand the human factor in the modeling and creation of artificial systems, the basic operation of algorithms and machine learning, and the limits of AI with respect to the judgment needed to assess the value of the solutions algorithms produce.
For enlightened citizenship in the digital age, we need to keep sharpening our critical and creative thinking and our collaborative problem-solving, while adding a new string to our bow: the development of computational thinking.
Margarida Romero, Associate Professor at Laval University and Director of the LINE Laboratory at the ESPE in Nice, Côte d’Azur University.