What Is AI? Exploring the Boundaries of Artificial Intelligence and Human Cognition

I. What is AI?

AI stands for Artificial Intelligence and refers to a machine’s ability to think like human beings.

The only way to understand a system as complex as the brain is by chunking it at higher and higher levels, thereby losing some precision at each step. What emerges at the top level is an “informal system” that obeys so many rules of such complexity that we do not even have the vocabulary to think about them. And that is what Artificial Intelligence research is hoping to find.

II. What AI Is Not

A. Neural Networks and AI

While studying for my master’s degree, I was particularly fascinated by Neural Networks and how they attempted (unsuccessfully, in the end) to reduce the complex machinery of the brain to the simple, precise, and elegant mathematical model of a McCulloch-Pitts neuron. The idea that neural nets could be trained to approximate almost any function was fascinating.

Later, I spent a summer working on a neural network application for face detection. While it did not achieve spectacular success (this was two decades ago), it provided genuine insights into the nature of intelligence and why AI still had a long way to go. I wrote the algorithms myself in Java. Seeing the weights converge and the cost function minimized was exhilarating. However, the more I learned about the inner workings of the algorithm, the less magical it seemed. Once I became familiar with the technical details of back-propagation and gradient descent, the charisma and aura of neural nets disappeared.
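The training loop described above can be sketched in plain Python. The snippet below uses a hypothetical toy task (classifying points against the line y = x), not the original Java face-detection code, but it shows the same moving parts: a sigmoid neuron, a squared-error cost, and gradient-descent weight updates.

```python
import math
import random

# A single sigmoid neuron trained by batch gradient descent on a toy
# "is the point above the line y = x?" task. Hypothetical data, purely
# illustrative -- not the original face-detection application.
random.seed(0)
points = [(random.random(), random.random()) for _ in range(200)]
data = [((x, y), 1.0 if y > x else 0.0) for x, y in points]

w1, w2, b = 0.0, 0.0, 0.0   # weights and bias
lr = 0.5                    # learning rate

def predict(x, y):
    z = w1 * x + w2 * y + b
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

for epoch in range(2000):
    g1 = g2 = gb = 0.0
    for (x, y), target in data:
        p = predict(x, y)
        # Gradient of the squared error through the sigmoid.
        d = 2 * (p - target) * p * (1 - p)
        g1 += d * x
        g2 += d * y
        gb += d
    n = len(data)
    w1 -= lr * g1 / n   # watching these weights converge is the
    w2 -= lr * g2 / n   # exhilarating part -- until you see how
    b -= lr * gb / n    # mundane the update rule really is

accuracy = sum((predict(x, y) > 0.5) == (target == 1.0)
               for (x, y), target in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

Back-propagation proper extends this same chain-rule bookkeeping through multiple layers; nothing more magical is involved.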

B. What Is Intelligence?

When engineers put together an application that could play chess, they realized that intelligence was not the ability to mechanically scan hundreds or thousands of candidate moves to select the best one. The same happened with object detection and robot control applications. With every new milestone, the goal seemed to move a little further away.

In this sense, while it is much harder to define intelligence precisely, we can easily list problems that we once thought required intelligent beings to solve but now know can be solved with regression models, neural networks, or large language models.

III. Can Machines Think?

Turing even offered a prediction: “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

A. Thinking vs Computing

Machines (or computers) can compute, i.e., run calculations, really fast. For example, a pocket calculator can divide two numbers and supply an answer with a precision of ten decimal places. A personal computer can provide twenty, a hundred, or many thousands of decimal places in a fraction of a second.

However, most will agree that dividing two numbers is hardly intelligent; the method (or algorithm) exists and is simple to implement in any programming language. Simple but fast calculations, which are what machines do exceptionally well, are not intelligence.
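Python’s standard `decimal` module makes the point concrete: dialling the precision of a division up or down is a one-line configuration change, not a feat of intelligence.

```python
from decimal import Decimal, getcontext

# Arbitrary-precision division: fifty significant digits of 1/7.
getcontext().prec = 50
print(Decimal(1) / Decimal(7))

# The same division at pocket-calculator precision.
getcontext().prec = 10
print(Decimal(1) / Decimal(7))   # 0.1428571429
```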

B. Specialized Problem-Solving

But machines can do more than arithmetic calculations. Technological breakthroughs in AI produced software that can beat chess grandmasters (Deep Blue), identify faces, classify objects, recommend books and movies, and, more recently, provide answers in natural language to almost any question with a high degree of accuracy (large language models such as GPT).

Deep Blue was a chess-playing supercomputer developed by IBM. It became famous in 1997 for defeating the reigning world chess champion, Garry Kasparov, in a six-game match.

This victory marked a significant milestone in artificial intelligence, demonstrating the potential for computers to surpass human abilities in complex cognitive tasks. Deep Blue was explicitly designed for chess and could evaluate millions of possible chess positions per second. Its ability to calculate and analyze moves quickly gave it a significant advantage over human opponents.
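Stripped of its custom hardware and hand-tuned evaluation functions, this style of play is minimax search: score every move by assuming both sides play optimally from there. A sketch on a toy Nim-like game (21 counters, take 1-3 per turn, taking the last counter wins) shows the shape of the idea; chess differs mainly in scale.

```python
from functools import lru_cache

# Minimax on a toy game: 21 counters, each player removes 1-3 per
# turn, and whoever takes the last counter wins. Deep Blue's search
# was vastly more sophisticated, but the core mechanism -- scoring
# every move by exhaustive lookahead -- is the same.
@lru_cache(maxsize=None)
def best_score(counters, maximizing):
    if counters == 0:
        # The previous player took the last counter and won.
        return -1 if maximizing else 1
    scores = [best_score(counters - take, not maximizing)
              for take in (1, 2, 3) if take <= counters]
    return max(scores) if maximizing else min(scores)

def best_move(counters):
    """Pick the move with the highest minimax score."""
    return max((take for take in (1, 2, 3) if take <= counters),
               key=lambda take: best_score(counters - take, False))

print(best_move(21))   # 1 (leaving 20, a multiple of 4)
```

Nothing in the procedure resembles deliberation; it is bookkeeping over a game tree, which is exactly why chess-playing software stopped looking intelligent once the mechanism was understood.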

IV. Is the Human Mind a Machine?

A. Two Systems With a Gap in Between

One remarkable fact about human minds is that we still don’t understand how they work. While we have precise knowledge of the underlying physical processes governing atoms, molecules, and brain cells, we know very little about how consciousness, memory, logical reasoning, and emotions emerge from those processes.

At the lowest level, we have a “hardware” system where everything works according to the laws of physics; this hardware level is well understood and predictable. At the top “software” level, we have creativity, innovation, and irrational behaviour.

Ideally, you want the software to be decoupled from the hardware so that the latter can be replaced without loss of functionality.

If the software (human intelligence) can only arise if that particular hardware is used (the brain), then machines can never think, at least not like humans.

B. Behaviourism

Before the 1950s, psychology was guided by the following rule: the mind is a black box, and the only way to understand human behaviour is by observing the actions and reactions of a human being. This framework was called Behaviourism.

Once the universal machine (or computer) became common knowledge, psychologists reformulated their methods, casting the mind not as a black box but as a data-processing machine. The sensory organs capture data about the environment, which the brain’s internal machinery then processes.

Is the mind a data-processing engine that a supercomputer could easily imitate? This is still a raging debate in the philosophy community. An unproven conjecture (the Tarski-Church-Turing Hypothesis) says it is, while experience and closer examination pose challenges. We explore both of these next.

V. Tarski-Church-Turing Hypothesis

The Tarski-Church-Turing Hypothesis is a speculative, interdisciplinary concept that ties together ideas from logic, philosophy of language, and computer science. Although it is not a formal hypothesis in the sense of a well-defined scientific or mathematical conjecture, it represents a convergence of thoughts from three prominent 20th-century intellectuals: Alfred Tarski, Alonzo Church, and Alan Turing.

The Tarski-Church-Turing Hypothesis suggests that any adequately formalized notion of mathematical truth or mechanical computation would ultimately be equivalent across different formal systems, such as logic (Tarski), computability (Church), and algorithmic procedures (Turing).

A. Origins and Key Concepts

1. Tarski’s Semantic Theory of Truth

Alfred Tarski developed a formal definition of truth based on the idea that a sentence is true if it corresponds to the facts of the world (i.e., in a model or interpretation). Tarski’s work formalized the notion of truth in formal languages, where a formula could be considered “true” under a specific interpretation if it matches the structure of the model. His work primarily deals with first-order logic and model theory.
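Tarski’s notion of truth relative to a model can be illustrated with a toy evaluator over a finite domain. The mini-language below is a hypothetical simplification, far short of Tarski’s full apparatus, but it shows the central move: a sentence is “true” only under a specific interpretation.

```python
# Truth "in a model": a finite domain plus interpretations of the
# predicate symbols. A sentence is true iff it holds under that
# interpretation. (A hypothetical mini-language, purely illustrative.)
domain = {1, 2, 3, 4}
interpretation = {
    "Even": {2, 4},
    "Small": {1, 2},
}

def holds(formula):
    kind = formula[0]
    if kind == "pred":            # ("pred", name, element)
        _, name, element = formula
        return element in interpretation[name]
    if kind == "not":
        return not holds(formula[1])
    if kind == "and":
        return holds(formula[1]) and holds(formula[2])
    if kind == "forall":          # body maps an element to a formula
        return all(holds(formula[1](x)) for x in domain)
    if kind == "exists":
        return any(holds(formula[1](x)) for x in domain)
    raise ValueError(f"unknown connective: {kind}")

# "Some x is even and small" -- true in this model (x = 2).
print(holds(("exists", lambda x: ("and", ("pred", "Even", x),
                                         ("pred", "Small", x)))))   # True
# "Every x is even" -- false in this model.
print(holds(("forall", lambda x: ("pred", "Even", x))))   # False
```

Change the interpretation and the same sentences can change truth value, which is precisely Tarski’s point.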

2. Church’s Lambda Calculus and Church-Turing Thesis

Alonzo Church introduced lambda calculus as a formal system to study computation and functions. The Church-Turing Thesis, named after Church and Turing, postulates that the notions of computability captured by different formal systems (lambda calculus, Turing machines, recursive functions) are equivalent: any function that can be computed in one of these systems can be computed in the others.
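Church’s encoding can be sketched directly in Python, where lambda expressions are first-class: a number n becomes “apply f n times”, and arithmetic becomes function plumbing. (This is the standard textbook construction of Church numerals, shown in Python rather than raw lambda calculus.)

```python
# Church numerals: numbers encoded purely as functions.
zero = lambda f: lambda x: x                          # apply f 0 times
succ = lambda n: lambda f: lambda x: f(n(f)(x))       # one more application
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
print(to_int(add(two)(two)))   # 4
```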

3. Turing’s Formalization of Computation

Alan Turing formalized the concept of computation with his abstract Turing Machine, a simple model that manipulates symbols on a tape according to a set of rules. The Turing machine became the standard model for what it means for a process to be “computable.”
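The model is simple enough to simulate in a dozen lines. The machine below is a hypothetical example that flips the bits of its input and halts; trivial, but the rule-table-plus-tape mechanism is exactly what “computable” means.

```python
# A minimal Turing machine simulator: a sparse tape, a head position,
# a state, and a rule table mapping (state, symbol) to
# (symbol to write, direction to move, next state).
def run(tape_input, rules, state="start", halt="halt"):
    tape = dict(enumerate(tape_input))   # position -> symbol
    pos = 0
    while state != halt:
        symbol = tape.get(pos, "_")      # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# Example machine: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run("1011", flip))   # 0100
```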

B. Hypothesis’ Essence

At its core, the Tarski-Church-Turing Hypothesis brings together these concepts under the speculative claim that:

  • Truth, in the sense of formal languages as defined by Tarski,
  • Effective computability, as defined by Church and Turing, and
  • Mathematical reasoning are fundamentally interrelated.

This would imply that formal truth (Tarski), logical definability (Church), and algorithmic computability (Turing) might describe the same underlying phenomenon. If true, it would suggest that any meaningful statement in a formal language that can be recognized as true (Tarski) can also, in principle, be effectively computed or verified by a mechanical procedure (Church-Turing).

C. Philosophical Implications

The hypothesis, though not rigorously proven, has profound implications for both the philosophy of mathematics and the nature of artificial intelligence. For instance:

  1. It raises questions about the nature of mathematical truth and whether such truth can always be captured algorithmically.
  2. It explores the relationship between human cognitive processes and machine computation, especially in contexts where logic, language, and computation intersect.

In essence, the Tarski-Church-Turing Hypothesis operates as a speculative bridge between logic, mathematics, and computation, suggesting that these realms might not be as distinct as traditionally thought. However, it remains largely a philosophical conjecture and has not been formally established as a rigorous theorem.

VI. Human vs Machine Intelligence

What do we intuitively recognize as intelligent behaviour that machines cannot (at least not yet) do? Here are some examples.

A. Abstraction

Abstraction is the ability to separate the valuable from the valueless, the general from the specific, the signal from the noise, and the ideal from the practical.

For example, if we look at a large sample of plants randomly picked from a garden, at first it might seem like they have nothing in common. Upon closer inspection, we find that they all have petals of different colours, and we conclude that they are all flowers. From the countless variations of plants of various shapes, sizes, colours, and ages, we are able to create a symbol of the perfect flower, an idealized model that we can manipulate in our minds but to which no real flower will ever be identical.

But we can do more than identify one abstract class of plants, in this case, “flowers”. We could have equally categorized them according to their light-gathering capabilities, and now we have another abstraction that is as real and valuable as the first. Abstraction is a task that human beings excel at intuitively.
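A sketch of the point, using hypothetical plant records: the same specimens support two equally valid abstractions, depending entirely on which attribute we choose to group by.

```python
from collections import defaultdict

# Hypothetical specimen records: the raw, particular data.
plants = [
    {"name": "rose",   "petals": True,  "light": "full sun"},
    {"name": "tulip",  "petals": True,  "light": "full sun"},
    {"name": "fern",   "petals": False, "light": "shade"},
    {"name": "violet", "petals": True,  "light": "shade"},
]

def group_by(records, attribute):
    """Abstract over one attribute, ignoring every other difference."""
    groups = defaultdict(list)
    for record in records:
        groups[record[attribute]].append(record["name"])
    return dict(groups)

print(group_by(plants, "petals"))   # the "flower" abstraction
print(group_by(plants, "light"))    # the light-gathering abstraction
```

The hard part, of course, is not the grouping but deciding which attributes are worth abstracting over in the first place, and that choice is exactly what the code cannot make for us.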

B. Hierarchies of Objectives

Imagine a room with some furniture and a faulty light bulb that needs to be replaced. A robot tasked with replacing the bulb would search the room for something to stand on. It might be able to detect a chair, a table, and a sofa, and after some deliberation, it might select the table as the sturdiest and, therefore, safest option. But it might not know that moving the table could shake the fishbowl sitting on it, potentially causing the bowl to tip over, spill its water, and leave the goldfish to perish.

This constant appraisal of the situation and reprioritization of objectives is not something that machines are great at doing. A robot trying to decide between changing the bulb or keeping the goldfish alive might remain stuck in this situation indefinitely.

C. The Halting Problem

You are on holiday on some remote island (the historical example is Crete), and a local Cretan offers you the following statement: “All Cretans are liars, including this one.” After some analysis, you find that this statement can be neither proven nor disproven by any logical train of thought. The only way forward is empirical: you could disprove the claim by finding a single Cretan who has told a single true statement, whereas proving that all Cretans always lie would require checking every statement any Cretan has ever made, an infinitely more difficult task.

Given a set of formal rules that a machine can use to decide whether a statement is true or false, an undecidable statement such as the example above can always be found within that system. Intuitively, human beings are able to halt their analysis after examining such statements for a while. Machines, however, run algorithms, and since no general algorithm exists for this scenario, they would have a hard time knowing when to stop “thinking”.
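The standard argument for why no general “does this program halt?” procedure can exist is a short diagonalization, sketched below: assume a perfect `halts` oracle and build a program that does the opposite of whatever the oracle predicts about it.

```python
# Sketch of the halting-problem diagonalization. If halts(f) could
# always tell whether f() eventually returns, the program built below
# would defeat it: whichever verdict the oracle gives about
# "contrary" is wrong.
def make_contrary(halts):
    def contrary():
        if halts(contrary):
            while True:          # oracle said "halts" -> loop forever
                pass
        return "halted"          # oracle said "loops" -> halt at once
    return contrary

# Any concrete oracle fails on its own contrary program. For the
# oracle that answers "loops forever" to everything:
contrary = make_contrary(lambda f: False)
print(contrary())   # halted -- so the "loops forever" verdict was wrong
```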

D. Levels of Meaning

An abstract concept such as “flower” does not necessarily mean the same thing in all contexts for all individuals. For example, a “flower” may designate a class of plants with specific attributes, but it can also represent (perhaps in the poet’s mind) youth, life, nature, or beauty. As such, a complex reality may be interpreted in an infinite variety of ways depending on the observer, their state of mind, and the situation they are in.

When meaning depends on the context and the interpreter, it becomes hard to codify into symbols (such as objects or parameters in an algorithm) and analyze.

E. Conclusion

For every class of problems above (abstraction, hierarchies of objectives, and the halting decision), a program can always be written that works in a particular scenario. For example, you can always assign the robot two objectives that it must satisfy simultaneously: keeping the goldfish alive and changing the light bulb. You can always place a limit on recursive functions so that the halting problem vanishes. In every such case, however, the solution is specific to the issue at hand (and therefore does not generalize easily) and is much less impressive than what human minds can do.
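The “place a limit” workaround looks like this in practice: a depth-limited recursive search that is guaranteed to terminate, at the price of sometimes answering “gave up”. The toy reachability problem below is hypothetical, purely to show the mechanism.

```python
# Depth-limited search: always terminates, but may give up before the
# answer is known -- the pragmatic trade against the halting problem.
def search(state, depth_left, expand, is_goal):
    if is_goal(state):
        return "found"
    if depth_left == 0:
        return "gave up"         # the budget guarantees termination
    results = [search(nxt, depth_left - 1, expand, is_goal)
               for nxt in expand(state)]
    if "found" in results:
        return "found"
    return "gave up" if "gave up" in results else "not found"

# Toy problem: can we reach 10 from 0 using +1 / +3 steps?
expand = lambda n: [n + 1, n + 3] if n < 10 else []
is_goal = lambda n: n == 10

print(search(0, 12, expand, is_goal))   # found
print(search(0, 2, expand, is_goal))    # gave up
```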

VII. Intelligence Tests for Machines

The Turing Test is a concept in artificial intelligence introduced by British mathematician and logician Alan Turing in his 1950 paper “Computing Machinery and Intelligence”. The test was designed to determine whether a machine could exhibit intelligent behaviour indistinguishable from that of a human. It addresses the question Turing posed: “Can machines think?”

A. Structure of the Turing Test

The test involves three participants:

  • Human Interrogator: The judge, who asks questions to two hidden participants.
  • Human Participant: A person answering questions.
  • Machine Participant: A computer program that tries to simulate human-like responses.

The interrogator is tasked with determining which of the two respondents is human and which is the machine, based solely on their text-based responses. If the interrogator cannot reliably distinguish the machine from the human, the machine is considered to have passed the Turing Test.
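The protocol can be sketched as a toy simulation. The participants below are hypothetical canned-reply stand-ins, purely to show the structure: when the machine’s replies are indistinguishable from the human’s, the interrogator can do no better than chance, which is precisely the condition for passing.

```python
import random

random.seed(1)

# Hypothetical respondents: this "machine" mimics the human perfectly,
# which is the limiting case the test is designed to probe.
def human(question):
    return "Hmm, " + question.lower().rstrip("?") + "? Hard to say."

def machine(question):
    return "Hmm, " + question.lower().rstrip("?") + "? Hard to say."

def interrogate(questions):
    # Hide the respondents behind anonymous labels.
    respondents = [("A", human), ("B", machine)]
    random.shuffle(respondents)
    transcript = {label: [reply(q) for q in questions]
                  for label, reply in respondents}
    # The interrogator's only evidence is the transcripts. Identical
    # transcripts leave nothing to go on but a coin flip.
    if transcript["A"] != transcript["B"]:
        guess = "B"              # (never happens in this toy setup)
    else:
        guess = random.choice(["A", "B"])
    answer = next(label for label, reply in respondents
                  if reply is machine)
    return guess == answer

trials = [interrogate(["Do you dream?"]) for _ in range(1000)]
print(sum(trials) / len(trials))   # close to 0.5: pure chance
```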

B. Key Aspects

  • Imitation Game: Turing’s original formulation involved an “imitation game” in which the machine and the human attempted to convince the interrogator of their humanity.
  • Focus on Behaviour: The test doesn’t concern itself with whether a machine has consciousness or actual understanding. Instead, it evaluates whether the machine can convincingly mimic human conversation.

C. Criticism and Variations

Philosophers and AI researchers have debated whether passing the Turing Test is a sufficient measure of “intelligence.” One famous critique is John Searle’s Chinese Room argument, which suggests that a machine could pass the test by manipulating symbols without any understanding of them.

D. Importance

The Turing Test remains a key conceptual benchmark in discussions about artificial intelligence. While modern AI research focuses on various forms of intelligence (such as reasoning, learning, and perception), the Turing Test is still used as a way to explore the boundaries between human cognition and machine behaviour.

VIII. Why Is AI Important?

The world seems to be entering a new age, the age of AI. We can see intelligent algorithms in marketing (recommendation tools in online shopping websites and social media), health (skin cancer detection), automotive (self-driving cars), and banking (credit risk scoring, fraud detection), just to name a few.

A. Risks and Benefits of AI

The risks and benefits of AI are not yet fully understood. This might seem an exaggeration, given how much we know about the technology and science of artificial intelligence, but here are some reasons why our understanding may be shallower than it appears.

1. AI Literacy

Not many people understand artificial intelligence well enough to make an educated decision about whether to use it and whether it is good for them. Domain illiteracy in AI is high, and this is not surprising given the advanced technological concepts involved.

2. Artificial Intelligence as a Tool

Artificial intelligence is a powerful tool, but like many similarly powerful tools, it does not come with safety instructions or harm-prevention mechanisms; it can be used for good or for ill.

B. AI and Complex Social Groups

Because of the way a complex system such as a society (or any social organisation) works, the causal relationships between events cannot be fully separated, analyzed, and understood. Instead, a complex web of relationships controls and directs the system’s evolution. This is particularly interesting when AI is injected into this network, giving rise to powerful feedback loops that are hard to follow and whose consequences are difficult to predict.

IX. Conclusion

With every discovery or feat of engineering in artificial intelligence, two things seem to happen. First, we are utterly amazed at what technology can offer. Second, once the concepts behind the discoveries become widely understood, a state of disillusionment descends, and we have a better idea of what human intelligence is not rather than what it is. We hope that this article has given the reader some idea of what AI is and why it is (at least not yet) human intelligence.
