Analogy Machines or Reasoning Engines?
The foundations for reasoning about computer algorithms predate the development of programming languages themselves. In fact, logic was originally developed to make arguments and discourse in natural language rigorous and structured, so that one could reason about them. The word logic itself originates from the Greek word logos, which can be loosely translated as reason, discourse, or language, and it involves the study of the rules of thought or the rules of reasoning. It made it possible to deduce conclusions from a set of premises in a logical and structured way, removing many ambiguities. For a long time, however, logic (in particular, mathematical logic) was used to advance the field of programming languages, especially to reason about the correctness of algorithms (the erstwhile AI: symbolic AI, or symbolic reasoning) given the various data structures provided by different programming languages. When you think about it, the tool (mathematical logic) that was developed to help reason about natural languages made way for tools for more constrained languages, with strict syntax and mostly strict semantics. In the last decade this has turned around: advances in algorithms, data, and computation have enabled systems that can understand natural language and attempt to reason about it.
We see two approaches to artificial intelligence:
- Reasoning Engines: Based on rules or mathematical logic that allow one to reason (that is, give mathematical proofs) by following rigorously established rules of mathematics and the semantics of languages. This was the erstwhile AI that went through several winters from the 1950s to the 1980s, when data was expensive (there were no smartphone cameras) and computation was very expensive (there were no GPU data centers).
- Analogy Machines: Based on patterns seen in large amounts of similar data. The rise of inexpensive sensors (cameras), the proliferation of computing (smartphones), and the internet enabled vast amounts of data to pattern-match (i.e., learn) from. A toy sketch contrasting the two approaches follows this list.
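To make the contrast concrete, here is a minimal Python sketch of both approaches; the facts, rules, feature values, and labels are invented for illustration and are not drawn from any real system.

```python
# 1. Reasoning engine: deduce new facts by mechanically applying
#    explicit if-then rules (forward chaining).
facts = {"rainy"}
rules = [({"rainy"}, "wet_ground"), ({"wet_ground"}, "slippery")]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'rainy', 'wet_ground', 'slippery'}: a deduced chain

# 2. Analogy machine: no rules at all; predict by finding the most
#    similar previously seen example (1-nearest-neighbor matching).
examples = [((1.0, 0.9), "slippery"), ((0.1, 0.0), "dry")]  # (features, label)

def predict(x):
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(examples, key=lambda e: dist(e[0], x))[1]

print(predict((0.9, 0.8)))  # 'slippery', by analogy to the closest example
```

The reasoning engine can justify its answer as a chain of rule applications, while the analogy machine can only point to the example it matched; that asymmetry is at the heart of the debate below.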
Is reasoning the core of intelligence, as we usually think? Or will analogy machines turn out to be the Artificial General Intelligence (AGI) that everyone is racing toward? Or will they be powerful systems that can automate several tasks and produce impactful new discoveries for mankind, like Halicin, while still unable to deduce everything that human minds are capable of? There is an intense ongoing debate among computer scientists, philosophers, neuroscientists, medical doctors, religious leaders, and of course the general public about whether the magnificent advancements in analogy machines, henceforth deep learning, will one day lead to true AGI, and what the positive and negative impacts of such a system would be.
In a recent interview, Prof. Geoffrey Hinton, a Turing Award winning computer scientist, claimed that reasoning may not be the core of intelligence, and that there are things similar to vectors in the brain: patterns of activity that interact with one another. This is similar to word embeddings in large language models (LLMs). There is a fascinating study on holographic memory by Prof. Karl Lashley, who experimented with rats' behavior in mazes and concluded that information is stored everywhere in a distributed fashion, so that even if one part of the memory is damaged, one still has access to all of the information.
Prof. Hinton contrasted symbolic reasoning with analogical reasoning and claimed that reasoning is easy when done symbolically, but perception is hard: it is hard to encode what we perceive into logical statements without introducing ambiguity. He claims that the reasoning employed by human beings is almost always analogical, reasoning by making analogies. Languages themselves are abstractions: they are just an approximate encoding for communicating our thoughts, not the actual representation of our thoughts. Hence different human beings might convey the same thought in different ways. The languages as employed by us are just pointers to the vectors in our brain (similar to the weights in LLMs that point to embedding vectors).
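As a toy illustration of words acting as pointers to vectors, and of analogical reasoning as arithmetic on those vectors, consider the sketch below; the vocabulary and the four-dimensional embedding values are made up for the example (real LLM embeddings are learned from data and have hundreds or thousands of dimensions).

```python
import numpy as np

# Invented toy embeddings: each word is just a pointer to a vector.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.9, 0.1, 0.8, 0.2]),
    "man":   np.array([0.1, 0.8, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.8, 0.1]),
}

def nearest(vec, exclude=()):
    """Return the word whose vector has the highest cosine similarity to vec."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(
        (w for w in embeddings if w not in exclude),
        key=lambda w: cos(embeddings[w], vec),
    )

# Analogical reasoning as vector arithmetic:
# "king is to man as ? is to woman"
v = embeddings["king"] - embeddings["man"] + embeddings["woman"]
print(nearest(v, exclude={"king", "man", "woman"}))  # -> queen
```

Nothing in this sketch manipulates symbols or applies logical rules; the "reasoning" is entirely a geometric operation on patterns of activity, which is the picture Hinton describes.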
> It is quite likely we will get some kind of symbiosis. AI will make us far more competent. It is changing our view of what we are … it is changing people's view from the idea that the essence of a person is a deliberate reasoning machine; the essence is much more a huge analogy machine … that seems far more like our real nature, than reasoning machines – Geoffrey Hinton
There used to be research interest in identifying which language constructs would make for the best compiler or the best programming language, and computer scientists have looked at languages like Sanskrit and deconstructed Panini's rules.
However, languages themselves are not the same as human thoughts or the expressions of those thoughts. Some languages have ways of expressing things that other languages do not. Some people have ways of articulating their thoughts that many others do not. Given this large variance, I am waiting to see how long it will be before LLMs can become a more effective teacher or a more effective comedian.
I am also waiting to see what the best language for a machine to express thoughts would be: would it even be a textual language, or would it be a visual one? Conversely, what would be the best mode to communicate with an analogy machine? And how would you reason about which analogies it used to arrive at a prediction, and in what sequence or order it applied them, much like one would with a deductive reasoning engine for programming languages?