Metamagical Themas — Waking Up from the Boolean Dream, or, Subcognition as Computation
Mar 22, 2023
5 minute read

Metamagical Themas has been on my reading list for a while. It recently resurfaced in a Twitter thread discussing the capabilities of modern LLMs [and arguing that they generate original content instead of “balderdash”], which pointed in particular to chapter 26, “Waking Up from the Boolean Dream, or, Subcognition as Computation”.
Intrigued by this claim, I decided to read the entire chapter for more context. The result is this blog post, similar in style to my Voidpaper newsletter, where I publish notes on other people’s research: the ideas here are not my own.

What struck me as particularly fascinating is that as early as 1985, researchers were already thinking about AI, and not just the functional parts but also the parts we might consider more trivial: the philosophical side of it. I imagine that in a couple of decades, the work of Eliezer Yudkowsky and other members of the LessWrong/rationalist community on AI risk will likewise become fundamental to how we think about intelligent agents.

John Searle’s Chinese Room thought experiment

Searle presented this thought experiment in his paper “Minds, Brains, and Programs”, published in 1980. It challenges the idea that a computer program can truly understand language or have consciousness (semantics, as distinguished from mere syntax). Searle imagines a scenario in which a person who does not understand Chinese is placed inside a room with a set of Chinese characters and a set of rules for manipulating those characters. The person in the room receives Chinese characters as input and uses the rules to produce other Chinese characters as output. From the outside, it appears as though the person in the room understands Chinese, but Searle argues that the person is simply following a set of syntactic rules without any semantic understanding. He concludes that a computer program, no matter how sophisticated, can never truly understand language or have consciousness, since it is only manipulating symbols according to rules without any grasp of what those symbols mean. There are several very interesting objections to Searle’s argument; among the more popular are the Systems Reply, the Robot Reply, the Brain Simulator Reply, the Combination Reply, the Other Minds Reply, and the Many Mansions Reply.
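Searle’s setup is easy to caricature in code. Below is a deliberately trivial sketch (the rulebook entries are invented placeholders, not anything from Searle’s paper): the “person” is just a lookup procedure that maps incoming symbol strings to outgoing symbol strings. From the outside it can appear to answer questions, yet nothing in it understands anything, which is exactly the syntax-without-semantics point.

```python
# Toy Chinese Room: a purely syntactic rulebook mapping input symbol
# strings to output symbol strings. The entries are invented placeholders.
RULEBOOK = {
    "你好吗": "我很好",        # "how are you" -> "I am fine"
    "你是谁": "我是一个房间",  # "who are you" -> "I am a room"
}

def chinese_room(symbols: str) -> str:
    """Apply the rulebook; nothing here knows what any symbol means."""
    return RULEBOOK.get(symbols, "请再说一遍")  # default: "please say that again"

print(chinese_room("你好吗"))  # looks like understanding from the outside
```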

In the chapter, Hofstadter aims to explore what terms such as “information processing” and “cognition as computation” actually mean in the field of artificial intelligence, and to shed light on the views of Searle, Newell, and Simon. The chapter was triggered by Avron Barr’s paper “Artificial Intelligence: Cognition as Computation” but can be read independently of it.

The Problem of Letterforms: A Test Case for AI

  • The central problem of intelligence is to understand the fluid nature of mental categories: the invariant cores of percepts, such as your mother’s face, and the strangely flexible yet strong boundaries of concepts such as “chair” or the letters ‘a’ and ‘i’.
  • For any program to handle letterforms with the flexibility that human beings do, it would have to possess full-scale general intelligence.
  • Specialized domains tend to obscure, rather than clarify, the distinction between the strengths and weaknesses of a program, making letterforms a better test case for pattern recognition.
  • Each letter of the alphabet comes in thousands of different official versions, not to mention unofficial ones, which raises the question of how they are all alike. The goal of an AI project would be to give an exact answer [in computational terms].
  • The problem of letterforms is intimately connected with the problems of stylistic consistency and the relation of each letter to the abstract notion of its shape.
  • Optical character readers, like those invented by Ray Kurzweil for the blind, use template matching, which is inadequate for the problem of recognizing letters (see the sketch below).
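To make that inadequacy concrete, here is a minimal sketch of template matching (the 3x3 bitmaps are invented for illustration and far smaller than anything a real OCR system would use): each letter is a fixed grid of pixels, and recognition simply picks the stored template with the greatest pixel overlap.

```python
# Minimal template matching: score a bitmap against stored letter
# templates by counting agreeing pixels, then pick the best match.
TEMPLATES = {
    "I": ((0, 1, 0),
          (0, 1, 0),
          (0, 1, 0)),
    "L": ((1, 0, 0),
          (1, 0, 0),
          (1, 1, 1)),
}

def match_score(image, template):
    """Count pixel positions where image and template agree."""
    return sum(a == b
               for img_row, tpl_row in zip(image, template)
               for a, b in zip(img_row, tpl_row))

def recognize(image):
    """Return the letter whose template overlaps the image best."""
    return max(TEMPLATES, key=lambda letter: match_score(image, TEMPLATES[letter]))

# A vertical bar shifted one pixel to the left is still obviously an "I"
# to a human, but raw pixel overlap now favors "L": the scheme has no
# notion of a letter's abstract shape, which is exactly the inadequacy.
shifted_i = ((1, 0, 0),
             (1, 0, 0),
             (1, 0, 0))
print(recognize(shifted_i))  # prints "L"
```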

The Human Mind and Its Ability to Recognize and Reproduce Forms

  • Recognizing letters of the alphabet is a deep problem, requiring a program to understand the embodiment of the letter’s shape and carry that style consistently across all letters.
  • Most AI work on vision focuses on recognizing textures and mediating between two and three dimensions, yet even humans struggle to draw simple objects and reproduce characters from memory.
  • Despite the brain’s fantastic recognition abilities, it struggles with rendition, making it difficult to understand the complex processes involved in accepting things as members of categories and perceiving how they are members of those categories.
  • In his book Pattern Recognition, the Russian computer scientist Mikhail Bongard concludes with a series of 100 puzzles for a visual pattern recognizer, whether human, machine, or alien. In other words, he works his way up to letterforms as the pinnacle of visual recognition ability.
  • Letterforms are at the pinnacle of visual recognition ability, but no pattern recognition program can do the Bongard problems involving letterforms.

Not Cognition, But Subcognition, Is Computational

  • The confusion of levels in the title “Cognition as Computation” leads to the question of whether thinking is really computing.
  • Neurons may execute analogue sums, but this does not imply that the epiphenomena, the thoughts and ideas riding on top of them, are themselves performing arithmetic or can be understood in terms of computer science (a minimal sketch of such an analogue sum follows this list).
  • Some AI researchers believe that the ultimate solution to AI lies in getting better theorem-proving mechanisms, developing efficient ways of searching a vast space of possibilities or making a complex language for pattern matching, backtracking, inheritance, planning, or reflective logic.
  • These techniques may solve specialized logical problems, but they are not good for recognizing faces or drawing something novel and pleasing.
  • The missing link between perception and cognition is the subcognition-cognition gap, the gap between the sub-100-millisecond world and the super-100-millisecond world.
  • The brain’s neural substrate is relevant to AI, and any AI model eventually has to converge to brainlike hardware or architecture that is isomorphic to brain architecture at some level of abstraction.
  • The level at which the isomorphism must apply is considerably lower than what most AI researchers believe.
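The levels point can be made concrete. Here is a minimal sketch of the kind of “analogue sum” attributed to neurons above (all numbers are arbitrary placeholders): a single unit computes a weighted sum of its inputs and fires when the sum crosses a threshold. The arithmetic plainly happens at this substrate level; nothing about it suggests that the ideas built out of billions of such sums are themselves doing arithmetic.

```python
# One neuron-like unit: a weighted analogue sum followed by a threshold.
# Weights, inputs, and threshold are invented placeholder values.
def neuron(inputs, weights, threshold):
    """Fire (return 1.0) if the weighted sum of inputs crosses the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 if activation >= threshold else 0.0

# Three incoming signals with mixed excitatory (+) and inhibitory (-) weights.
print(neuron(inputs=[0.9, 0.2, 0.7], weights=[0.5, -0.3, 0.8], threshold=0.6))  # 1.0
```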

Further Reading

  • Douglas Hofstadter, Metamagical Themas (1985), chapter 26: “Waking Up from the Boolean Dream, or, Subcognition as Computation”
  • John Searle, “Minds, Brains, and Programs” (1980)
  • Avron Barr, “Artificial Intelligence: Cognition as Computation”
  • Mikhail Bongard, Pattern Recognition