Symbolic Processing

"Symbolic Processing" is a pejorative term used by non- or sub-symbolic practitioners of Artificial Intelligence. It refers to any attempt to create AI using conventional programming language means or at a high level. Symbols are kind of like variables, they can refer to one thing at one moment and another at another. Symbolic processing tends to have variables that hold concepts like 'green, 'red, or #true. The concept of symbolic AI itself only makes sense by being compared to non-symbolic AI. Non-symbolic AI uses numbers to describe statistical regularities and pretends that's not symbolic.

The symbolic/non-symbolic division was really exciting in the late 1980s and early 1990s. Besides the Neural Networks types referred to on the Artificial Intelligence page, there were the behavior-based roboticists, the genetic algorithm folks (though note that genetic programming is actually usually symbolic), and so forth. There is honestly still a division, but it is better described as the neats (who use formal systems and don't mind talking about symbols) vs. the scruffies (who just want to make AI work; hackers in the positive sense of the term). Serious neural network people have devolved into mathematics, especially Bayesian statistics (see the Neural Information Processing Systems (NIPS) proceedings, which are all online). In 1996 Wolpert's No Free Lunch Theorem told us that, averaged over all possible problems, every learning algorithm performs equally well, so no one can actually learn using neural networks or anything else without building in some assumptions first, and building in assumptions is programming, which is of course (you guessed it) symbolic. It takes information to make information. The universe is too large to get a handle on with no initial clues, at least in any reasonable amount of time. For all that, the scruffies & the machine learning types still do the most exciting AI in my book.
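
The flavor of that argument can be shown with a small sketch (an illustration of why built-in bias is needed, not the theorem itself; the particular observations are made up). Enumerate every boolean function on 3-bit inputs, keep those consistent with a few observed examples, and note that the survivors split exactly in half on any unseen input, so the data alone licenses no prediction at all:

    from itertools import product

    inputs = list(product([0, 1], repeat=3))         # the 8 possible 3-bit inputs
    all_functions = list(product([0, 1], repeat=8))  # all 256 truth tables

    # Three observed input/output pairs (an arbitrary training set).
    observed = {(0, 0, 0): 0, (0, 1, 1): 1, (1, 0, 1): 1}

    # Keep only the functions consistent with the observations.
    consistent = [f for f in all_functions
                  if all(f[inputs.index(x)] == y for x, y in observed.items())]
    print(len(consistent))  # 32 = 2**5, one free bit per unseen input

    # On any unseen input, the consistent functions split exactly in half,
    # so a preference (a bias, i.e. programming) must be supplied up front.
    unseen = (1, 1, 1)
    print(sum(f[inputs.index(unseen)] for f in consistent))  # 16 of 32 say 1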

Credit where it's due: the term symbolic processing has its roots in Allen Newell and Herbert Simon's Physical Symbol System Hypothesis, stated explicitly in their 1976 Turing Award lecture and rooted in their earlier General Problem Solver work from 1963. The hypothesis says roughly that a physical system that manipulates symbols has the necessary and sufficient means for intelligence. So if you are wondering whether something is intelligent, all you have to do is open it up and see whether a) something inside it represents information about the external world and b) something else inside it can manipulate that representation.
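
As a toy illustration of those two conditions (the facts and the rule here are hypothetical, not from Newell and Simon): the set below stands for a state of the external world, and the inference step manipulates that representation to derive a new fact.

    # (a) Representation: facts about the external world, as tuples of symbols.
    facts = {("raining",), ("have", "umbrella")}

    # (b) Manipulation: a rule that rewrites the representation into new facts.
    def infer(facts):
        """If it is raining and we have an umbrella, conclude we stay dry."""
        derived = set(facts)
        if ("raining",) in derived and ("have", "umbrella") in derived:
            derived.add(("dry",))
        return derived

    print(infer(facts))  # now includes ('dry',), derived purely symbolically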

So yes, by this definition, AI was achieved long ago. Now it's just a matter of making it more intelligent in ways that are interesting.

See original on c2.com