Saturday, August 20, 2011

Indistinguishable From Magic Part 1


IBM has recently sent out a press release (http://www-03.ibm.com/press/us/en/pressrelease/35251.wss) about two cognitive computing chips it has designed. These chips represent a drastic departure from traditional computational architecture and programming; they also represent a more extensive approximation of the interconnectivity displayed in the mammalian brain.

…from the press release: “The goal of SyNAPSE is to create a system that not only analyzes complex information from multiple sensory modalities at once, but also dynamically rewires itself as it interacts with its environment – all while rivaling the brain’s compact size and low power usage. The IBM team has already successfully completed Phases 0 and 1… While they contain no biological elements, IBM’s first cognitive computing prototype chips use digital silicon circuits inspired by neurobiology to make up what is referred to as a “neurosynaptic core” with integrated memory (replicated synapses), computation (replicated neurons) and communication (replicated axons)… IBM’s overarching cognitive computing architecture is an on-chip network of light-weight cores, creating a single integrated system of hardware and software. This architecture represents a critical shift away from traditional von Neumann computing to a potentially more power-efficient architecture that has no set programming, integrates memory with processor, and mimics the brain’s event-driven, distributed and parallel processing.”

What is not clear from the press release, but will hopefully be revealed soon, is whether the developed chips are running all functions and control operations, or only subsets of them. The release states: “IBM has two working prototype designs. Both cores were fabricated in 45 nm SOI-CMOS and contain 256 neurons. One core contains 262,144 programmable synapses and the other contains 65,536 learning synapses. The IBM team has successfully demonstrated simple applications like navigation, machine vision, pattern recognition, associative memory and classification.” Under current understanding, biological cognition can be generalized with the “form follows function” maxim. But in mammals, the cognitive processes that take in visual signals do not control (beyond simple reflex actions) the majority of the other biological functions an animal is capable of. It is difficult to envision IBM integrating the information-input, decision-making, and motion-control processes on a chip design with so few connections.
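To make the architecture described in the quoted passages more concrete, here is a toy, event-driven crossbar in Python: binary synapses connecting input axons to leaky integrate-and-fire neurons. Note that 256 × 256 = 65,536, which matches the learning-synapse count above if the core is read as a fully connected crossbar; that reading, along with every class name, parameter, and dynamic below, is my own illustration rather than IBM’s actual circuit design.

```python
import numpy as np

class NeurosynapticCoreSketch:
    """Toy event-driven core: n_axons inputs fan into n_neurons outputs
    through a binary synapse crossbar (256 x 256 = 65,536 connections by
    default). Purely illustrative; not IBM's actual design."""

    def __init__(self, n_axons=256, n_neurons=256, threshold=4, leak=1, seed=0):
        rng = np.random.default_rng(seed)
        # synapses[a, n] == 1 means axon a is wired to neuron n.
        self.synapses = rng.integers(0, 2, size=(n_axons, n_neurons))
        self.potential = np.zeros(n_neurons, dtype=np.int64)  # membrane potentials
        self.threshold = threshold
        self.leak = leak

    def tick(self, active_axons):
        """Advance one time step given the indices of axons carrying spikes;
        return the indices of neurons that fire."""
        if len(active_axons):
            # Event-driven: only axons that actually spiked contribute charge.
            self.potential += self.synapses[active_axons].sum(axis=0)
        self.potential = np.maximum(self.potential - self.leak, 0)  # constant leak
        fired = np.flatnonzero(self.potential >= self.threshold)
        self.potential[fired] = 0  # reset any neuron that spiked
        return fired

core = NeurosynapticCoreSketch()
for t in range(6):
    fired = core.tick(active_axons=[3, 17, 42, 101])
    print(f"t={t}: {len(fired)} neurons fired")
```

Driving the same handful of axons for several ticks shows the event-driven character: only active inputs add charge, and a neuron fires (and resets) once its potential crosses the threshold.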

Subset processing is still a remarkable achievement, but it does not equal the real-world environmental applications the press release hypothesizes as possible. It is my contention that a true cognitive computing system needs to meet three minimum criteria: 1) be able to take in system-relevant input; 2) be able to process that input and make rational decisions that 3) qualitatively or quantitatively affect the system-relevant input it receives. A simple example of this would be a light sensor that measures the intensity of a particular wavelength of light, then triggers a shutter to close off the sensor when the light is intense enough to damage the sensor circuitry (a loop sketched in code below).
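As a minimal sketch of those three criteria (the damage threshold and shutter attenuation below are invented values, and the function name is mine), the light-sensor example reduces to a sense-decide-act loop in which each action changes the input the system sees next:

```python
DAMAGE_THRESHOLD = 800.0    # intensity above which the sensor would be harmed (arbitrary units)
SHUTTER_ATTENUATION = 0.05  # fraction of light passing the closed shutter (assumed)

def protective_sensor_loop(ambient_readings):
    """Yield (measured intensity, shutter state) for a stream of ambient readings."""
    shutter_open = True
    for ambient in ambient_readings:
        # 1) Intake: what the sensor sees depends on the shutter state.
        measured = ambient if shutter_open else ambient * SHUTTER_ATTENUATION
        # 2) Decide and 3) act: closing the shutter protects the circuitry
        #    and changes the input the sensor will receive on the next reading.
        shutter_open = measured < DAMAGE_THRESHOLD
        yield measured, shutter_open

for measured, shutter_open in protective_sensor_loop([200, 600, 950, 1200, 400]):
    print(f"measured={measured:7.1f}  shutter_open={shutter_open}")
```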

I believe part of the difficulty in developing a complex, real-world-capable artificial cognitive system will ultimately be how to create synergistic subset cognitive systems. To extend the light-sensor example to a mammalian analogue: a mammal can narrow its iris, close its eye, turn its head, or even move away from the light source. Each of those actions progressively recruits more cognitive subsets, and the latter two have little direct linkage to the initial sensor (i.e., the eye) that received the information.
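A rough sketch of that escalation, with invented intensity thresholds and tier labels, is a dispatch from local, sensor-level responses out to whole-body ones; only the first tier has any direct link to the eye itself:

```python
# Each tier recruits more of the organism than the one before it.
# Thresholds and labels are illustrative only.
RESPONSES = [
    (300,  "narrow iris  (local, sensor-level subset)"),
    (800,  "close eyelid (adjacent motor subset)"),
    (1500, "turn head    (postural subsets)"),
    (3000, "move away    (whole-body locomotion subsets)"),
]

def escalate(intensity):
    """Return the strongest response whose threshold the intensity exceeds."""
    chosen = "no action"
    for threshold, action in RESPONSES:
        if intensity >= threshold:
            chosen = action
    return chosen

for intensity in (100, 500, 1000, 2000, 5000):
    print(f"intensity={intensity:5d} -> {escalate(intensity)}")
```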

Part 2 of this post will delve more into the psychological, philosophical, and moral-ethical ramifications of this type of artificial programming.
