In 1995, thanks to several conference papers published by our founder, Scientific American reported on an admittedly controversial phenomenon: neural nets becoming creative in the midst of their 'death throes'. That same year, he published further detail in the journal Neural Networks on a related experiment in which he randomly pruned the connection weights of pretrained artificial neural networks whose input neurons were pinned at constant values. The patterns emerging at the output end of these nets proved remarkably interesting, consisting largely of the output exemplars presented to them during their training (i.e., their memories). In anthropomorphic terms, the nets were forced to 'look' at only one thing, yet perceived a series of alternative objects as their internal architectures degraded. In effect, this experiment provided a first-order model of a range of virtual experiences occurring within the brain, including trauma-induced hallucination and the phantom limb effect.
Occasionally, however, the net would output a novel pattern it had not absorbed during training. In this study, for instance, the net was trained to accept a small input vector of three integer components and produce an associated four-fold pattern (e.g., see figure). In the sequence shown, synaptic pruning ranged from 0 to 30%, with an applied input pattern of (0, 0, 0). For the most part, the net activated into its four-fold symmetric memories until 20% of its weights had been snipped, at which point it generated a less symmetric object, a defective memory or 'confabulation'. (Note to the discerning reader: not only were the weights between neurons pruned, but also those connecting bias units to the hidden-layer neurons. Thus the net originated a succession of patterns even with its input units pinned at zero and the net effectively blindfolded.)
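The pruning procedure can be sketched in a few lines of NumPy. This is a hypothetical reconstruction, not the 1995 network: the layer sizes, random weights, and tanh activation are assumptions, and a randomly initialized net stands in for the pretrained one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the experiment described above: a small feedforward net
# with 3 input units, one hidden layer, and 4 output units. Sizes, weights,
# and activation function are illustrative, not the original architecture.
W1 = rng.normal(size=(3, 8))   # input -> hidden weights
b1 = rng.normal(size=8)        # bias-to-hidden connections (also pruned)
W2 = rng.normal(size=(8, 4))   # hidden -> output weights
b2 = rng.normal(size=4)

def forward(W1, b1, W2, b2, x):
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)

x = np.zeros(3)  # input pinned at (0, 0, 0): the net is 'blindfolded'

outputs = []
for frac in [0.0, 0.1, 0.2, 0.3]:
    # Randomly zero out a fraction of connections. With the input pinned
    # at zero, it is the pruning of bias and hidden->output connections
    # that actually perturbs the output pattern.
    m1 = rng.random(W1.shape) >= frac
    mb = rng.random(b1.shape) >= frac
    m2 = rng.random(W2.shape) >= frac
    outputs.append(forward(W1 * m1, b1 * mb, W2 * m2, b2, x))

# At 0% pruning the output is the net's fixed response to its pinned input;
# as pruning rises, the activation pattern drifts away from that response,
# modeling the slide toward defective memories, or 'confabulations'.
```

Each pass through the loop damages a fresh copy of the weights, so the degradation levels are independent snapshots rather than cumulative damage, which suffices to illustrate the effect.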
Because such an output pattern could have utility or merit, say as a new character in some alien language or as a side road sign, such a defective memory qualified as a potential idea. All that was really needed was another algorithm monitoring these output patterns and making such an association. This was the founder's thinking with his first Creativity Machines, in which a generator net, dubbed an "imagitron," was progressively damaged while another artificial neural net watched for meritorious output patterns offering utility or value. In subsequent versions of the Creativity Machine, transient forms of damage were introduced into the imagitron, taking the form of weight fluctuations that, when properly tuned, allowed the net to output largely plausible notions rather than nonsense. Oftentimes, the critic algorithm would automatically adjust the magnitude of these perturbations within the generator so as to optimize the output rate of plausible and useful information. Then, with the introduction of reinforcement learning in 2005, the generator could selectively enrich itself with the most viable notions, which could in turn mutate or hybridize among themselves to create successive generations of highly refined ideas and strategies.
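The generator-critic loop described above can likewise be sketched, under heavy assumptions: the imagitron here is a tiny random net, the critic is a stand-in plausibility score (closeness to a stored pattern) rather than a trained net, and only the transient-perturbation and magnitude-tuning logic follows the description in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative generator ("imagitron"), not the patented implementation:
# a small net whose weights receive transient Gaussian perturbations.
W = rng.normal(size=(3, 4))
b = rng.normal(size=4)
x = np.zeros(3)  # input again pinned at zero

def generate(noise_scale):
    # Transient damage: perturb a *copy* of the weights, then let the
    # perturbation retract; the original weights stay intact.
    Wp = W + rng.normal(scale=noise_scale, size=W.shape)
    bp = b + rng.normal(scale=noise_scale, size=b.shape)
    return np.tanh(x @ Wp + bp)

def critic(pattern, memory):
    # Stand-in plausibility score: closeness to a stored memory
    # (1.0 = exact recall, 0.0 = unrecognizable nonsense).
    return 1.0 - min(1.0, np.linalg.norm(pattern - memory) / 4.0)

memory = np.tanh(b)   # the unperturbed output serves as the stored memory
noise_scale = 1.0
target = 0.7          # desired plausibility: novel, but not nonsense

for step in range(200):
    score = critic(generate(noise_scale), memory)
    # The critic tunes the perturbation magnitude: too implausible ->
    # cool the noise down; too faithful to memory -> heat it up.
    noise_scale *= 0.99 if score < target else 1.01
```

The multiplicative update is one simple way to keep output plausibility hovering near a set point; the actual tuning rule used in the Creativity Machine patents is not reproduced here.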
Of course, with the advent of DABUS, generative AI has taken a totally new direction, no longer consisting of two or more brainstorming nets but of vast swarms of nets interconnecting to form geometries that encode ideas, with other offshoot chains expressing the consequences of those ideas. Still, the core principle driving the formation of such self-defined ideas is the virtual input effect, wherein defective memory chains, rather than faulty neuron activation patterns, form and dissolve through the intermittent injection and retraction of synaptic perturbations, thereby selectively ripening only the most plausible and useful of these notions.
Ultimately, virtual input phenomena contributed to a theory of consciousness in which ongoing synaptic perturbations in the brain produce a stream of memories and ideas (i.e., stream of consciousness) as monitoring nets seize upon those notions offering existential advantage to the host organism.