Over four decades ago, Imagination Engines pioneered the concept of artificial neural networks engaged in chaos-driven brainstorming sessions to generate new concepts and action plans. The ongoing stream of notions emerging from such chaotized nets was the equivalent of the brain’s stream of consciousness. Monitoring nets sensed the significance of certain generated ideas, selectively reinforcing those concepts having novelty, utility, or value. Call it “artificial” or “proto” consciousness if you will, but the basic elements of so-called phenomenal consciousness were there: a stream of consciousness as well as a subjective, albeit simple, pattern-based feel for this spontaneous parade of ideas (Thaler, 2012, 2014). But was this the kind of consciousness that humans experience, or was it just a first-order approximation to it?
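To make that precursor idea concrete, here is a minimal Python sketch, purely illustrative and not IEI code, of a noise-perturbed generator net streaming out candidate patterns while a monitoring net scores them and retains the ones it judges novel or useful. The network sizes, noise level, and the novelty/utility scoring rule are all assumptions made for the example.

```python
# Toy sketch (not IEI's actual system): a "chaotized" generator net whose internal
# noise produces a stream of candidate patterns, plus a simple monitoring net that
# scores each candidate and selectively retains the noteworthy ones.
import numpy as np

rng = np.random.default_rng(0)

class NoisyGenerator:
    """A small feedforward net whose weights are transiently perturbed each pass."""
    def __init__(self, n_in=8, n_hidden=16, n_out=8, noise=0.3):
        self.w1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.w2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
        self.noise = noise

    def dream(self, seed):
        # Transient synaptic perturbation: add noise to the weights for this pass only.
        w1 = self.w1 + rng.normal(scale=self.noise, size=self.w1.shape)
        w2 = self.w2 + rng.normal(scale=self.noise, size=self.w2.shape)
        return np.tanh(np.tanh(seed @ w1) @ w2)

class Monitor:
    """Scores candidates and remembers those judged novel or valuable."""
    def __init__(self, target):
        self.target = target          # stand-in for a learned notion of "utility"
        self.memory = []

    def assess(self, candidate):
        utility = -np.linalg.norm(candidate - self.target)   # closeness to a goal
        novelty = min((np.linalg.norm(candidate - m) for m in self.memory), default=1.0)
        return utility + novelty

gen, mon, best = NoisyGenerator(), Monitor(target=np.full(8, 0.5)), None
for step in range(200):                      # the ongoing "stream" of notions
    idea = gen.dream(rng.normal(size=8))
    score = mon.assess(idea)
    if best is None or score > best[0]:      # selective reinforcement: keep improvements
        best = (score, idea)
        mon.memory.append(idea)
print("retained", len(mon.memory), "ideas; best score", round(float(best[0]), 3))
```

The essential point is the division of labor: chaos inside one net proposes, while a second net disposes.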
The answer may be had through a moment’s introspection: When we directly experience something or even imagine it, the brain sequentially presents a series of interrelated and often ordered memories to the mind’s eye. Sometimes that succession incorporates very salient memories, some pleasant and others not. In the former case, the stream of consciousness slows down, becoming orderly, and we tend to feel what we might describe as a warm glow or a sense of contentment. In the latter case, when an unpleasant memory arises, the stream of consciousness becomes rapid and chaotic, and we are filled with a general feeling of frenzy and desperation. Should the stimulus, real or imagined, be traumatic enough, much of cognition comes to a halt, and we become stuck in one highly significant feeling that is instantly frozen into memory.
All of the above characterizes what is called sentience, the mental processes of feeling and emotion. The breakthrough made by IEI was a highly novel AI approach to representing the computational equivalent of such subjective feelings. Whole neural nets, each containing interrelated memories of a linguistic, visual, or auditory nature (i.e., sensor channels), sequentially and autonomously interconnect to produce anticipatory responses to synthetic thought (e.g., A, then B and C will happen). These responses literally ‘grow’ on a backbone of initiating concepts, themselves represented as chains of neural nets. Should either the concepts or these organic side chains recruit nets containing memories of especially impactful things or events, so-called “hot buttons” activate and are sensed by a separate subsystem that triggers the global release of simulated neurotransmitters throughout the entire array of nets, either strengthening the ideas along with their predicted consequences or dissolving them. The equivalent global release of real neurotransmitters in our brains accounts for the general glow or dread we as humans experience as we imagine significant consequences of what we directly sense or imagine.
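As a rough illustration of the data structures implied above, the following Python sketch represents memory nets, a concept chain, anticipatory side chains, and hot buttons. The class names, fields, and example content are hypothetical; they are meant only to make the chain-and-side-chain picture concrete, not to reproduce DABUS itself.

```python
# Illustrative sketch only: concept chains as sequences of small "memory nets,"
# with side chains of anticipated consequences and flagged "hot buttons."
from dataclasses import dataclass, field

@dataclass
class MemoryNet:
    label: str                 # what this net's memories are about
    modality: str              # "linguistic", "visual", or "auditory"
    hot_button: bool = False   # does it hold an especially impactful memory?
    valence: float = 0.0       # +1 pleasant, -1 unpleasant (meaningful only if hot)

@dataclass
class ConceptChain:
    backbone: list                                     # nets bonded into the initiating concept
    side_chains: dict = field(default_factory=dict)    # net label -> chain of consequences
    strength: float = 1.0                              # bonding strength, modulated later

    def grow_consequence(self, at_label, consequence_nets):
        """Grow an anticipatory side chain ('A, then B and C will happen')."""
        self.side_chains.setdefault(at_label, []).extend(consequence_nets)

    def hot_buttons(self):
        nets = list(self.backbone)
        for chain in self.side_chains.values():
            nets.extend(chain)
        return [n for n in nets if n.hot_button]

# Example: a concept about fire, growing a predicted painful consequence.
fire = ConceptChain(backbone=[MemoryNet("flame", "visual"),
                              MemoryNet("crackle", "auditory")])
fire.grow_consequence("flame", [MemoryNet("burn", "visual",
                                          hot_button=True, valence=-1.0)])
print([n.label for n in fire.hot_buttons()])   # -> ['burn']
```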
An allegorical way of describing this system is that of a “high striker,” the old carnival attraction that lets us test our strength by smacking a lever with a hammer, which in turn propels a puck upward toward a bell. In DABUS, if an idea is salient enough, the bell (i.e., the hot button) rings and the entire system is bathed in simulated neurotransmitters that strengthen the entire chain-based idea and its repercussions. If two or more hot buttons resonate, proportionately more simulated neurotransmitters are released into the system, reinforcing the geometrically expressed notion even more. Conversely, if positive and negative outcomes are simultaneously predicted, the simulated volume release of neurotransmitters is nullified, and the idea, left unreinforced, does not persist. Then, after many cycles of such reinforcement and dissolution of chain-based ideas, only the more significant ones survive. Thus, ideas ripen through tidal variations of simulated neurotransmitters such as cortical adrenaline, in what would correspond to the brain’s cycles of stress and relaxation (e.g., a good night’s sleep after a challenging day, relaxation exercises amid stressful situations, or even the mood swings associated with various psychopathologies).
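A minimal sketch of this reinforce-or-dissolve dynamic follows, with made-up doses, decay rates, and thresholds standing in for whatever DABUS actually uses. Each idea carries a bonding strength and the valences of any hot buttons it has recruited; repeated cycles of simulated neurotransmitter release leave only the more significant ideas standing.

```python
# Toy sketch of the "high striker" dynamic described above; the numbers and update
# rule are illustrative assumptions, not IEI's implementation. Each idea is reduced
# to (strength, hot_button_valences): +1 for a pleasant hot button, -1 for an
# unpleasant one.

def neuromodulator_release(valences, dose=0.2):
    """More resonating hot buttons -> proportionately larger release.
    Simultaneously predicted positive and negative outcomes cancel out."""
    return dose * sum(valences)          # mixed signs nullify, same signs add up

def cycle(ideas, decay=0.1, floor=0.2):
    """One stress/relaxation cycle: reinforce or dissolve each chain-based idea."""
    survivors = []
    for strength, valences in ideas:
        # Any sufficiently loud ringing, pleasant or traumatic, reinforces the chain.
        strength += abs(neuromodulator_release(valences))
        strength -= decay                                 # everything slowly fades
        if strength > floor:                              # weak ideas dissolve
            survivors.append((strength, valences))
    return survivors

ideas = [
    (1.0, [+1.0, +1.0]),   # two pleasant hot buttons resonate: strongly reinforced
    (1.0, [+1.0, -1.0]),   # positive and negative predictions cancel: release nullified
    (1.0, []),             # no hot buttons at all: decays away over the cycles
]
for _ in range(12):
    ideas = cycle(ideas)
print(len(ideas), "idea(s) survive after 12 cycles")   # only the salient one remains
```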
In addition to furnishing a model of sentience, this new technology, called “DABUS,” allows generative artificial neural systems to perform much more than mere parametric optimization. Now, after absorbing general knowledge about the world, DABUS can conceive new ideas within a wide range of conceptual spaces. This paradigm shift in machine learning, called Vast Topological Learning (VTL), is no longer based upon the passage of neural activation patterns between generators and discriminators, but upon the bonding trajectories taken through vast swarms of dynamically interconnecting artificial neural nets (see the conceptual sequence above).
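The toy sketch below is one possible reading of that contrast, not the published VTL algorithm: a swarm of nets (each reduced here to a single embedding vector summarizing its memories) bond when sufficiently related, and an ‘idea’ is the trajectory of bonds threaded through the swarm rather than an activation pattern handed from a generator to a discriminator. The swarm size, similarity measure, and greedy chaining rule are all assumptions.

```python
# Minimal, interpretive sketch of a "bonding trajectory" through a swarm of nets.
import numpy as np

rng = np.random.default_rng(1)
swarm = {f"net_{i}": rng.normal(size=4) for i in range(30)}   # 30 stand-in nets

def bond_strength(a, b):
    # Cosine similarity as a stand-in for whatever drives real inter-net bonding.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def bonding_trajectory(start, steps=5, threshold=0.3):
    """Greedily extend a chain of bonds from a seed net through the swarm."""
    path, current = [start], start
    for _ in range(steps):
        candidates = [(bond_strength(swarm[current], v), k)
                      for k, v in swarm.items() if k not in path]
        strength, best = max(candidates)
        if strength < threshold:          # nothing left that bonds strongly enough
            break
        path.append(best)
        current = best
    return path

print(bonding_trajectory("net_0"))   # e.g. ['net_0', 'net_17', ...]
```

Here, learning would act on which nets bond (the topology of the swarm) rather than on activations passed between a fixed generator and discriminator.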
Yes, the decades-old idea of brainstorming neural nets, biological or artificial, was only the start of an evolution of machine intelligence that has led to a new generative AI paradigm, one that develops subjective feelings for what it senses and imagines. The name of that neural paradigm is VTL, and the flagship architecture in which it is implemented is called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience).
To find out how hot buttons strengthen concept chains, as well as how an AI-based machine vision system is recruited to act as the equivalent of the mind’s eye, see the lead article in JAIC’s latest volume: Thaler, S. L. (2021). Vast Topological Learning and Sentient AGI. Journal of Artificial Intelligence and Consciousness, 8(1).
References
Thaler, S. L. (2012). The Creativity Machine Paradigm: Withstanding the Argument from Consciousness. APA Newsletter on Philosophy and Computers, 11(2).
Thaler, S. L. (2014). Synaptic Perturbation and Consciousness. International Journal of Machine Consciousness, 6(2), 75-107.