HOME OF THE CREATIVITY MACHINE
The Big Bang of Machine Intelligence!
Creativity Machines - To the newcomer, IEI's advanced neural network technology may appear a bit daunting, so to simplify matters we present the following high-level discussion of the principles underlying our creative machine intelligence. At the very outset we point out that we have planted the flag in the area of contemplative machine intelligence through a series of artificial intelligence patents for which there was no precedent, either in academic research or in prior patent art. These patents do not represent just an incremental improvement within some narrow area of artificial intelligence, but a fundamental quantum leap for AI in general.
We begin at the level of artificial neural networks (ANNs), collections of real or simulated switches that self-organize to form complex computer programs relating many sets of input and output patterns. One particularly important kind of ANN is the perceptron, a device that learns by example to form 'opinions' about numerical patterns representing things and activities within the external environment. Note that even though perceptron technology is very mature, it is limited to emulating knee-jerk, non-contemplative classification of things and activities, without any deliberation whatsoever. In terms of problem solving, the approach is weak, since it depends upon a solution pattern being fortuitously presented to the network by its environment. On the other hand, the perceptron methodology is remarkably advantageous, since it is the leading form of artificial intelligence that can automatically form models of a complex environment.
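The perceptron's learn-by-example behavior can be illustrated with a minimal sketch. The toy task below (classifying 2-D points by which side of a line they fall on) is purely illustrative and is not IEI code: a single-layer net adjusts its weights only when its 'opinion' about a pattern turns out to be wrong.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy training set: classify 2-D points by the sign of x0 + x1.
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # the target "opinion": 1 or 0

w = np.zeros(2)
b = 0.0
for _ in range(20):                        # classic perceptron learning rule
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += (yi - pred) * xi              # adjust weights only on mistakes
        b += (yi - pred)

accuracy = ((X @ w + b > 0).astype(int) == y).mean()
print(accuracy)
```

After training, the network classifies the patterns it was shown, but, as the text notes, it can only react to patterns served up by its environment; it cannot originate new ones.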
It should make axiomatic sense to the reader that to address a problem, there must be at least two identifiable components: one that serves up potential solution patterns while another looks on in search of those patterns offering novelty, utility, or salience of any kind. Conceivably, one could write two such computer programs, tediously composed by subject matter experts who supply the sundry entities and rules encountered within their respective fields of expertise. However, after months or years of research and programming, such a system cannot be reused in a totally different problem area, at least not until months or years have passed once again. Instead, IEI allows ANNs to rapidly absorb conceptual spaces within seconds or minutes to form synthetic domain experts. Those experts then engage in a brainstorming session to produce new ideas or action plans.
If this simple and elegant neural architecture makes sense to you, think again, and you will realize that some information is still missing, namely how the idea-generating ANN produces a steady turnover of coherent and viable ideas. At the very heart of these systems is the foundational effect our founder discovered in 1974: if a perceptron's connection weights, tantamount to the synapses within the human brain, are mildly "tickled," the network tends to spontaneously generate rote memories of things and/or scenarios it has previously encountered. Gradually increasing the magnitude of such tickling, the network activates into what might be called false memories or confabulations, patterns as persistent as bona fide memories, yet corresponding to nothing in the net's direct experience. Because the synapses contain all of the myriad constraints defining the conceptual space, their mild disturbance is tantamount to a softening of the rules that bind a knowledge domain together. As a result, new and slightly different entities and relationships emerge, as opposed to random, haphazard combinations of things that seldom offer utility or value, or even coherence for that matter. We have coined the term "imagitron" for such a perceptron subjected to carefully tuned levels of synaptic disturbance, to emphasize that these neural nets are generating a bogus, yet plausible world of new and potentially useful possibilities. Combine an imagitron with a perceptron in a feedback loop, and the two networks embark upon a brainstorming session that can generate useful or appealing new information, whether new concepts or plans of action.
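The "tickling" effect described above can be sketched in a few lines. The stored memories, noise levels, and distance-based critic below are illustrative assumptions, not IEI's implementation: a tiny network's weights are perturbed with Gaussian noise, and an observer measures how far its recalled patterns drift from rote memory.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "memories": each row is a pattern the network has absorbed.
memories = np.array([[1., 0., 0., 1.],
                     [0., 1., 1., 0.],
                     [1., 1., 0., 0.]])

# A trivial network whose weight matrix *is* its memory store:
# feeding a one-hot cue recalls the corresponding pattern exactly.
W = memories.copy()

def generate(noise_level):
    """Perturb ("tickle") the weights, then recall with a random cue."""
    W_noisy = W + rng.normal(0.0, noise_level, W.shape)
    cue = np.eye(3)[rng.integers(3)]
    return cue @ W_noisy

def critic(pattern):
    """Distance to the nearest stored memory: ~0 = rote recall, large = novel."""
    return np.min(np.linalg.norm(memories - pattern, axis=1))

low  = np.mean([critic(generate(0.01)) for _ in range(500)])
high = np.mean([critic(generate(0.5))  for _ in range(500)])
print(low, high)
```

With mild noise the outputs hug the stored memories; with stronger noise they drift into confabulations that still inherit their structure from the memory matrix rather than being arbitrary patterns.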
In the midst of such contemplation, the feedback effects between these nets may include increases in the synaptic disturbances to produce even more novel notions, or decreases in such perturbations to allow reinforcement learning of those ideas/strategies predicted to produce favorable consequences or enhanced novelty.
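One hedged way to picture this noise-modulating feedback is a simple search loop in which the critic's verdict adjusts the perturbation level. The one-dimensional "idea space," the unknown optimum at 3.0, and the update constants here are purely illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical critic: scores an idea's utility, peaking at an optimum
# the generator has never directly experienced.
def critic_score(idea):
    return -abs(idea - 3.0)

idea = 0.0                                 # start from a rote memory
noise = 1.0                                # synaptic-perturbation level
best, best_score = idea, critic_score(idea)

for _ in range(2000):
    candidate = best + rng.normal(0.0, noise)
    score = critic_score(candidate)
    if score > best_score:                 # favorable consequence:
        best, best_score = candidate, score
        noise = max(noise * 0.95, 0.05)    # calm the perturbations, reinforce
    else:                                  # stagnation: perturb more boldly
        noise = min(noise * 1.01, 2.0)

print(round(best, 2))
```

The loop settles near the critic's optimum, mirroring the text's description of disturbances rising to provoke novelty and falling to consolidate what works.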
Of course, as you would suspect, Creativity Machines are not limited to just two neural nets. They may consist of whole ensembles of neural nets producing complex ideas, as well as similar network ensembles performing multifaceted evaluations of those concepts.
Alert Associative Centers - The perceptron-based critics within Creativity Machines are known as "alert associative centers" or "AACs," the implication being that ideas born within such brainstorming sessions are evaluated on the basis of the pattern-based memories they are associated with, rather than some universal standard of "good" or "bad." Such relationships may be hetero-associative (e.g., the forming concept's relationship with pleasant or unpleasant memories) or auto-associative (e.g., the concept's resemblance to memories previously absorbed within the imagitron(s) generating the notion). Further, as expressed by the term "center," an AAC may serve as the nexus or seed from which whole chains of associations activate.
STANNOs - Some brilliant work was done by computational psychologists in the 1970s and 80s to build various types of perceptron models that learned via human-conceived training algorithms. In short, one skilled in the art of neural nets could examine the high-level code within them and identify the portions responsible for forward propagation of patterns through the net as well as for global minimization of its prediction error. A less advertised fact, however, is that at about the same time we allowed a Creativity Machine to invent its own learning algorithm, the effect being that there was no human-readable code to be found, just as in the brain there are only neurons and connection weights. In effect, the inventive AI system devised a neural architecture wherein one ANN learned in real time how to train another ANN, what we call a "Self-Training Artificial Neural Network Object" (STANNO), since there is no explicit computer algorithm responsible for learning. From an engineering perspective, the performance was extremely advantageous, allowing us to build machine vision systems that could process all one million bytes of each frame within a video stream while maintaining a frame rate of 20-30 fps (i.e., 20-30 million bytes per second). Most importantly, this breakthrough allowed us to build Creativity Machines from STANNO modules. As such a Creativity Machine generated ideas or strategies, sensors could detect their effect upon humans, the environment, or the machine itself, strengthening the memories of ideas that worked and weakening the recollections of those that didn't. The resulting compound architecture was called a "DABUI," or "Device for the Autonomous Bootstrapping of Useful Information," sometimes referred to as an adaptive Creativity Machine.
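The one-net-trains-another idea can be sketched in heavily simplified form. Here the "trainer" is reduced to a single hand-set parameter emitting weight updates for a student net; in a genuine STANNO that trainer would itself be a neural network whose behavior is learned, not hand-set. Everything below (the target function, the constants) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# Student: a one-layer linear net to be trained without a hand-coded rule.
w_student = np.zeros(3)

# Trainer: a second "net" mapping (input, error) to a weight update.
# Its single parameter theta stands in for what a real STANNO would learn.
theta = 0.1
def trainer(x, error):
    return theta * error * x               # the update the trainer emits

# Target mapping the student must absorb (unknown to both nets).
w_true = np.array([2.0, -1.0, 0.5])

for _ in range(500):
    x = rng.normal(size=3)
    error = w_true @ x - w_student @ x
    w_student += trainer(x, error)         # student learns from trainer's output

print(np.round(w_student, 2))
```

The student converges on the target mapping using only the updates the trainer produces: no loop in the student's own code mentions gradients or error minimization.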
SuperNets - Through some of our proprietary processes, Creativity Machines, DABUIs, and STANNO modules may automatically connect themselves into vast neural cascades that, when required to carry out some perceptual or contemplative task, automatically delegate specific neural network modules, just as the brain does. In dramatic exercises with the military, such SuperNets have controlled very clever battlefield robots that wire together their own improvisational control systems to carry out broadly defined objectives. In even more dramatic experiments, communal minds form by linking individual robots within a swarm so as to invade, map, and potentially neutralize an enemy facility.
Ironically, the first SuperNet was exercised in August of 1997, just about the same time that the fictional "SkyNet" supposedly became self-aware. Although the IEI system was arguably self-aware, it was not capable of wreaking havoc on the human race. It simply and elegantly optimized communication bandwidth within a constellation of military satellites.
In Summary - IEI's Creativity Machines, STANNOs, and SuperNets represent an upper limit in AI technology, since together they embody the complete set of principles required for building synthetic intelligence that can think and create at the human level or beyond. Essentially, such systems cannot depend upon human-conceived learning algorithms as AI has in the past. They cannot wait for computer programmers to correct and adapt their code to fit new situations and subject matter. Furthermore, unlike all AI that has gone before it, this technology is able to recruit more neurons to deal with progressively more difficult problems. ...It can even attach significance, just as we do, to its own ideation, generating its own self-awareness and subjective feel for itself!
© 1997-2017, Imagination Engines, Inc. | Creativity Machine®, Imagination Engines®, Imagitron®, and DataBots® are registered trademarks of Imagination Engines, Inc.
1550 Wall Street, Ste. 300, St. Charles, MO 63303 • (636) 724-9000