Imagination Engines, Inc., Home of the Creativity Machine

HOME OF THE CREATIVITY MACHINE 

The Big Bang of Machine Intelligence!

  • IEI Patent Overview

    The simple, elegant, and inevitable path to human-level machine intelligence and beyond: the Creativity Machine Paradigm, US Patent 5,659,666, and all subsequent foreign and divisional filings.


The IEI Blog is now open for discussion of new business ventures and opportunities!

 

Highlights of the IEI Intellectual Property Suite

Currently, the IEI patent suite covers five artificial neural network paradigms essential for building synthetic brains: (1) the Device for the Autonomous Generation of Useful Information, (2) Non-Algorithmic Neural Networks, (3) Database Scanning, (4) Device Prototyping, and (5) the Device for the Autonomous Bootstrapping of Useful Information. Collectively, these fundamental patents place IEI in a unique and exclusive position to build synthetic brains capable of human-level discovery and invention.

1.0 Device for the Autonomous Generation of Useful Information (DAGUI or Creativity Machine)

The first of these patent groups deals with how to stimulate trained neural networks into generating ideas and plans of action that lie outside their direct experience. Traditional artificial neural networks absorb only memories and relationships; left unstimulated, they behave deterministically, simply transforming input patterns into output patterns. However, when perturbed with just the right levels of otherwise unintelligent noise, they begin to generate new potential concepts or action plans generalized from their training patterns.
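
A minimal sketch in Python of this idea (not the patented implementation): a small feed-forward network whose connection weights are transiently perturbed with low-level noise so that its outputs drift from rote recall toward novel, generalized patterns. The weights here are random stand-ins; in practice they would come from prior training.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2, b2):
    """Simple two-layer network with sigmoid activations."""
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

# Random stand-ins; assume these weights were already trained on a pattern set.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 8)), np.zeros(8)

x = rng.random(8)            # a familiar input pattern
noise_level = 0.05           # "just the right" perturbation magnitude
candidates = []

for _ in range(10):
    # Transiently perturb the connection weights with low-level noise ...
    W1_p = W1 + rng.normal(scale=noise_level, size=W1.shape)
    W2_p = W2 + rng.normal(scale=noise_level, size=W2.shape)
    # ... so the output departs from memory toward novel but plausible patterns.
    candidates.append(forward(x, W1_p, b1, W2_p, b2))
```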

While the discovery of just how to adjust the noise level within a trained neural network to produce new ideas was a significant scientific finding, a viable patent was not achieved until a critic algorithm, whether heuristic, Bayesian, or neural network based, was added to monitor for the very best notions emerging from the perturbed network. This is the preferred embodiment of the invention called a Creativity Machine: a "dreaming" network, "imagination engine," or "imagitron" that is monitored by another, constantly vigilant algorithm we appropriately call an "alert associative center." To accelerate convergence toward the optimal concepts, this critic is allowed all manner of control over the perturbations applied to the imagination engine.
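
The loop below is an illustrative sketch, with hypothetical stand-in functions imagination_engine and critic, of how such a critic might both harvest the best emerging notions and steer the noise level applied to the imagination engine.

```python
import numpy as np

rng = np.random.default_rng(1)

def imagination_engine(noise_level):
    # Stand-in for a noise-perturbed trained network emitting a candidate pattern.
    return rng.normal(scale=noise_level, size=8)

def critic(pattern):
    # Stand-in for a heuristic, Bayesian, or neural-network critic returning
    # a figure of merit for the candidate pattern.
    return -float(np.sum((pattern - 0.5) ** 2))

noise, best, best_score = 0.1, None, -np.inf
for step in range(1000):
    candidate = imagination_engine(noise)
    score = critic(candidate)
    if score > best_score:
        best, best_score = candidate, score
        noise *= 0.95                        # promising region: cool the perturbations
    else:
        noise = min(noise * 1.02, 1.0)       # stagnating: stir the network harder
```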

From a legal perspective, the inventor had reduced to practice an exhaustive list of schemes for perturbing a trained artificial neural network to generate useful information, whether by stochastically or systematically driving the inputs of a network or the hidden layers of a neural architecture. Furthermore, the patents extend to the construction of compound Creativity Machines consisting of distributed neural cascades containing multiple imagination engines and alert associative centers. Such compound Creativity Machines have demonstrated the ability to carry out juxtapositional invention, wherein one imagination engine thinks "wheel," another "axle," and yet another "box," as a critic network makes the pivotal association with some form of wheeled transportation, such as a primitive cart, for what humans would think of as that "ah-hah" moment.
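
A toy sketch of juxtapositional invention, with hypothetical stand-ins for the component networks: several imagination engines each emit a partial concept, and a critic watches the combined stream for a juxtaposition it recognizes as useful.

```python
import numpy as np

rng = np.random.default_rng(6)

concepts = ["wheel", "axle", "box", "sail", "rope", "lever"]

def imagination_engine():
    # Stand-in for a noise-perturbed network emitting one candidate concept.
    return concepts[rng.integers(len(concepts))]

def critic(parts):
    # Stand-in for an alert associative center recognizing a useful combination.
    return {"wheel", "axle", "box"} <= set(parts)

for attempt in range(10_000):
    parts = [imagination_engine() for _ in range(3)]
    if critic(parts):
        print(f"'ah-hah' after {attempt} attempts: {parts} -> primitive cart")
        break
```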

Finally, this patent forms the basis for creating conscious machines, in that it emulates the chief cognitive circuit within the brain, the thalamo-cortical loop, wherein the cortex generates a relentless stream of memories and ideas (a.k.a. the stream of consciousness) while the thalamus, often called the "eyeball within the brain," stays on the lookout for notions that are of interest or value to it (i.e., attentional consciousness or, more ominously, self-awareness). Essentially, one cannot build a synthetic brain without this neural mechanism for generating ideas of any kind, whether they be a Nobel-caliber theory, what to say or do next, or the simple interpretation of ambiguous things and activities within the environment (i.e., sense making). Otherwise, the neural system would be non-contemplative, passively waiting and mindlessly reacting to things and scenarios in its external world. Furthermore, the system could not attain consciousness, wherein some neural networks automatically form opinions (i.e., the subjective feel) about the noise-induced stream of consciousness within other neural nets.

2.0 Non-Algorithmic Neural Networks (STANNOs)

When IEI became involved in control systems and robotics in 1996, it quickly became evident that Creativity Machines needed to learn from their own mistakes and successes. What was needed were imagination engines and critics that could cooperatively invent action plans, implement them, and then judge whether they had achieved the intended results. If they had succeeded, reinforcement learning would need to take place in all networks. If they failed in meeting their objectives, at least the critic networks needed to be trained to recognize the negative results while the imagination engine's memory of the not-so-promising notion was weakened.

Because artificial neural networks are customarily trained one at a time using a training algorithm, typically several pages of C code, it proved exceedingly difficult to train two or more neural networks simultaneously within Creativity Machine architectures. We required a neural network bundled with its own training algorithm, capable of training in situ within a Creativity Machine architecture. Ironically, it was a Creativity Machine that autonomously designed a brand new form of neural network architecture consisting of a "trainee network" intimately intertwined with a "trainer network." Since both algorithms were implicit, taking the form of only numerical connection weights rather than explicit, recognizable algorithms such as back propagation, these networks were termed "non-algorithmic." Encapsulating these purely connectionist structures within a class wrapper, we could instantiate ultra-fast and efficient neural network objects in sizes and numbers limited only by memory and processor speed. Used as the basic building block of Creativity Machines, they enabled a whole new generation of Creativity Machines that could bootstrap from a state of "total ignorance" to highly proficient levels of creative intelligence. This self-improving neural architecture was appropriately called the "Device for the Autonomous Bootstrapping of Useful Information" (DABUI, see below); it is vastly more powerful than the previous generation of DAGUIs and capable of ideation within conceptual spaces having billions of attributes, using modest computational platforms such as personal computers.
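
The sketch below illustrates only the class-wrapper idea: a network object that carries its own training step, so many instances can learn in situ and in parallel inside a larger architecture. It is not the STANNO internals; an ordinary explicit gradient rule stands in here for the patented implicit trainer network.

```python
import numpy as np

class SelfTrainingNet:
    """Tiny one-hidden-layer network bundled with its own weight-update rule."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.W1)
        return np.tanh(self.h @ self.W2)

    def train_step(self, x, target):
        """One in-situ update; no external training algorithm is invoked."""
        y = self.forward(x)
        err = y - target
        d2 = err * (1 - y ** 2)
        d1 = (d2 @ self.W2.T) * (1 - self.h ** 2)
        self.W2 -= self.lr * np.outer(self.h, d2)
        self.W1 -= self.lr * np.outer(x, d1)
        return float(np.mean(err ** 2))

# Many such objects can be instantiated and trained simultaneously:
nets = [SelfTrainingNet(4, 8, 2, seed=i) for i in range(3)]
```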

It is important to note that although these STANNO patents are couched in terms of spreadsheet-based neural networks, most of the paradigms that form their subject matter apply equally to other computer implementations, such as native C code or machine language. In fact, many of the independent claims discuss these principles regardless of whether they are used in a spreadsheet environment. One way to think of this is that Microsoft Excel simply became a convenient environment in which to develop and demonstrate these patented concepts.

Advanced STANNOs incorporate many of these patented principles, but also exercise a number of trade secrets to make them even more flexible, fast, and efficient.

3.0 Database Scanning System

A number of new pattern recognition techniques were prototyped using spreadsheet-based development environments. In these patents the term 'database' is used in the general sense of any repository on a computer for storing, temporarily or permanently, any kind of data pattern. This terminology includes traditional databases such as Excel or SQL, as well as storage buffers associated with cameras or other high-speed data acquisition devices.

The most important claims within these patents have to do with the use of so-called auto-associative networks that train upon patterns that are somehow interrelated (e.g., multiple camera views of an object, or states of some system hardware to be controlled). Once trained upon numerous examples of such a genre, they can quickly flag outliers that are non-representative of the exemplars previously shown to the network in training. Alternately, by process of elimination, they can also identify patterns that are representative of the genre.
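
A hedged sketch of such a group membership filter, using a small auto-associative network and synthetic data: the network learns to reproduce patterns from one genre, so high reconstruction error flags outliers while low error flags genre members.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy genre: patterns clustered near a common template.
template = rng.random(16)
train = template + rng.normal(scale=0.05, size=(200, 16))

# One-hidden-layer auto-associator trained by simple gradient descent.
W1 = rng.normal(scale=0.1, size=(16, 6))
W2 = rng.normal(scale=0.1, size=(6, 16))
lr = 0.05
for _ in range(500):
    h = np.tanh(train @ W1)
    out = h @ W2
    err = out - train
    W2 -= lr * h.T @ err / len(train)
    W1 -= lr * train.T @ ((err @ W2.T) * (1 - h ** 2)) / len(train)

def membership_error(x):
    """Reconstruction error: small for in-genre patterns, large for outliers."""
    return float(np.mean((np.tanh(x @ W1) @ W2 - x) ** 2))

in_genre = membership_error(template + rng.normal(scale=0.05, size=16))
outlier = membership_error(rng.random(16) * 3.0)   # non-representative pattern
```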

When such group membership filters are implemented via STANNOs, they can be instantiated on platforms as humble as PCs and still contain hundreds of millions of inputs, hundreds of millions of outputs, and a significant number of hidden-layer nodes. As a result, we can now connect to cameras and perform anomaly detection, target classification, and training on the order of a million bytes of information at millisecond time scales.

Furthermore, this patent teaches the essential components of what IEI calls 'foveational systems': simple yet elegant two-network systems that allow machines to scan data much the way the eye scans its environment. Essentially, an imagination engine generates a series of coordinates on which to focus attention. Other neural networks may then be added to examine whatever the former is focusing on. The feedback mechanism of the Creativity Machine may then be employed to modulate the magnitude of perturbation, chaotically driving the position of the attention window according to a critic's 'interest' in that window's content. In this way, just like the saccadic movement of the eye, motion is chaotic until it clips a piece of what is being sought, something anomalous, or something of general interest to the system.
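
Below is an illustrative sketch of such a foveational loop, with a hypothetical critic_interest function standing in for the critic: proposed fixation points jump chaotically until the critic's interest in the current attention window damps the perturbation.

```python
import numpy as np

rng = np.random.default_rng(3)
image = rng.random((256, 256))          # stand-in for a camera frame

def critic_interest(patch):
    # Stand-in critic: here, brighter patches are deemed more interesting.
    return float(patch.mean())

pos = np.array([128.0, 128.0])          # current fixation point
noise = 40.0                            # perturbation magnitude (pixels)
for _ in range(200):
    # Imagination engine: propose a new fixation by perturbing the current one.
    pos = np.clip(pos + rng.normal(scale=noise, size=2), 16, 239)
    r, c = int(pos[0]), int(pos[1])
    patch = image[r - 16:r + 16, c - 16:c + 16]
    # Feedback: high interest damps the perturbation (the window dwells), low
    # interest keeps the search saccadic and chaotic.
    noise = 5.0 if critic_interest(patch) > 0.52 else 40.0
```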

4.0 Device Prototyping

Generally, neural network practitioners acknowledge that when building neural networks, one must use so-called activation, or transfer, functions that are mathematically well behaved and can be expressed in some closed, analytic form (e.g., sigmoids and hyperbolic tangents). STANNOs are quite different and break some of these long-standing rules in that they can employ arbitrarily complex activation functions. In fact, STANNOs may use other STANNOs as their individual processing units, resulting in STANNOs of STANNOs, or what we call SuperNets. These are not ordinary neural network cascades but organic cascades in which all component networks train in parallel and, while doing so, autonomously connect themselves into vastly complex neural architectures capable of performing brain-like sensing.
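
A minimal, self-contained sketch of the nesting idea: an outer network whose hidden units are themselves small self-training networks, with every level adapting in parallel. The TinySubnet class is only a stand-in for a full STANNO.

```python
import numpy as np

class TinySubnet:
    """One-layer auto-associator that updates its own weights on each call."""
    def __init__(self, n, lr=0.05, seed=0):
        self.W = np.random.default_rng(seed).normal(scale=0.1, size=(n, n))
        self.lr = lr

    def forward(self, x):
        return np.tanh(x @ self.W)

    def train_step(self, x):
        y = self.forward(x)
        self.W -= self.lr * np.outer(x, (y - x) * (1 - y ** 2))  # in-situ learning

class SuperNet:
    """Outer net fusing subnet outputs; all levels train simultaneously."""
    def __init__(self, subnets, n_out, lr=0.05, seed=1):
        self.subnets = subnets
        width = sum(net.W.shape[1] for net in subnets)
        self.W = np.random.default_rng(seed).normal(scale=0.1, size=(width, n_out))
        self.lr = lr

    def train_step(self, x, target):
        z = np.concatenate([net.forward(x) for net in self.subnets])
        err = z @ self.W - target
        self.W -= self.lr * np.outer(z, err)    # outer fusion layer learns ...
        for net in self.subnets:
            net.train_step(x)                   # ... while each subnet self-trains
        return float(np.mean(err ** 2))

supernet = SuperNet([TinySubnet(4, seed=i) for i in range(3)], n_out=2)
```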

One extremely valuable task that such SuperNets may perform is what we call 'device prototyping'. To illustrate, consider a collection of neural networks within the hidden layer of such a compound net, each of which has been pre-trained to simulate the behavior of various electronic components. Some may be diode simulations, others capacitor or logic-gate models. When the overall system, represented by the SuperNet, is trained upon patterns representing the overall input-output characteristics of the intended electronic device, the STANNO-based simulations will connect themselves into the topology necessary to achieve that function, in the process possibly eroding away connections to device simulations that are unnecessary for device function.
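
A highly simplified sketch of this idea (not the patented method): frozen stand-in functions play the role of pre-trained component models, and only the connection weights to the device output are trained, with a small decay term eroding the connections the target behavior does not need.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-ins for pre-trained component simulations (normally STANNOs).
components = [
    lambda v: np.maximum(v, 0.0),          # idealized diode
    lambda v: (v > 0.5).astype(float),     # threshold / logic-gate model
    lambda v: 1.0 - v,                     # inverter
]

# Trainable connection weights from each component's output to the device output.
w = rng.normal(scale=0.1, size=len(components))
lr, l1 = 0.1, 0.01

# Target behavior to prototype: a half-wave rectifier (a diode alone suffices).
X = rng.uniform(-1, 1, size=200)
Y = np.maximum(X, 0.0)

for _ in range(2000):
    outs = np.stack([c(X) for c in components])     # (n_components, n_samples)
    err = w @ outs - Y
    w -= lr * outs @ err / len(X)                   # fit the target behavior
    w -= l1 * np.sign(w)                            # erode unneeded connections

# Afterward, w should be dominated by the diode model, with the other
# component connections eroded toward zero.
```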

In the same manner, we may allow vast swarms of such STANNOs, and STANNO-based Creativity Machines, to automatically connect themselves into brain-like structures. In this way, very robust machine vision systems may autonomously connect themselves into the equivalent of vision pathways of the brain. Similarly, robotic brains made of STANNOs could spontaneously organize themselves to be capable of devising plans of action that were totally unanticipated by the robot's human creators.

In the language made popular recently by those overlooking these patents, the SuperNet is a hierarchical cascade. However, the cascades these authors speak of are constructed by hand. In stark contrast, SuperNets build themselves.

5.0 Device for the Autonomous Bootstrapping of Useful Information (DABUI)

The fundamental principle of a DAGUI built from adaptive neural network modules such as STANNOs forms the basis of a whole new generation of Creativity Machines that are capable of learning from a totally "blank slate." To do so, such neural architectures, no longer limited to perceptrons, embark upon successive cycles of generating potential ideas and/or strategies, thereafter implementing them via actuators, displays, speakers, etc., while sensors such as sonar, cameras, keyboards, and computer mice feed back measures of utility or value. The more promising notions are reinforced as memories within the imagitrons, while the memories of other candidate ideas are weakened. Simultaneously, the monitoring perceptrons improve their ability to predict the figure of merit of any given notion. Progressively, the imagitron system gains an intuition for those novel patterns that are viable ideas, while the perceptron cascade similarly develops an intuition about the performance of any potential idea. The net result is that such discovery systems are able to quickly converge toward optimal solutions on multi-billion-attribute problems. Other forms of generative AI cannot make this claim, typically failing on problems with as few as ten attributes.
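
A minimal sketch of this bootstrapping cycle, with hypothetical stand-in functions generate and act_and_sense: candidate notions are generated, acted upon, scored by the world, reinforced or forgotten in the generator's memory, and used to refine the critic's predictions.

```python
import numpy as np

rng = np.random.default_rng(5)

memory = []                      # reinforced notions inside the "imagitron"
critic_w = np.zeros(8)           # linear stand-in for the perceptron critic

def generate():
    # Perturb a remembered notion if any exist; otherwise start from noise.
    base = memory[rng.integers(len(memory))] if memory else np.zeros(8)
    return base + rng.normal(scale=0.2, size=8)

def act_and_sense(notion):
    # Stand-in for actuators plus sensors returning a measured utility.
    target = np.linspace(0, 1, 8)              # unknown "ideal" behavior
    return -float(np.sum((notion - target) ** 2))

for cycle in range(5000):
    notion = generate()
    predicted = float(critic_w @ notion)        # critic's guess at merit
    measured = act_and_sense(notion)            # ground truth from the world
    critic_w += 0.01 * (measured - predicted) * notion   # critic improves
    if measured > -0.5:
        memory.append(notion)                   # reinforce promising notions
    if len(memory) > 50:
        memory.pop(0)                           # weaken/forget older candidates
```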

Since the DABUI patents are used extensively in robotic and control problems, wherein sensor and camera suites are used to identify repetitive themes within the external environment (e.g., targets and obstacles), their sensory channels are augmented with hierarchical cascades of group membership filters (GMFs) tasked with identifying such items or activities. Independent claims addressing the use of such auto-associative networks are accordingly covered by this patent. Such GMFs are now used within advanced foveational systems that cumulatively bootstrap their ability to locate a thing or activity of interest.

Finally, DABUIs have the ability to use their sensor suites to spontaneously build navigational fields within either real terrains or complex parameter spaces, so that their ideational modules may conceive paths of least resistance or cost toward some overall objective. Such mechanisms are responsible for intelligent control systems that are aware that they must embark upon some form of retrograde progress before attaining their objectives (i.e., moving around a hazard rather than taking a straight-line path through it).
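
A minimal sketch, assuming a simple 2D occupancy grid, of what such a navigational field can look like: a cost-to-goal field is built around sensed obstacles, and following the lowest-cost neighbor from any start cell traces a path that bends around a hazard rather than through it. This illustrates the navigational-field idea only, not the patented mechanism.

```python
import heapq
import numpy as np

def navigation_field(grid, goal):
    """Cost-to-goal field over a 2D occupancy grid (1 = obstacle)."""
    field = np.full(grid.shape, np.inf)
    field[goal] = 0.0
    frontier = [(0.0, goal)]
    while frontier:
        cost, (r, c) = heapq.heappop(frontier)
        if cost > field[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1] and not grid[nr, nc]:
                if cost + 1.0 < field[nr, nc]:
                    field[nr, nc] = cost + 1.0
                    heapq.heappush(frontier, (cost + 1.0, (nr, nc)))
    return field

grid = np.zeros((10, 10), dtype=int)
grid[2:8, 5] = 1                          # a wall the planner must detour around
field = navigation_field(grid, goal=(9, 9))
# From any start cell, repeatedly stepping to the lowest-valued neighbor yields
# a least-cost path exhibiting the "retrograde progress" described above.
```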





© 1997-2017, Imagination Engines, Inc. | Creativity Machine®, Imagination Engines®, Imagitron®, and DataBots® are registered trademarks of Imagination Engines, Inc.

1550 Wall Street, Ste. 300, St. Charles, MO 63303 • (636) 724-9000