Imagination Engines, Inc., Home of the Creativity Machine


The Big Bang of Machine Intelligence!

  • Three Generations of Creativity Machines

    The simple, elegant, and inevitable path to human level machine intelligence and beyond, the Creativity Machine Paradigm, US Patent 5,659,666 and all subsequent foreign and divisional filings.


Wall Street Journal: Can an AI System Be Given a Patent?

Fast Company: Can a robot be an inventor?

BBC: AI system 'should be recognised as inventor'

Financial Times: Patent agencies challenged to accept AI inventor

Futurism: Scientists are trying to list AI as the inventor on a new patent

The Disruption Lab: The disruption that is DABUS: Beyond AI

ACT-IAC: The dawn of conscious computing

WIRED: This artificial intelligence is designed to be mentally unstable



IEI's Patented Group Membership Filters

Summary - IEI holds the patents on a methodology that uses auto-associative nets to rapidly identify patterns similar to those the network has previously seen in training. Conversely, the same methodology may be used to identify patterns that are non-representative of those already shown to the network. In either usage, we need only show the network representative patterns of what we would like it to detect, without having to expose it to counterexamples. This technique has proven extremely effective in IEI's automated target recognition (ATR), machine vision, and semantic parsing programs. It has also been applied to the company's network intrusion detection, fault detection, and robotic control systems developed in collaboration with both industry and government.

Details - IEI's patented group membership filter (GMF) is based upon a particular way of using auto-associative networks and is an outgrowth of research conducted by our founder in 1975. Such auto-associative networks have input and output layers with the same number of neurons. Through cumulative training, they learn to reconstruct their input patterns at their output layers.

If such networks are trained upon patterns corresponding to a particular genre, such as the bitmaps of human faces, two important things happen: (1) The weights leading to the hidden layer(s), shown in blue in the diagram to the right, self-organize so as to detect the critical features of that genre, such as eyes, ears, chins, and noses, and (2) The output layer weights, shown in red, self-organize so as to capture the inherent constraint relations that define the genre, such as eyes flanking the nose, just above it, and the mouth being under the nose, etc. 

If such a network is trained upon many bitmaps of faces and then shown a new facial bitmap, the appropriate features are detected at the hidden layer(s) and the necessary constraint relations, here topological in nature, are obeyed. The result is that the input pattern, P, is faithfully reconstructed at the output layer as the output pattern P'. If, on the other hand, the bitmap of a foot, for example, is applied as pattern P, the critical features are not recognized, nor are the topological constraints obeyed, and the input pattern is not successfully reconstructed. Whether or not the input pattern belongs to the genre or group G absorbed through training, the difference vector, P-P', indicates how representative the input pattern is of what the network has previously learned.

Note that, in contrast to hetero-associative networks, no counterexamples are needed for training. The network simply learns to recognize things similar to those to which it has previously been exposed.
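The group-membership test above can be sketched in a few lines of code. The patents describe nonlinear multilayer nets; as a minimal stand-in, this sketch uses a linear auto-associator (equivalent to PCA), where a bottleneck plays the role of the hidden layer's learned features and the norm of the difference vector P - P' signals membership. The "genre", dimensions, and data here are illustrative assumptions, not IEI's actual systems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "genre": 16-dimensional patterns confined to a 3-dimensional
# subspace, standing in for the facial bitmaps described above.
basis = rng.normal(size=(3, 16))
train = rng.normal(size=(200, 3)) @ basis    # 200 in-genre training patterns

# A linear auto-associator with a 3-unit bottleneck reduces to PCA:
# its hidden-layer weights play the role of the learned feature detectors.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
features = vt[:3]                            # the genre's "critical features"

def reconstruct(p):
    """Return P', the network's reconstruction of input pattern P."""
    return mean + (p - mean) @ features.T @ features

def membership_error(p):
    """Norm of the difference vector P - P'; small means 'in the group'."""
    return np.linalg.norm(p - reconstruct(p))

in_genre = rng.normal(size=3) @ basis        # a new pattern from the genre
out_genre = rng.normal(size=16)              # a "foot": outside the genre

print(membership_error(in_genre))            # near zero
print(membership_error(out_genre))           # substantially larger
```

As in the text, only representative examples are shown during training; patterns from outside the genre are flagged purely by their large reconstruction error.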

Similarly, the GMF may be used as an anomaly detector. For instance, it may be trained upon the normal behavior of some system, such as the signals from all the sensors in a chemical reactor during its routine operation. As the GMF watches and learns the fundamental behavior of the system, it forms a comprehensive model of it. As any abnormalities arise in the monitored system, the difference vector, P-P', grows. The most anomalous components of this vector quickly indicate which sensors are involved in the anomaly. Thereafter, rule-based systems, or better yet, Creativity Machines, may move the system toward a state that assures the desired process outcome.
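The sensor-localization step can be illustrated with the same linear stand-in: after learning normal behavior, the per-sensor components of P - P' point to the faulty channel. The reactor, its eight sensors, and the fault injected on sensor 5 are all hypothetical, chosen only to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative reactor: 8 sensor channels whose normal readings are driven
# by two underlying process variables (a fixed, assumed mixing).
mixing = np.vstack([np.ones(8), np.arange(8.0)])
normal_ops = rng.normal(size=(500, 2)) @ mixing   # 500 normal snapshots

# Fit the GMF-style model of normal behavior (linear auto-associator / PCA).
mean = normal_ops.mean(axis=0)
_, _, vt = np.linalg.svd(normal_ops - mean, full_matrices=False)
model = vt[:2]

def difference_vector(p):
    """Per-sensor components of P - P' under the learned normal model."""
    p_prime = mean + (p - mean) @ model.T @ model
    return p - p_prime

# Inject a fault: sensor 5 drifts away from its learned correlations
# with the other channels.
snapshot = rng.normal(size=2) @ mixing
snapshot[5] += 4.0

d = difference_vector(snapshot)
print(np.argmax(np.abs(d)))                  # identifies sensor 5 as the culprit
```

Note that the fault is caught not because sensor 5's raw value is extreme, but because its reading violates the constraint relations the model absorbed from normal operation.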

If the patented GMF / anomaly detection scheme is now implemented as a Self-Training Artificial Neural Network Object or STANNO, we achieve classification and anomaly detection schemes that may train and classify patterns having millions of inputs and outputs, on commodity processors, on timescales of the order of milliseconds. Furthermore, GMFs may autonomously connect themselves into brain-like structures, or SuperNets, that perform even more complex classification and anomaly detection tasks.

Currently, STANNO-based GMFs and anomaly detectors form the basis of our network intrusion and fraud detection systems, advanced machine vision applications, and creative robots.



© 1997-2020, Imagination Engines, Inc. | Creativity Machine®, Imagination Engines®, Imagitron®, and DataBots® are registered trademarks of Imagination Engines, Inc.

1550 Wall Street, Ste. 300, St. Charles, MO 63303 • (636) 724-9000