HOME OF THE CREATIVITY MACHINE
The Big Bang of Machine Intelligence!
Creative Control Systems for Robots
Summary - IEI builds a unique form of neurocontrol system that enables robots to ad-lib tactics and strategies that vastly exceed their baseline experience. Fully capable of autonomously learning from their own mistakes and successes, our revolutionary neural network architectures allow complex robots to learn critical behaviors completely from scratch. In a matter of seconds or minutes, the equivalent of 'cybernetic road kill' can devise complex movement strategies, recover from various mishaps, or accomplish broadly defined missions. The same neural architecture can recruit and interconnect with other neural network modules so as to build vast, brain-like neural structures into extremely complex perceptual systems and improvisational actuator circuitry.
For instance, a complex hexapod robot with an 18-servo leg system
exploits its onboard sonar to judge its forward progress as its Creativity
Machine-based control system experiments with itself and cumulatively learns how
to walk the insectoid robot efficiently. Once trained on this baseline behavior,
the same Creativity Machine architecture can then instantaneously invent the
necessary derivative behaviors (i.e., backward, right turn, left turn, and
crab-like sidle motion) on demand. The same robot may then enlist the Creativity
Machine to automatically connect and coordinate sensors with actuators through a
cascade of neural network modules that we call a "Supernet."
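At its core, the trial-and-error gait learning described above can be sketched as a stochastic hill-climb over the servo parameters, with the sonar's forward-progress reading serving as the fitness signal. The sketch below is illustrative only: the fitness function, parameter encoding, and learning rule are assumptions for demonstration, not IEI's actual Creativity Machine algorithm.

```python
import random

NUM_SERVOS = 18  # the hexapod's leg system, per the text

def forward_progress(params):
    """Toy stand-in for sonar-measured forward progress: distance
    walked peaks when the servo parameters hit an (unknown) ideal
    gait. A real fitness signal would come from the robot's sonar."""
    ideal_gait = [0.5] * NUM_SERVOS
    return -sum((p - g) ** 2 for p, g in zip(params, ideal_gait))

def learn_gait(trials=2000, step=0.1, seed=0):
    """Cumulative trial-and-error: perturb the current best servo
    parameters and keep any candidate that walks farther."""
    rng = random.Random(seed)
    best = [rng.uniform(0.0, 1.0) for _ in range(NUM_SERVOS)]
    best_score = forward_progress(best)
    for _ in range(trials):
        candidate = [p + rng.gauss(0.0, step) for p in best]
        score = forward_progress(candidate)
        if score > best_score:  # learn from successes, discard mistakes
            best, best_score = candidate, score
    return best, best_score
```

Because the loop only ever keeps improvements, the gait gets cumulatively better over seconds of trials, mirroring the "learning from its own mistakes and successes" described above.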
Within such a compound neural architecture, certain neural modules specialize in
generating navigation fields, while other such modules automatically interconnect
themselves into subsidiary Creativity Machines that then study this attractor
landscape to plot a path of least resistance through it. Once this synthetic
central nervous system has built itself, the host robot contemplates its
environment by moving its camera/sensor stalk to study its surroundings, fully
appreciating that it may have to commit to a retrograde rather than a
straightforward trajectory to ultimately close with its intended target.
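One classic way to realize a navigation field of the kind described here is a potential field: the target acts as an attractor, obstacles act as repulsors, and a path of least resistance is found by descending the resulting landscape. The sketch below is a minimal illustration under assumed field shapes, gains, and geometry, not the Supernet's actual learned fields:

```python
import math

def potential(p, goal, obstacles):
    """Attractor landscape: a quadratic pull toward the goal plus a
    short-range repulsion from each obstacle."""
    u = 0.5 * ((p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2)
    for ox, oy in obstacles:
        u += 1.0 / max(math.hypot(p[0] - ox, p[1] - oy), 1e-6)
    return u

def plot_path(start, goal, obstacles, step=0.05, iters=500):
    """Descend the numerical gradient of the field -- tracing the
    path of least resistance through the attractor landscape."""
    p, eps = list(start), 1e-4
    path = [tuple(p)]
    for _ in range(iters):
        gx = (potential((p[0] + eps, p[1]), goal, obstacles) -
              potential((p[0] - eps, p[1]), goal, obstacles)) / (2 * eps)
        gy = (potential((p[0], p[1] + eps), goal, obstacles) -
              potential((p[0], p[1] - eps), goal, obstacles)) / (2 * eps)
        p = [p[0] - step * gx, p[1] - step * gy]
        path.append(tuple(p))
        if math.hypot(p[0] - goal[0], p[1] - goal[1]) < 0.3:
            break
    return path

path = plot_path(start=(0.0, 0.0), goal=(5.0, 5.0), obstacles=[(2.5, 2.4)])
```

With the obstacle sitting just off the straight line from start to goal, the descent bends around it rather than through it, and with other field shapes the cheapest route can just as easily be a retrograde one.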
More than a decade ago, IEI
pioneered the necessary methodologies to combine our virtual and real world
robotics efforts so that systems like the hexapod robot could rehearse their
behaviors in the equivalent of a dream state. Using this approach, robots would
first learn fundamental behaviors, such as walking, in the equivalent of
physical, waking reality. In their virtual dream state, the robots would then
bootstrap these basic functions into more sophisticated ones. Immersed in its
own game world, similar to its intended mission environment, the robot would
then learn to deal with unexpected scenarios generated by yet another Creativity
Machine. Similarly, swarms of such robots could bootstrap cooperative behaviors
via TCP/IP channels, allowing them to efficiently map out the floor plan of a
building, for instance, using a communal, neural network based memory.
Thereafter, the physical robots were fully prepped for action within their
intended mission environments.
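Under one reading, the waking/dreaming split above is a two-phase training loop: a behavior is first fitted against the "waking" environment, then refined against randomized variants of that environment produced by a scenario generator. Everything in the sketch below (the toy performance measure, the hill-climbing learner, the Gaussian scenario generator) is an illustrative assumption, not IEI's method:

```python
import random

def performance(params, scenario):
    """Toy mission score: how closely the behavior parameters match a
    scenario's demands (a stand-in for simulated mission outcome)."""
    return -sum((p - s) ** 2 for p, s in zip(params, scenario))

def hill_climb(params, scenarios, trials, rng, step=0.05):
    """Keep any perturbation that raises the mean score across scenarios."""
    best = list(params)
    best_score = sum(performance(best, s) for s in scenarios) / len(scenarios)
    for _ in range(trials):
        cand = [p + rng.gauss(0.0, step) for p in best]
        score = sum(performance(cand, s) for s in scenarios) / len(scenarios)
        if score > best_score:
            best, best_score = cand, score
    return best

rng = random.Random(0)
waking_world = [0.5, 0.5, 0.5]

# Phase 1: waking reality -- learn the fundamental behavior.
policy = hill_climb([0.0, 0.0, 0.0], [waking_world], trials=500, rng=rng)

# Phase 2: dream state -- a scenario generator perturbs the mission
# environment and the behavior is refined against the unexpected.
dreams = [[w + rng.gauss(0.0, 0.2) for w in waking_world] for _ in range(20)]
policy = hill_climb(policy, dreams, trials=500, rng=rng)
```

The dream phase trades a little accuracy on the original environment for robustness across the generated scenarios, which is the point of rehearsing before deployment.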
As an additional example of such a "dream-learning" process, consider an exercise we conducted with both the US Air Force and NASA, in which an 1,800-pound air sled, levitated on a cushion of air, autonomously rendezvoused and docked with its intended target. In just a few minutes, the IEI creative control system knit together a collection of separate STANNO modules into an effective perceptual system that integrated the multimillion-byte video stream from a simple Logitech camera with outputs from both an accelerometer and a gyroscope. The perceptual system then interconnected itself with a self-bootstrapping Creativity Machine that, during its off cycle, dreamed in virtual reality, refining its strategies for seeking and docking with its target and learning how to effectively coordinate the firing of 18 digital air thrusters positioned around the sled's periphery. The end result, shown in the accompanying video, is a control system for autonomous rendezvous and docking that circumvents the need for otherwise costly laser guidance components.
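Coordinating many on/off thrusters is, at bottom, a discrete control-allocation problem: each firing pattern yields a net force and torque, and the controller must pick the pattern that best matches what the guidance layer requests. The greedy sketch below uses an invented ring geometry and torque model, not the actual sled's, and stands in for whatever allocation policy the Creativity Machine learned:

```python
import math

N = 18  # digital (on/off) air thrusters around the sled's periphery

# Invented geometry: thruster i sits at angle a on a unit ring, fires
# inward with unit force, and contributes a small made-up torque.
THRUSTERS = [(-math.cos(2 * math.pi * i / N),
              -math.sin(2 * math.pi * i / N),
              0.1 * math.sin(6 * math.pi * i / N)) for i in range(N)]

def allocate(fx, fy, tz):
    """Greedily switch on thrusters while doing so shrinks the error
    between the commanded (fx, fy, tz) and the achieved net effect."""
    def err(n):
        return (n[0] - fx) ** 2 + (n[1] - fy) ** 2 + (n[2] - tz) ** 2
    on, net = [False] * N, [0.0, 0.0, 0.0]
    improved = True
    while improved:
        improved = False
        for i, (tfx, tfy, ttz) in enumerate(THRUSTERS):
            trial = [net[0] + tfx, net[1] + tfy, net[2] + ttz]
            if not on[i] and err(trial) < err(net):
                on[i], net, improved = True, trial, True
    return on, net
```

Greedy selection is only an approximation to the best firing pattern, but it illustrates how discrete thrusters can jointly approximate a continuous force command.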
To those familiar with the
principles of connectionism and neural networks, what all of this says is that
the entire future of totally autonomous robots amounts to novel pattern
generation by a set of artificial neural nets, typically within the context of
sensor inputs applied to them, followed by the selection of the most appropriate
of these action sequences by yet another set of such networks. This creative
brainstorming session between neural nets is what we call a Creativity Machine.
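That generate-then-select tandem can be caricatured in a few lines: a noise-perturbed generator proposes variants of a learned pattern, and a second module, the critic, picks the proposal best suited to the current sensor context. Both functions below are toy stand-ins for the actual neural networks:

```python
import random

def generator(base_pattern, noise, rng):
    """The imagining half: perturb a learned pattern with internal
    noise to propose a novel variant."""
    return [x + rng.gauss(0.0, noise) for x in base_pattern]

def critic(pattern, context):
    """The selecting half: score a proposal against the current
    sensor context (a toy stand-in for the second network)."""
    return -sum((p - c) ** 2 for p, c in zip(pattern, context))

def creativity_step(base_pattern, context, candidates=200, noise=0.3, seed=0):
    """Brainstorm many variants; keep the one the critic likes best."""
    rng = random.Random(seed)
    proposals = [generator(base_pattern, noise, rng) for _ in range(candidates)]
    return max(proposals, key=lambda p: critic(p, context))

base = [0.0, 0.0, 0.0]      # a learned baseline action pattern
context = [0.4, -0.2, 0.1]  # current sensor inputs
best = creativity_step(base, context)
```

The noise level is the creative dial: too little and the system only replays what it already knows, too much and the critic is left sifting random babble.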
© 1997-2020, Imagination Engines, Inc. | Creativity Machine®, Imagination Engines®, Imagitron®, and DataBots® are registered trademarks of Imagination Engines, Inc.
1550 Wall Street, Ste. 300, St. Charles, MO 63303 • (636) 724-9000