Creative Control of Robots in Virtual Reality

The Neural Dancer, 1998

IEI's creative robots program began in 1998 with the so-called "Neural Dancer." In short, the company wanted to produce realistic animations that did surprising things without the need for extensive programming. In a proof-of-principle experiment, a human volunteer assumed 12 distinct poses while a machine vision application captured the locations of their joints and extremities. IEI's first Creativity Machine patent was then applied to the problem of generating the random dynamics of the resulting 33-degree-of-freedom stick figure, with the generator, called an "imagitron," running recurrently as the discriminator selected for unique pose sequences. In the figure to the right, the stick figure invents realistic kinematic sequences, often producing poses the human volunteer never actually assumed.
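
The generate-and-filter loop can be pictured in a few lines of code. The following is a minimal sketch only, assuming a toy recurrent map in place of the trained imagitron and a simple novelty test in place of the patented discriminator; the array sizes, thresholds, and function names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N_JOINTS = 33                                   # degrees of freedom of the stick figure
captured = rng.uniform(-1, 1, (12, N_JOINTS))   # stand-in for the 12 captured poses

# Toy "imagitron": a recurrent map whose weights are perturbed on every step,
# so the pose trajectory keeps wandering instead of settling into a cycle.
W = rng.normal(scale=0.3, size=(N_JOINTS, N_JOINTS))

def imagitron_step(pose, noise=0.05):
    W_noisy = W + rng.normal(scale=noise, size=W.shape)
    return np.tanh(W_noisy @ pose)

def discriminator(candidate, history, novelty_tol=0.5):
    """Keep only poses that differ appreciably from the recent sequence."""
    return all(np.linalg.norm(candidate - h) > novelty_tol for h in history[-5:])

pose, sequence = captured[0], []
for _ in range(2000):                           # cap the search so the sketch terminates
    candidate = imagitron_step(pose)
    if discriminator(candidate, sequence):
        sequence.append(candidate)
        pose = candidate
    if len(sequence) == 50:
        break
print(f"generated {len(sequence)} novel poses")
```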

Shortly thereafter, the military saw this animation and realized how simple and straightforward this methodology was: a robotic control system could be exposed to the kinematic states of a complex robot, and the Creativity Machine within it could produce plausible action sequences while a discriminator chose and implemented the most appropriate reactions to presented circumstances.


Tabula Rasa Roach, 2000

Fast forward to 2000, when the same military sponsor asked whether such Creativity Machine-based control systems could be used to teach arbitrarily complex robots to walk and to self-invent a range of other locomotive strategies. Provided only the geometrical specification for the robot, here a 36-degree-of-freedom roach crawler, the insectoid robot was able to learn to walk in just a few seconds. In this historic sequence, the robot first flails about, struggling against the simulation's artificial gravity and friction, before devising a standing posture. With a few more seconds of self-experimentation, the robot scampers off to the right.
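
A hedged sketch of that self-experimentation loop, under the assumption that learning amounted to trial-and-error hill climbing: gait parameters are perturbed at random, run through a stand-in evaluation, and kept only when they improve forward progress. The simulate_gait() function below is a crude placeholder, not IEI's physics simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
N_DOF = 36                                  # joint count of the roach crawler

def simulate_gait(params):
    """Toy stand-in for the physics engine: rewards large, phase-coherent strides."""
    phases, amps = params[:N_DOF], params[N_DOF:]
    # coordinated (in-phase) legs reinforce one another; flailing legs cancel out
    return float(np.abs(np.sum(np.clip(amps, 0.0, 1.0) * np.exp(1j * phases))))

best = rng.uniform(-1, 1, 2 * N_DOF)        # start from uncoordinated "flailing"
best_score = simulate_gait(best)
for _ in range(300):                        # a few seconds of self-experimentation
    trial = best + rng.normal(scale=0.1, size=best.shape)
    if (score := simulate_gait(trial)) > best_score:
        best, best_score = trial, score     # keep only the improvements
print(f"forward progress proxy improved to {best_score:.2f}")
```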

With continued training, this “roachbot” learned to run forward as well as backward. Provided the motivation, the robot could run even faster using a bipedal strategy, both in the upright position and in a ‘handstand’ posture. Entomologists watching these simulations agreed that these were indeed realistic roach strategies: under threat, these insects can assume an upright running position, using the air pocket in front of them for the necessary support. In fact, these roachbots could improvise a wide range of locomotive strategies in response to their own self-generated navigation fields, approaching various targets while evading obstacles and threats along the way.
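
Such a self-generated navigation field can be thought of as an attractive potential around the target plus repulsive potentials around obstacles and threats, with the bot simply descending the field's gradient. The sketch below illustrates only that idea; the gains, radii, and obstacle positions are invented and do not reflect IEI's actual field construction.

```python
import numpy as np

target = np.array([9.0, 9.0])
obstacles = [np.array([4.0, 5.0]), np.array([6.0, 3.0])]    # stand-ins for obstacles/threats

def field_gradient(p, k_att=1.0, k_rep=4.0, rep_radius=2.0):
    """Gradient of an attractive-plus-repulsive navigation field at position p."""
    grad = k_att * (p - target)                              # pull toward the target
    for obs in obstacles:
        d = np.linalg.norm(p - obs)
        if 1e-6 < d < rep_radius:                            # push away only when close
            grad -= k_rep * (1.0 / d - 1.0 / rep_radius) * (p - obs) / d**3
    return grad

pos = np.array([0.0, 0.0])
for _ in range(400):
    pos = pos - 0.05 * field_gradient(pos)                   # descend the field
    if np.linalg.norm(pos - target) < 0.1:
        break
print(f"final position {pos.round(2)}, distance to target {np.linalg.norm(pos - target):.2f}")
```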


Invading Hypothetical Facilities in Virtual Reality, 2003

Impressed with these experiments in virtual reality, the customer supplied a series of hypothetical underground facilities, which the roachbot (in red) was required to enter cleverly, say through a ventilation shaft or window. In the short video to the right, the virtual robot coordinates its virtual sonar returns with its improvised leg movements to traverse a hallway within the facility. After 30 minutes, the bot was able to explore all three floors of the facility shown, in the process even inventing the necessary leg motion to climb stairs! At one point, the robot was able to scale the blue coordinate system (which it considered real) and crawl upside down along its horizontal axes!

Note that the leg motion is not an exercise of artistic license: the robot's movements are the result of its self-conceived leg motions. In effect, this is the equivalent of a cartoon character first learning to stand, walk, and strategize where or how it is going next, all within simulated game physics. At the heart of this control system is a Creativity Machine, determining lengthy motion sequences based upon the cumulative inputs to its virtual sensor suite.
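
One way to picture that sense-propose-select loop, purely as a hedged sketch: a generator proposes candidate motion sequences, a discriminator scores them against the current simulated sonar returns, and the winner is executed. The one-dimensional corridor model, scoring rule, and all names below are assumptions for illustration, not the actual control system.

```python
import numpy as np

rng = np.random.default_rng(2)

def sonar_returns(pos, walls=(0.0, 3.0)):
    """Distances to the left and right hallway walls in a toy corridor."""
    return pos[1] - walls[0], walls[1] - pos[1]

def propose_sequences(n=16, length=8):
    """Candidate sequences of small (dx, dy) steps."""
    return rng.normal(scale=0.2, size=(n, length, 2))

def score(seq, pos):
    """Prefer forward progress while keeping clear of both walls."""
    end = pos + seq.sum(axis=0)
    clearance = min(sonar_returns(end))
    return end[0] - pos[0] + (0.0 if clearance > 0.5 else -10.0)

pos = np.array([0.0, 1.5])
for _ in range(20):                              # traverse the corridor
    candidates = propose_sequences()
    best = max(candidates, key=lambda s: score(s, pos))
    pos = pos + best.sum(axis=0)                 # "execute" the chosen sequence
print(f"end of corridor run: x = {pos[0]:.2f}, y = {pos[1]:.2f}")
```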


Roachbot Swarm Maps Randomly Generated Mazes, 2005

In view of the single roachbot's successful performance in a range of hypothetical facilities, IEI demonstrated that a swarm of these insectoid crawlers could cooperatively map a facility with minimal overlap between robots. Here, six robots cooperatively map a randomly generated maze, communicating via TCP/IP. The developing blueprint reveals the facility's layout, while the grayscale map depicts where the robots have not yet been, a map that is available to all bots within this collective intelligence.
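
A minimal sketch of the minimal-overlap idea, under the assumption that the shared grayscale map behaves like a common coverage grid: each bot greedily claims the nearest still-unexplored cell that no teammate has claimed in the same round. The grid size, the teleport-style motion model, and the shared-memory shortcut (the real swarm communicated over TCP/IP) are all simplifications.

```python
import numpy as np

GRID = np.ones((20, 20))                         # shared map: 1 = unexplored, 0 = mapped
bots = [np.array([0, 0]), np.array([0, 19]), np.array([19, 0]),
        np.array([19, 19]), np.array([10, 0]), np.array([0, 10])]

def nearest_unexplored(pos, claimed):
    """Closest unexplored cell (Manhattan distance) not already claimed this round."""
    cells = [c for c in map(tuple, np.argwhere(GRID > 0)) if c not in claimed]
    return min(cells, key=lambda c: abs(c[0] - pos[0]) + abs(c[1] - pos[1])) if cells else None

while GRID.sum() > 0:
    claimed = set()
    for i, pos in enumerate(bots):
        goal = nearest_unexplored(pos, claimed)
        if goal is None:
            break
        claimed.add(goal)                        # keeps teammates from duplicating effort
        bots[i] = np.array(goal)                 # toy motion: one claimed cell per round
        GRID[goal] = 0                           # mark the cell as mapped on the shared grid
print("maze fully mapped with", len(bots), "bots")
```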

Later, similar roach swarms infiltrated models of actual underground facilities while devising optimal methodologies for neutralizing them. Simulated mine sweepers working collectively in virtual reality were able to detonate IEDs without endangering other swarm members. More advanced virtual robots were able to skillfully manipulate simulated hazards (e.g., weapons and ordnance) and remove them to a safe distance.


12-Wheeled Robot Learns from Blank Slate, 2005

Asked by NASA Langley to allow a robot with three omnidirectional wheels to learn from a blank slate, we did just that, within 30 seconds. Remarkably, we were then able to repeat that accomplishment using a twelve-wheeled equivalent in roughly the same time frame!

In the sequence shown to the right, the robot randomly experiments with different motions of its 12 omnidirectional wheels. The robot is then tested to see whether it can coordinate its wheel rotations to move itself in a straight line, quite a compromise when one realizes that some of these wheels resist any straight-line motion. Finally, the robot is directed to execute pure rotational motion, and it does so, pivoting in place and not translating one iota!
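
The coordination problem can be illustrated with standard omnidirectional-wheel kinematics: each wheel contributes the projection of the body's rigid-body velocity onto its drive axis, so one set of wheel speeds yields pure translation and another yields pure rotation. The 12-wheel circular layout below is an invented example rather than the NASA or IEI geometry.

```python
import numpy as np

N_WHEELS = 12
angles = np.linspace(0, 2 * np.pi, N_WHEELS, endpoint=False)
R = 1.0                                           # wheels mounted on a circle of radius R
positions = R * np.column_stack([np.cos(angles), np.sin(angles)])
drive_dirs = np.column_stack([-np.sin(angles), np.cos(angles)])    # tangential drive axes

def wheel_speeds(vx, vy, omega):
    """Wheel surface speeds that realize body velocity (vx, vy) and spin omega."""
    speeds = []
    for p, d in zip(positions, drive_dirs):
        v_point = np.array([vx - omega * p[1], vy + omega * p[0]])  # rigid-body velocity at the wheel
        speeds.append(d @ v_point)                # project onto the wheel's drive direction
    return np.array(speeds)

print("straight line:", wheel_speeds(1.0, 0.0, 0.0).round(2))   # speeds vary with mounting angle; some wheels barely help
print("pure rotation:", wheel_speeds(0.0, 0.0, 1.0).round(2))   # identical speeds: the robot pivots in place
```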

This and other VR robots were able to use virtual cameras to spot various targets in their environments and then capture and return them to predesignated locations.


Special thanks go out to the Air Force Research Laboratory (AFRL/MNAV) for making this research possible under its Phase II SBIR, "Creative Robots to Defeat Deeply Buried Targets."