
HOME OF THE CREATIVITY MACHINE 

The Big Bang of Machine Intelligence!

IEI Patent Overview

The Creativity Machine Paradigm, US Patent 5,659,666 and all subsequent foreign and divisional filings: the simple, elegant, and inevitable path to human-level machine intelligence and beyond.

Join Dr. Stephen Thaler's LinkedIn discussion group, "AI's Best Bet."

Join Dr. Stephen Thaler's circle at Google+.


IEI's Patented Self-Training Artificial Neural Network Object

Summary - Be careful when AI practitioners state that neural networks are self-learning! What they really mean is that a script (i.e., a training algorithm) is governing the development of connections between neurons so that the neural net as a whole can absorb knowledge. The network can certainly become wiser, but it isn't in any way mentoring itself; it is being trained by an external computer algorithm. In 1996, with the assistance of a Creativity Machine, our founder discovered how to build a new kind of neural network that learned without recourse to such a script. In effect, it was a neural network (neurons and connections, sans the traditional training algorithm) that could spontaneously absorb knowledge. Oh, and by the way, it was orders of magnitude faster and more efficient than any other neural network learning scheme, enabling neural networks having billions of connections to train on mere Pentium-based PCs. Furthermore, collections of such "Self-Training Artificial Neural Network Objects" could connect themselves into compound neural networks, what we call SuperNets. In effect, brains could now assemble themselves in silicon!

Details - When a neural network expert talks about training a net, what he or she means is that a very cleverly contrived computer algorithm is being used to 'mathematically punish and reward' the connection weights within that neural net, forcing it to accurately absorb memories and the complex relationships inherent within its training patterns. Typically, these training algorithms, based upon partial derivatives, take the form of explicit code readable by a neural-network-savvy programmer. Note that once the trained neural network is detached from this training algorithm, it can no longer be trained, and hence cannot adapt to new data within its environment.
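To make this concrete, here is a minimal sketch of such an external training script: a gradient-descent delta rule driving a single-layer sigmoid network. This is our own Python/NumPy illustration, not IEI code, and every name and the toy dataset in it are assumptions made for the example.

    import numpy as np

    # A conventional training "script" that punishes/rewards connection
    # weights via the gradient-descent delta rule on a sigmoid layer.
    rng = np.random.default_rng(0)
    X = rng.random((100, 3))                                 # 100 training patterns, 3 inputs
    y = (X.sum(axis=1, keepdims=True) > 1.5).astype(float)   # toy target

    W = rng.normal(scale=0.1, size=(3, 1))                   # connection weights
    b = np.zeros((1, 1))
    lr = 0.5                                                 # learning rate

    for epoch in range(1000):
        out = 1.0 / (1.0 + np.exp(-(X @ W + b)))             # forward pass
        err = out - y                                        # "punishment" signal
        # The external algorithm, not the net itself, adjusts the weights:
        grad = err * out * (1 - out)
        W -= lr * X.T @ grad / len(X)
        b -= lr * grad.mean(axis=0, keepdims=True)

Detach the loop from the net and the weights freeze; the "intelligence" of conventional training lives entirely in this external script.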

IEI neural networks break this paradigm completely in that no human-invented mathematics is involved. We simply combine an untrained neural network (i.e., a trainee) with a network that has learned by example how to train another net (i.e., a trainer). Furthermore, we weave trainer and trainee networks together so as to create a monolithic neural network that automatically trains when introduced to data. We may build class templates (i.e., cookie-cutters for more of these self-learning nets) through which we may instantiate hundreds or thousands of these so-called "Self-Training Artificial Neural Network Objects," or STANNOs, on common PCs. Working as a cooperative swarm, these STANNOs may collectively exhaust all potential discoveries concealed within vast databases or even the Internet. If, instead of using pre-trained neural networks in the Creativity Machine, we use STANNO modules, we produce a neural architecture that can learn from its own successes and failures.
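The patented STANNO construction is not published as source code, so the following is only a hedged sketch, under our own assumptions, of the trainer/trainee idea: a small "trainer" network (here with hand-initialized weights standing in for weights that would themselves have been learned by example) proposes each weight change from local signals, so the woven-together whole adapts whenever data flows through it.

    import numpy as np

    # Hedged sketch only: not IEI's implementation. The trainer network
    # replaces the hand-derived update rule of conventional training.
    rng = np.random.default_rng(1)

    def trainer_net(x_i, e_j, V):
        """Tiny net mapping local signals (input activity, error) to a
        weight change. In a real system V would itself have been learned
        by example; here it is simply initialized for illustration."""
        h = np.tanh(V[0] * x_i + V[1] * e_j + V[2])
        return V[3] * h

    V = rng.normal(scale=0.1, size=4)        # trainer's own (pre-learned) weights
    W = rng.normal(scale=0.1, size=(3, 1))   # trainee's connection weights

    def stanno_step(x, y_target):
        """One exposure to data: the monolithic net both answers and adapts."""
        global W
        out = np.tanh(x @ W)                 # trainee forward pass
        err = out - y_target                 # error signal
        for i in range(W.shape[0]):          # trainer proposes every delta
            for j in range(W.shape[1]):
                W[i, j] -= trainer_net(x[0, i], err[0, j], V)
        return out

    x = rng.random((1, 3))
    stanno_step(x, np.array([[1.0]]))

The point of the sketch is structural: no partial-derivative rule appears anywhere; the weight update comes from another network.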

STANNOs may also connect themselves into impressively large, compound neural networks. Rather than neurons, the fundamental building blocks of these immense neural structures are smaller neural networks. If, for example, each of these sub-networks models the behavior of some hardware component (e.g., an electrical device such as a transistor, capacitor, or resistor), the sub-networks may then spontaneously connect themselves into a functioning virtual system (e.g., a transistor radio or digital computer), showing us the required topology between components to attain the desired function. Further, if each sub-network represents some fundamental analogy base, these neural modules may spontaneously connect themselves into human-interpretable theories.
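As a hedged illustration of that self-assembly idea, the toy sketch below uses simple functions as stand-ins for component-modeling sub-networks, and an exhaustive search as a stand-in for whatever connection mechanism the real system uses; none of the names or logic here are IEI's implementation.

    import itertools
    import numpy as np

    # Toy stand-ins for trained component-modeling sub-networks.
    modules = {
        "double": lambda v: 2 * v,
        "inc":    lambda v: v + 1,
        "square": lambda v: v * v,
    }

    target = lambda v: (2 * v + 1) ** 2      # desired overall system behavior

    def score(chain):
        """How well does this wiring reproduce the target on test inputs?"""
        xs = np.linspace(-2, 2, 9)
        out = xs.copy()
        for name in chain:                   # feed each module's output forward
            out = modules[name](out)
        return -np.mean((out - target(xs)) ** 2)

    best = max(itertools.product(modules, repeat=3), key=score)
    print(best)   # -> ('double', 'inc', 'square'): the topology that works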

STANNOs may thus dock and interconnect with one another so as to form brain-like structures, wherein we observe a loose division of labor among component networks, as in the human brain. In this way, we allow large collections of STANNOs to 'grow' complex brain pathways for use in our advanced machine vision and robotic control systems. Rather than being simple neural cascades composed of interwoven neural network modules that are pre-trained and static, these SuperNets, as they are called, are composed of individual neural network modules that train in situ, in real time.

STANNOs also form the basis of our highly advanced and flexible neural network trainer, PatternMaster™, allowing users to produce trained neural network models at unprecedented speeds. Because of their extraordinary efficiency, they can handle immense problems even on desktop PCs, easily dealing with genomics-sized problems having millions of inputs and outputs. Furthermore, using STANNOs, the neural network model can be interrogated even as it trains!
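PatternMaster's internals are proprietary, so the sketch below only illustrates, under our own assumptions, why in-situ training permits interrogation: when the learner is a live object rather than the output of a detached script, probes can be interleaved with weight updates.

    import numpy as np

    # Illustrative only: interrogating a model between training steps.
    rng = np.random.default_rng(2)
    X = rng.random((200, 4))
    y = X @ np.array([[1.0], [-2.0], [0.5], [3.0]])   # toy linear target
    W = np.zeros((4, 1))

    for epoch in range(50):
        W -= 0.1 * X.T @ (X @ W - y) / len(X)         # one training step
        probe = np.array([[0.1, 0.2, 0.3, 0.4]])
        print(epoch, (probe @ W).item())              # query the live model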




© 1997-2012, Imagination Engines, Inc. | Creativity Machine®, Imagination Engines®, and DataBots® are registered trademarks of Imagination Engines, Inc.