Imagination Engines, Inc., Home of the Creativity Machine

HOME OF THE CREATIVITY MACHINE 

The Big Bang of Machine Intelligence!

  • Three Generations of Creativity Machines

    The simple, elegant, and inevitable path to human-level machine intelligence and beyond: the Creativity Machine Paradigm, US Patent 5,659,666 and all subsequent foreign and divisional filings.

     

IEI IN THE NEWS!
10-11-2019
Wall Street Journal: Can an AI System Be Given a Patent?

8-1-2019
Fast Company: Can a robot be an inventor?

8-1-2019
BBC: AI system 'should be recognised as inventor'

8-1-2019
Financial Times: Patent agencies challenged to accept AI inventor

8-1-2019
Futurism: Scientists are trying to list AI as the inventor on a new patent

7-25-2019
The Disruption Lab: The disruption that is DABUS: Beyond AI

1-16-2019
ACT-IAC: The dawn of conscious computing

11-8-2017
WIRED: This artificial intelligence is designed to be mentally unstable

 

 

Imagination Engines (a.k.a. "Imagitrons")

Summary - In 1975, our founder made an amazing discovery: if an artificial neural network is trained upon all that is known about some realm of knowledge and then internally 'tickled' at just the right level, it outputs new potential ideas (i.e., novel or confabulated patterns) that are based upon its accumulated training patterns. Such a neural discovery engine is remarkable in that it takes only seconds or minutes to form itself. Once combined with another neural network that models human perception of what constitutes a novel or salient idea, these tandem neural nets may brainstorm new and potentially valuable ideas within a generative neural architecture called a "Creativity Machine."

Details - An Imagination Engine may be formed from any number of trained artificial neural networks, whether monolithic, recurrent, or deep, that are stimulated by any combination of internal or external noise to generate new ideas and/or plans of action. These generative neural architectures are an outgrowth of scientific experiments conducted in 1975 by our founder, Dr. Stephen Thaler. In these initial investigations, neural networks were trained upon a collection of patterns representing some conceptual space (e.g., examples of music, literature, or known chemical compounds), and their internal connection weights were then varied by small, random amounts. Astonishingly, Thaler found that if such synaptic tickling was of sufficient strength, the network's output units would predominantly activate into patterns representing new potential concepts generalized from the original training exemplars (i.e., new music, new literature, or new chemical compounds, respectively, that it had never been exposed to through learning). In effect, the network was "thinking out of the box," producing new and coherent knowledge based upon its memories, all because of the carefully 'metered' noise being injected into it. From an engineering point of view, this is quite phenomenal: a neural network trains upon representative data for just a few seconds and then generates whole new ideas, concepts, and strategies based upon that brief learning experience. In effect, we quickly and economically create an engine for invention and discovery within focused knowledge domains.

To illustrate this phenomenon within a single neural net, the perturbation level may be thought of as a 'temperature.' If the perturbation level is below a critical point denoted ξ (the Greek letter xi), the network preferentially outputs learned, or 'rote,' memories. Just above this critical point, the network predominantly outputs novel patterns that represent potentially plausible ideas. Finally, at high levels of perturbation, the network outputs nonsensical patterns. (For more mathematical details on this process, see A Quantitative Model of Seminal Cognition: The Creativity Machine Paradigm.)
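The sketch below is a minimal, self-contained illustration of this behavior, not IEI's implementation: a tiny autoencoder is trained on a handful of binary exemplar patterns, its connection weights are then perturbed with Gaussian noise of a chosen level, and the resulting outputs are sorted into rote memories, novel patterns, or nonsense by their Hamming distance to the nearest exemplar. All network sizes, noise levels, and thresholds here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy "conceptual space": a handful of 8-bit exemplar patterns to memorize.
exemplars = rng.integers(0, 2, size=(6, 8)).astype(float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Train an 8-6-8 autoencoder on the exemplars with plain batch gradient descent.
W1 = rng.normal(0, 0.5, (8, 6)); b1 = np.zeros(6)
W2 = rng.normal(0, 0.5, (6, 8)); b2 = np.zeros(8)
for _ in range(5000):
    h = sigmoid(exemplars @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    d2 = (y - exemplars) * y * (1 - y)            # output-layer error signal
    d1 = (d2 @ W2.T) * h * (1 - h)                # hidden-layer error signal
    W2 -= 0.5 * h.T @ d2;         b2 -= 0.5 * d2.sum(0)
    W1 -= 0.5 * exemplars.T @ d1; b1 -= 0.5 * d1.sum(0)

def sample(noise, n=500):
    # "Tickle" the synapses with noise of the given level and read the outputs.
    outs = []
    for _ in range(n):
        pW1 = W1 + rng.normal(0, noise, W1.shape)
        pW2 = W2 + rng.normal(0, noise, W2.shape)
        x = exemplars[rng.integers(len(exemplars))]      # probe with a memory
        outs.append((sigmoid(sigmoid(x @ pW1 + b1) @ pW2 + b2) > 0.5).astype(int))
    return np.array(outs)

def classify(patterns):
    # Rote if it reproduces an exemplar, novel if close to one, nonsense otherwise.
    dist = np.abs(patterns[:, None, :] - exemplars[None, :, :]).sum(-1).min(1)
    return {"rote": float(np.mean(dist == 0)),
            "novel": float(np.mean((dist > 0) & (dist <= 2))),
            "nonsense": float(np.mean(dist > 2))}

for noise in (0.1, 0.5, 2.0):     # low, moderate, and high perturbation levels
    print(noise, classify(sample(noise)))

In a typical run, the lowest noise level yields mostly rote reconstructions of the training exemplars, the moderate level shifts the distribution toward near-miss novel patterns, and the highest level produces mostly nonsense, mirroring the critical-point behavior described above.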

Of course, activation patterns of neurons do not qualify as valuable ideas until they are perceived as such, either by individuals or by societies. To this end, Thaler added what is known in neural network parlance as a perceptron, the very kind of network that has been successfully used over decades to simulate human perception and opinion formation. With this addition, imagination engines could brainstorm with perceptrons on nanosecond time scales to generate new ideas and/or action plans.
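As a rough illustration of this tandem arrangement (again a sketch, not IEI's code), the fragment below trains a single-layer perceptron critic on a toy notion of "value" (a pattern is deemed valuable if at least five of its eight bits are active, a stand-in for real salience), then uses it to filter candidates proposed by a stubbed-out imagination engine that simply emits random patterns. The name propose and the 0.9 acceptance threshold are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Critic: a single-layer perceptron trained to score patterns as "valuable."
# Toy salience criterion: at least 5 of the 8 bits are on.
X = rng.integers(0, 2, size=(200, 8)).astype(float)
targets = (X.sum(axis=1) >= 5).astype(float)

w = np.zeros(8); b = 0.0
for _ in range(200):                      # simple delta-rule training
    for x, t in zip(X, targets):
        p = sigmoid(x @ w + b)
        w += 0.1 * (t - p) * x
        b += 0.1 * (t - p)

# Imagination engine stub: propose candidate patterns (here, simply random ones).
def propose(n):
    return rng.integers(0, 2, size=(n, 8)).astype(float)

# Creativity Machine loop: brainstorm candidates, keep those the critic favors.
candidates = propose(1000)
scores = sigmoid(candidates @ w + b)
ideas = candidates[scores > 0.9]
print(f"critic kept {len(ideas)} of {len(candidates)} candidate patterns")

In a full Creativity Machine, the proposer would itself be a perturbed, trained network like the one in the previous sketch rather than a random generator, with the two nets exchanging candidates and assessments in a continuous loop.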

As you might have already guessed, we have named our company Imagination Engines to celebrate the immense power of such internally perturbed neural networks.

Suggested Additional Reading

Yam, P. (1995). "As They Lay Dying ... Near the end, artificial neural networks become creative," Scientific American, May 1995.

Thaler, S. L. (1996). "Neural Networks That Create and Discover," PC AI, May/June 1996.

Holmes, R. (1996). "The Creativity Machine," New Scientist, 20 January 1996.

Thaler, S. L. (1998). "Predicting ultra-hard binary compounds via cascaded auto- and hetero-associative neural networks," Journal of Alloys and Compounds, 279, 47-59.

Cohen, A. (2009). "Stephen Thaler's Imagination Engines," World Future Society, July 9, 2009.

Thaler, S. L. (2013). The Creativity Machine Paradigm, Encyclopedia of Creativity, Invention, Innovation, and Entrepreneurship, (ed.) E.G. Carayannis, Springer Science+Business Media, LLC.




© 1997-2020, Imagination Engines, Inc. | Creativity Machine®, Imagination Engines®, Imagitron®, and DataBots® are registered trademarks of Imagination Engines, Inc.

1550 Wall Street, Ste. 300, St. Charles, MO 63303 • (636) 724-9000