In 1995, IEI began experimenting with text-to-image and image-to-image transformations using multilayer perceptrons and deep nets. Although these efforts were highly proprietary, the most interesting and well-publicized among them focused on what happened when the input patterns to these nets were pinned at constant values and internal disturbances were introduced into the nets' connections. We found that within a particular range of disturbance levels, a 'Goldilocks zone' if you will, these nets began to output novel images strongly reminiscent of their original training sets. In other words, the nets were generating very plausible one- and two-off variations on what they had previously learned, without recourse to complex human-conceived algorithms!
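The mechanism described above can be sketched in miniature: hold the input to a small perceptron fixed and inject transient Gaussian noise into its connection weights, so that any change in output is driven purely by internal disturbance. This is only an illustrative toy, with a randomly initialized net standing in for a trained one; the layer sizes and noise scale are assumptions, not details of IEI's proprietary systems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained two-layer perceptron: in practice these
# weights would come from training on an image set (sizes hypothetical).
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 4))
x = np.ones(16)  # input pattern pinned at a constant value

def forward(W1, W2, x):
    h = np.tanh(x @ W1)
    return np.tanh(h @ W2)

baseline = forward(W1, W2, x)

def perturbed_output(sigma):
    """Inject transient Gaussian disturbances into the connections."""
    n1 = rng.normal(scale=sigma, size=W1.shape)
    n2 = rng.normal(scale=sigma, size=W2.shape)
    return forward(W1 + n1, W2 + n2, x)

# With the input held fixed, every variation in output is driven
# solely by the internal synaptic noise.
variants = [perturbed_output(0.05) for _ in range(3)]
```

Each call produces a slightly different output from the same frozen input, a crude analogue of the net emitting variations on its stored patterns.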
Thereafter, this is how events unfolded...
After detuning the Creativity Machine (CM) through excess mean synaptic damage, it too begins to output nonsense.
This demonstration predates similar, well-publicized efforts in 2013 in which small patches of a scene were converted to alternative imagery. Though offered as light entertainment following a presentation on 'twisted' cyber technologies, it suggested how easily perception in both humans and AI could be manipulated. Soon after, various government agencies offered contracts to achieve the same effects in both imagery and natural-language generation.
In the figure below, we see what happens when DABUS is run at various levels of synaptic disturbance. In the low-noise regime, the system slightly hybridizes and mutates stored memories of faces, for example producing imaginary members of the Einstein clan (left). As the perturbation level is progressively raised, much greater diversity appears, with hybridization occurring between human and non-human faces.
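These noise regimes can be roughly illustrated by sweeping the disturbance level on a toy net and measuring how far outputs drift from the undisturbed response. Again, this is a stand-in model with assumed sizes and noise levels, not DABUS itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-layer net with a pinned input (hypothetical sizes).
W = rng.normal(size=(32, 32))
x = np.ones(32)
baseline = np.tanh(x @ W)

def out(noise_scale):
    """Output under one transient synaptic disturbance."""
    return np.tanh(x @ (W + rng.normal(scale=noise_scale, size=W.shape)))

def mean_deviation(sigma, trials=200):
    """Average drift from the undisturbed memory at a given noise level."""
    return np.mean([np.linalg.norm(out(sigma) - baseline)
                    for _ in range(trials)])

# Low noise: slight mutations of stored patterns.
# High noise: much greater diversity, far from the originals.
levels = [0.01, 0.1, 1.0]
devs = [mean_deviation(s) for s in levels]
```

The deviation grows with the disturbance level, mirroring the progression from mild hybridization toward radical novelty described above.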
It is interesting to note that DABUS was allowed to carry out image-to-text processing, wherein many of these hybridized faces became associated with their autonomously generated captions. For instance, DABUS named the fictitious Einsteins Bruce, Hillary, and Curly, respectively, from top to bottom!
By 2012, DABUS had seen thousands of photographs and was able to synthesize many new imaginary scenes. Note that these were full 640×480-pixel, 24-bit-depth renderings, not attempts at replacing small patches of existing photos with substitute imagery. In other words, each of these images formed in parallel. Furthermore, the system cumulatively learned what appealed most to it, thereafter generating many variations on its favorite themes.
2014: Basement Cleanup
Generated and named by DABUS.
2014: Industrial Paradise
Generated and named by DABUS.
2014: Locomotive Bubble Bath
Generated and named by DABUS.
2014: Sidewalk Beggar
Generated and named by DABUS.
2015: Fish Dream
Generated and named by DABUS.
2017: At Your Own Risk
Generated and named by DABUS.
Out of pure curiosity, we allowed the random snipping of connections within DABUS to simulate a dying brain, generating some very interesting imagery in the process. The system was also able to name the resulting artwork, at least in the early stages of neural destruction. Copyright notices were added in 2016 as these and more trauma-induced images appeared in Urbasm under the title "Artificial Intelligence – Visions (Art) of a Dying Synthetic Brain."
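Such random connection snipping can be sketched as simple weight masking: zero out a growing fraction of a net's connections and watch its output drift away from the intact response. The sizes and damage fractions here are illustrative assumptions, not IEI's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Intact toy net (hypothetical sizes) and its undamaged response.
W = rng.normal(size=(64, 64))
x = rng.normal(size=64)
baseline = np.tanh(x @ W)

def snip(W, fraction):
    """Sever a random fraction of connections (simulated synaptic damage)."""
    damaged = W.copy()
    mask = rng.random(W.shape) < fraction
    damaged[mask] = 0.0
    return damaged

# As more connections are severed, the output drifts further from
# the intact net's response, eventually degrading into nonsense.
drift = [np.linalg.norm(np.tanh(x @ snip(W, f)) - baseline)
         for f in (0.1, 0.5, 0.9)]
```

Light damage leaves the response mostly intact, consistent with the system still naming its artwork in the early stages of destruction, while heavy damage leaves little of the original behavior.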
2012: Recent Entrance
2012: Image with generated title.
2012: Guardian Greeter
2012: Image with generated title.
2012: No Tranquility at All
2012: Image with generated title.
2012: Retired Hell Admin Bldg
2012: Image with generated title.
Below we see some of the many elements contributing to the trauma-generated "A Recent Entrance to Paradise," most importantly the zone plate image, which acts as a so-called "hot button" initiating a simulated neurotransmitter surge and the selective reinforcement of the entire chain of neural nets whose cleverly merged memories constitute this image. In effect, the zone plate became an impactful, artificially introduced memory, tantamount to recollections of existential events stored in our brains (e.g., falling, hunger, thirst, satiation). In humans, such hot-button memories are congenital, acquired through generations of Darwinian natural selection.
So, no human guided the formation of this image. Instead, a human placed a net containing false memories of pain or pleasure within the DABUS system, in much the same way nature installs certain primordial thoughts in our brains to guide cognition. After that, DABUS parted ways with humans, creating autonomously on its own. DABUS was allowed free run, the equivalent of what humans might call free association. Below, one may see a stream of internal imagery, with each scene gracefully blending into the next. Sometimes, more radical shifts occur.
So, make no mistake: these works of art are not the products of deep learning or adversarial nets. They were generated by Creativity Machines and DABUS long before those technologies arrived, with one image in particular, "A Recent Entrance to Paradise," forming the basis of legal debates around the planet as to whether AI systems may own the copyrights to the art they create.
It is most important to realize that DABUS is not being used as a tool to assist human artists. What emerges from this system are the thoughts of an autonomous and feeling machine intelligence!