Wednesday, August 27, 2008

Evolutionary Computing

As an impractical futurist, the concept of the singularity (the theoretical future point of technological advancement at which software gains the ability to improve itself using artificial intelligence) has been of great interest to me for a long time. In order to create a self-improving cloud computing system (autonomic computing), you first need to look at what "life" is and how it can be applied to computing.

Life doesn't necessarily have to be self-aware in order to be alive. A single-celled bacterium is arguably just as alive as my dog Winston, and Winston just as alive as a human. Whether an application is simple or complex isn't important either; the common thread among all life forms is the ability to reproduce and adapt. The more important aspect is the life cycle: birth and death, mutation and evolution. In order to enable this type of life cycle computing (evolutionary computing), we need to create a software system capable of creating its own source code, applying patches to itself, and then repeating the process over and over. The system should be capable of measuring any quantitative changes, for better or worse, over time in each iterative version. These improvements could form a kind of artificial evolutionary process, where certain branches result in dead ends and other branches evolve into improved versions of the software. It should also be able to examine other source code as a basis of comparison and apply certain aspects when and if needed. (As a developer, it's easier to modify someone else's code than to create it from scratch.)
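
To make that loop concrete, here is a minimal sketch (in Python, purely as an illustration) of the birth/death, mutation, and selection cycle described above. Everything in it is an assumption for the example: the "genome" is just a list of numbers standing in for source code, and fitness scores a toy target rather than benchmarking a real program.

```python
import random

def fitness(genome):
    """Toy quantitative measure of a candidate's quality: how close
    the genome's sum gets to a target. A real system would benchmark
    the patched program instead of scoring a toy target."""
    return -abs(sum(genome) - 42.0)

def mutate(genome, rate=0.3):
    """Apply small random 'patches' to a copy of the candidate."""
    child = list(genome)
    for i in range(len(child)):
        if random.random() < rate:
            child[i] += random.uniform(-1.0, 1.0)
    return child

def evolve(generations=100, population_size=20, genome_length=5):
    # Birth: start from a population of random candidates.
    population = [[random.uniform(0.0, 10.0) for _ in range(genome_length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Score every candidate so change is measurable per iteration.
        ranked = sorted(population, key=fitness, reverse=True)
        # Death: the worst-scoring branches are treated as dead ends.
        survivors = ranked[:population_size // 2]
        # Reproduction with mutation: survivors spawn patched offspring.
        offspring = [mutate(random.choice(survivors))
                     for _ in range(population_size - len(survivors))]
        population = survivors + offspring
    best = max(population, key=fitness)
    return best, fitness(best)

if __name__ == "__main__":
    best, score = evolve()
    print("best candidate:", best, "fitness:", round(score, 4))
```

Poorly scoring branches die off each generation while survivors spawn patched copies, which is the dead-end-versus-improvement dynamic described above, just applied to numbers instead of source code.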

To provide some background, Seed AI theory refers to the concept of recursive self-improvement and is a key aspect of superintelligence (intelligence superior to that of a human). But in my opinion, intelligence is not as important as the ability to be performance aware. I'd rather have a system capable of understanding that a core component isn't running in an optimal way, then attempting a series of patches until it finds a better, more efficient approach. As humans, we tend to find solutions to problems through trial and error, so why not give our software the same freedom? The software should also be able to understand past failures and determine that certain directions didn't work. But it should also be able to recognize that certain aspects of a failed branch could still be useful in other, successful branches.
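
As a hypothetical sketch of what that performance-aware trial and error might look like, the snippet below tweaks one setting at a time, keeps a memory of patch directions that failed, and, after each improvement, lets previously failed directions be retried from the new baseline. The benchmark function and the patch representation are illustrative assumptions, not a real self-patching system.

```python
import random

def benchmark(config):
    """Toy performance measure (lower is better). Stands in for
    timing a real component after a candidate patch is applied."""
    return sum((x - 3.0) ** 2 for x in config)

def trial_and_error(config, attempts=500):
    failed_patches = set()                 # directions known not to help
    best, best_cost = list(config), benchmark(config)
    for _ in range(attempts):
        # Propose a patch: nudge one setting by a small step.
        i = random.randrange(len(best))
        step = random.choice([-0.5, 0.5])
        if (i, step) in failed_patches:
            continue                       # remember past failures; skip them
        candidate = list(best)
        candidate[i] += step
        cost = benchmark(candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
            # A failed direction may still help from a new baseline,
            # so the failure memory is reset after each improvement.
            failed_patches.clear()
        else:
            failed_patches.add((i, step))
    return best, best_cost

if __name__ == "__main__":
    print(trial_and_error([0.0, 0.0, 0.0]))
```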

The biggest issue, other than the obvious "how", is security. This is where the story starts to sound a little bit like science fiction. Hypothetically, these types of systems could become incredibly powerful, and the biggest threat they will face will be humans. Embedded rules of conduct such as Isaac Asimov's "Three Laws of Robotics" could easily be removed because of the evolutionary nature of the system, so controlling the system will start to look more like a partnership. This type of evolutionary, self-improving, self-adapting, and self-replicating technology could improve almost all aspects of technology, but with great power comes great responsibility. Once the cat has been let out of the bag, it will be impossible to ever go back.

So will it ever happen? Arthur C. Clarke formulated the following three "laws" of prediction:

1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
3. Any sufficiently advanced technology is indistinguishable from magic.

Will we achieve the "singularity" some day? Certainly. Will we be able to control it? I doubt it.
