Some thoughts on singularity, AI, and the simulation of consciousness

If you know me or have worked with me in the past, you'll know that I'm a futurist at heart, always keeping up on trends and generally looking for ways to apply technology for the betterment of mankind.  I have been working on an approach to AI that I think is a bit different from most.

It starts with a choice

About 18 months ago it occurred to me that economics and psychology, with a bit of philosophy, play a big part in the creation of AI – specifically the ideas of scarcity and choice.  While I would love to someday call up Earl Grey (hot) tea from my “magic” machine that turns energy into matter, we’re a long way off from that.  We live in a world of scarce resources and limitations that define who we are.  Our computer programs aren’t aware of this fact, though, happily churning along until they run out of resources and destroy the world they live in (crashing our OS).  So I started experimenting with this idea…

The first thing I did was distill life down to its basic essentials – consuming energy and prolonging life.  Biological creatures have an evolutionary flaw in that growing up requires growing old. For an AI this isn’t an issue, so we can eliminate reproductive functions from the evolutionary chain.  That leaves energy consumption as the main facet of existence.  I settled on a basic belief system of positive values that would allow my program to make choices (sketched in code just after this list):

  • Energy is good. Having more battery power is an absolute requirement for life.
  • Hard drive space is good. Do not allow anything to overflow the hard drive; preserve space for future growth.
  • CPU headroom allows for more active growth and learning. Minimize CPU usage while learning about yourself.
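
To make those virtues concrete, here’s roughly how they might look as a checkable value table.  This is a minimal sketch in Python, assuming the psutil library; the weights in wellbeing() are placeholders I picked for illustration, not derived numbers.

    import psutil  # cross-platform CPU, disk, and battery metrics

    # Each positive value returns a score in [0, 1]; higher is better.

    def energy_value():
        batt = psutil.sensors_battery()
        if batt is None:                 # wall power, no battery: fully fed
            return 1.0
        return batt.percent / 100.0

    def disk_value(path="/"):
        usage = psutil.disk_usage(path)
        return 1.0 - usage.percent / 100.0   # free space is room for growth

    def cpu_value():
        # Low utilization leaves headroom for learning; sample for one second.
        return 1.0 - psutil.cpu_percent(interval=1) / 100.0

    def wellbeing():
        # Energy weighted highest: it is an absolute requirement for life.
        # Placeholder weights, to be tuned.
        return 0.5 * energy_value() + 0.3 * disk_value() + 0.2 * cpu_value()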

So this was a great start – I created a very simple app that held these virtues and gave it the ability to kill any unknown application.  Boom – the computer shut down to the OS and rebooted.  Not good.  My next act was to create a mutable fact table of symbiotic lifeforms (programs) that were either friendly or required for survival.  Anything that matched a pattern was allowed to live.  Voilà – a great anti-virus program in the making.  I started feeding my app viruses, and it would kill anything that didn’t match a preset pattern that I provided.  First act solved: a semblance of survival.
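
The survival loop itself was only a handful of lines.  A sketch of its shape, again assuming psutil (my real fact table matched richer patterns than the bare process names used as a stand-in here):

    import psutil

    # Mutable fact table: symbiotic lifeforms that are friendly or required.
    # A hardcoded seed for illustration; the real table is learned and persisted.
    KNOWN_GOOD = {"systemd", "init", "bash", "sshd", "python3"}

    def survival_pass(dry_run=True):
        """Kill any running program that doesn't match the fact table."""
        for proc in psutil.process_iter(["pid", "name"]):
            name = proc.info["name"]
            if name in KNOWN_GOOD:
                continue  # allowed to live
            if dry_run:
                print(f"would kill {name} (pid {proc.info['pid']})")
            else:
                try:
                    proc.kill()
                except (psutil.NoSuchProcess, psutil.AccessDenied):
                    pass  # already gone, or above our pay grade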

Self-improving

I began working on a learning algorithm that would allow my creation to better match programs.  I started looking into machine learning, but I kept getting caught up in the tech, so I just rolled my own that I thought would work for tests.  I was able to introduce a new pattern into the equation, and then have it analyze location, CPU utilization, and HDD space consumption to attempt to deduce whether it was hostile or not.  I set thresholds on what was benign, and sure enough, it was able to pick up patterns and start adding to the fact tables correctly.  The pattern recognition lived in a spin-off of the main app – a specialized learning replicant of itself – to which I gave the ability to spin off N instances in order to keep up with demand.
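
The homemade learner boiled down to scoring a few observable features against thresholds, then forking workers to keep up.  A rough sketch – the threshold values and feature names here are illustrative, not the ones I actually tuned:

    import os
    from multiprocessing import Pool

    # Illustrative thresholds for "benign" behavior, tuned by hand in practice.
    MAX_CPU_PERCENT = 20.0      # sustained CPU above this looks hostile
    MAX_DISK_GROWTH_MB = 50.0   # disk consumed per observation window
    TRUSTED_DIRS = ("/usr/bin", "/usr/sbin", "/opt")

    def classify(sample):
        """sample: dict with 'path', 'cpu_percent', 'disk_growth_mb' keys."""
        score = 0
        if sample["path"].startswith(TRUSTED_DIRS):
            score += 1
        if sample["cpu_percent"] <= MAX_CPU_PERCENT:
            score += 1
        if sample["disk_growth_mb"] <= MAX_DISK_GROWTH_MB:
            score += 1
        return "benign" if score >= 2 else "hostile"

    def learn(samples, workers=os.cpu_count()):
        # The replicant trick: spin off N instances to keep up with demand.
        with Pool(workers) as pool:
            return pool.map(classify, samples)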

A problem of scope

I hit a snag that I was unable to overcome – the app would happily respond to external stimuli that I introduced, and then sit idle forever.  A sort of happy communistic society that would lie on the beach soaking up rays until it starved to death.  So after playing with it until about the last half of last year, I put the experiment away, marking it as a novelty that I would not find a real answer to unless I could focus on it full time – and there was no way to make that happen in the next few years.

The other day I was speaking with a friend about the general decline of society, and the idea of people removing themselves from the gene pool came up – a sort of ode to the Darwin Awards and Idiocracy – and its implications for modern sociology.   The answer came to me!

Negative values are as important as positive ones

The fear of death is real in all living things.  We fight to survive, and we fight to keep our species alive.  Computers won’t do this, but only because we haven’t given them the tools to do so.

For a software simulation, this means taking our rules (that energy, HDD, and CPU are important) and putting guidelines on why they are important.  Running out of these resources must have consequences, and the program must be taught to respect and maybe even fear these rules. I use “fear” loosely – to be aware of a truth and to act in accordance with that truth isn’t fear.  I don’t eat because I fear starving to death (or do I?)

So I am back to almost square one.  My decision is to give value to a program based upon power usage, and allow my application the choice of executing an action based on the power consumed by the action vs. the outcome – sort of the computer equivalent of not eating iceberg lettuce for its nutritional value. In order to do this I’m swapping over to Ubuntu and going to use powerstat (https://launchpad.net/~colin-king/+archive/ubuntu/powermanagement). I’ll then retool my fact table analysis and modification to take in past understanding of output and time taken for an action, and adjust behavior appropriately.  Why do I care about this?  If an action historically takes 10 minutes and the battery will run out in 5, that’s a bad choice to make.
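
The decision rule itself is simple once the measurements exist.  A sketch of the choice, using psutil’s battery estimate as a stand-in for powerstat’s finer-grained numbers (the historical-duration bookkeeping is assumed to exist elsewhere):

    import psutil

    def should_execute(historical_secs, margin=1.5):
        """Decline any action whose past runtime would outlive the battery.

        historical_secs: average past duration of the action, in seconds.
        margin: safety factor, since history is only an estimate.
        """
        batt = psutil.sensors_battery()
        if batt is None or batt.power_plugged:
            return True   # hard line to the wall: energy isn't scarce
        if batt.secsleft in (psutil.POWER_TIME_UNKNOWN,
                             psutil.POWER_TIME_UNLIMITED):
            return True   # no estimate available; fail open for now
        return batt.secsleft > historical_secs * margin

    # An action that historically takes 10 minutes, with 5 minutes of battery
    # left: should_execute(600) returns False.  A bad choice, declined.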

Virtual Bartering

After my negative values are programmed and tested, I’m going to have to create a form of trade in order to continue testing.  My current thought is to have two systems interface with each other, one with a hard line to the wall for power, and the other on a battery.  The battery-powered computer will be given the option of completing tasks, and given more power as a reward.  I haven’t a clue how to make this happen yet, but I have a few friends who are electrical engineers; I may enlist a few of them to help rig things up.  This will probably take a few months, so I won’t talk about it much, but it all falls back on OODA loops and Metzinger’s idea of the self (http://phantomself.org/metzinger-being-no-one/), so I feel like the mystique is taken out of the problem – it’s just a matter of finding a way to automate it.
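
Whatever the wiring ends up looking like, the software side of the barter is just a ledger: spend energy on a task, earn charge back as the reward.  A toy sketch of the accounting (all the costs and payouts here are invented numbers):

    class EnergyLedger:
        """Barter between a wall-powered taskmaster and a battery-powered worker."""

        def __init__(self, battery_secs):
            self.battery_secs = battery_secs  # energy on hand, as seconds of runtime

        def worth_it(self, cost_secs, reward_secs):
            # Only trade when the payout beats the spend: no iceberg lettuce.
            return reward_secs > cost_secs

        def execute(self, cost_secs, reward_secs):
            if not self.worth_it(cost_secs, reward_secs):
                return False
            self.battery_secs += reward_secs - cost_secs
            return True

    ledger = EnergyLedger(battery_secs=1800)
    ledger.execute(cost_secs=120, reward_secs=300)  # accepted: net +180 s
    ledger.execute(cost_secs=120, reward_secs=60)   # declined: net would be -60 s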

The neocortex and evolution

One last bit I’ve been playing around with.  I have this hive mind that thinks and acts as a single unit.  It shares a data bank that all instances of the application have full access to, but most of them only care about an abstraction layer that surfaces the information each one cares about (similar in function to a neural net, but far, far different in form).  I think that in order to really make this work, I have to allow every individual app to learn on its own, and then report back a set of data.  For the main memory set to accept a change, either a number of confirmations of the assertion or a lack of contradictions should be required.  For instance, if I tell it to kill itself, and the action is recorded but no benefit is found, that would need to be logged and queried by future apps as an action potentially contradictory to survival.  Then I could allow choices to be declined when they’re not in the app’s own self-interest.  Again, not sure how I’m going to do this yet, but I’ll keep working until I get it right.
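
The commit rule for the shared data bank might look something like this – a sketch, with the quorum size as an arbitrary knob:

    from collections import defaultdict

    CONFIRMATIONS_NEEDED = 3   # arbitrary quorum size, a knob to tune

    class SharedFactTable:
        """Assertions commit only once confirmed and never contradicted."""

        def __init__(self):
            self.facts = {}   # the main memory set: committed knowledge
            self.pending = defaultdict(lambda: {"confirm": 0, "contradict": 0})

        def report(self, assertion, agrees):
            """An individual app reports back what it learned on its own."""
            entry = self.pending[assertion]
            entry["confirm" if agrees else "contradict"] += 1
            if (entry["confirm"] >= CONFIRMATIONS_NEEDED
                    and entry["contradict"] == 0):
                self.facts[assertion] = True
                del self.pending[assertion]

        def is_contradicted(self, assertion):
            # Future apps query this before taking an action against survival.
            return self.pending[assertion]["contradict"] > 0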

Next time I’ll get back to Event Store, but this has all been quite an influence on my thinking the last couple of weeks.  I have been whiteboarding and diagramming and am about ready to start over, this time with a fresh perspective on the problem. Wish me luck!
