3.01.2009

An Emotional Observation

Can your emotions get involved in a video game? Yes, but not much. Whatever sympathetic echo of triumph you experience on destroying the Evil Empire in a video game, it's probably not remotely close to the feeling of triumph you'd get from saving the world in real life. I've played video games powerful enough to bring tears to my eyes, but they still aren't as powerful as the feeling of significantly helping just one single real human being.

Because when the video game is finished, and you put it away, the events within the game have no long-term consequences.

Maybe if you had a major epiphany while playing... But even then, only your thoughts would matter; the mere fact that you saved the world, inside the game, wouldn't count toward anything in the continuing story of your life.

Thus fails the Utopia of playing lots of really cool video games forever. Even if the games are difficult, novel, and sensual, this is still the idiom of life chopped up into a series of disconnected episodes with no lasting consequences. A life in which equality of consequences is forcefully ensured, or in which little is at stake because all desires are instantly fulfilled without individual work - these likewise will appear as flawed Utopias of dispassion and angst. "Rich people with nothing to do" syndrome. A life of disconnected episodes and unimportant consequences is a life of weak passions, of emotional uninvolvement.

Our emotions, for all the obvious evolutionary reasons, tend to associate with events that had major reproductive consequences in the ancestral environment, and to invoke the strongest passions for the events with the biggest consequences:

Falling in love... birthing a child... finding food when you're starving... getting wounded... being chased by a tiger... your child being chased by a tiger... finally killing a hated enemy...

This is the aspect of self-modification in which one must above all take care - modifying your goals. Whatever you want becomes more likely to happen; to ask what we ought to make ourselves want is to ask what the future should be.

Add emotions at random - bind positive reinforcers or negative reinforcers to random situations and ways the world could be - and you'll just end up doing what is prime instead of what is good. So adding a bunch of random emotions does not seem like the way to go.

Nor does it seem wise to ask what happens often, and bind happy emotions to that, so as to increase happiness - or to ask what seems easy, and bind happy emotions to that - making isolated video games artificially more emotionally involving, for example.

On a higher, more abstract level, this carries over the idiom of reinforcement over instrumental correlates of terminal values. In principle, this is something that a purer optimization process wouldn't do. You need neither happiness nor sadness to maximize expected utility. You only need to know which actions result in which consequences, and update that pure probability distribution as you learn through observation; something akin to "reinforcement" falls out of this, but without the risk of losing purposes, without any pleasure or pain.
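
To make the contrast concrete, here is a minimal sketch of such a pure maximizer - the class name and structure are my own invention for illustration, not anything specified above. The agent holds a probability distribution over which consequence follows which action, updates it from observation, and always picks the action with the highest expected utility; nothing resembling pleasure or pain appears anywhere.

```python
from collections import defaultdict

class ExpectedUtilityAgent:
    """Hypothetical sketch: beliefs plus utilities, no reinforcement signal."""

    def __init__(self, actions, outcomes, utility):
        self.actions = actions
        self.outcomes = outcomes
        self.utility = utility  # maps an outcome to a real-valued utility
        # Dirichlet-style pseudocounts: a uniform prior over which
        # outcome each action produces.
        self.counts = {a: defaultdict(lambda: 1.0) for a in actions}

    def outcome_probs(self, action):
        # Current belief: P(outcome | action), from observed counts.
        total = sum(self.counts[action][o] for o in self.outcomes)
        return {o: self.counts[action][o] / total for o in self.outcomes}

    def choose(self):
        # Pure argmax over expected utility - no reward, no emotion.
        def expected_utility(action):
            probs = self.outcome_probs(action)
            return sum(probs[o] * self.utility(o) for o in self.outcomes)
        return max(self.actions, key=expected_utility)

    def observe(self, action, outcome):
        # The only "learning": update the probability distribution.
        self.counts[action][outcome] += 1.0
```

Something like reinforcement falls out of `observe` plus `choose` - actions that led to high-utility outcomes get chosen more often as beliefs sharpen - but there is no reinforcer anywhere in the loop, and so no way for the agent's purposes to drift.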

An agent like this is simpler than a human and more powerful - if you think that your emotions give you a supernatural advantage in optimization, you've entirely failed to understand the math of this domain. For a pure optimizer, the "advantage" of starting out with one more emotion bound to instrumental events is like being told one more abstract belief about which policies maximize expected utility, except that the belief is very hard to update based on further experience.

But it does not seem to me that a mind which has the most value is the same kind of mind that most efficiently optimizes values outside it. The interior of a true expected utility maximizer might be pretty boring, and I even suspect that you can build them to not be sentient.

For as far as my human eyes can see, I don't know what kind of mind I should value, if that mind lacks pleasure and happiness and emotion in the everyday events of its life - bearing in mind that we are constructing this Future using our own preferences, not having it handed to us by some inscrutable external author.

If there's some better way of being (not just doing) that stands somewhere outside this, I have not yet understood it well enough to prefer it. But if so, then all this discussion of emotion would be as moot as it would be for an expected utility maximizer - one which was not valued at all for itself, but only valued for that which it maximized.

But hey, what do I know, right?

1 comment:

  1. Quite a myriad of emotions going on in this blog, eh!! Sentience drives much of societal discourse. Within the continuum of emotional drive you describe as needed in life, I think thought also needs to be given to purpose. Each of us needs to feel connected to something, someone, some greater power that navigates our journey through life. Purpose allows focus and attention to detail, navigates a sentient path of individual focus, and brings happiness, pleasure and enjoyment. The degree to which we remain attuned determines optimization and yield. It defines the beauty in our lives and opens our eyes to the simplest of tasks. Purpose aligns with values and drives the impact we have and what we contribute to living life. All of this together impacts how we touch, interact with and experience others, and the footprint we leave behind. It implies work but is gratifying. This is the mind, the individual, the partner with whom I choose to walk and whom I value!!

