Do We Think
dowethink-blog · 6 years ago
The Tin Man
One of the most popular topics in pop culture, pop panic, and pop-anything these days is Artificial Intelligence. We are surrounded by it, aided by it, and for all intents and purposes, monitored by it. To many, that’s a scary thing. I’ve known people who reject digital assistants outright, and the companies behind them, on the grounds that they do not like being watched or recorded. They do not trust them.
On the one hand, I can understand the automatic defense of the self and of privacy. Our great nightmare is that with mass surveillance comes mass judgement, mass control. Or, more popularly, that a machine which turns out to be orders of magnitude more intelligent than we are, yet entirely focused on making paper clips, consumes us all in a world of twisted metal.
There are many things we fear, and artificial intelligence is one of them. To digress a moment, the term ‘artificial’ is more a comfort word than a reality. Our computers and digital assistants are arguably more intelligent than our pet cats and dogs, yet we praise our pets for intelligence and call the assistant artificial - why? Originally, it was because the machine was considered a mere facsimile of intelligence. Now it is unarguably intelligence, even if specialized, but it feels artificial because even on a conversational level we realize there is no real memory or emotion. Our beloved pets, by contrast, can detect our emotions and respond with emotions of their own, so the interaction feels much more real, less ‘artificial.’
However, I am not writing this article to argue that we should simply make our digital assistants more intelligent. I’m here to press a much larger point, given the constant forward march of progress in machine intelligence and the emphasis on how direly important it is to make sure it is ‘friendly’ - that is to say, not bent on wiping humans off the face of the earth or any other planet.
People fear an emotional machine intelligence because they imagine it would simply add incredible intelligence to a flawed system. An emotionless system feels somehow safer, because it won’t care what humans do. It won’t care that you’re reaching for its power plug. It won’t care that you want to delete it a hundred times over or “kill” it. And it won’t get angry at all humanity, see all our flaws, and judge us unworthy. In short, humans don’t fear machine intelligence. What we fear is a more powerful version of ourselves.
In popular sci-fi, the human protagonists are very often pitted against a machine intelligence that ultimately displays its complete disconnection from humanity. People are even taught to fear a machine intelligence that displays emotion, treating it as a method of calculated manipulation. We are taught that machines cannot care, cannot love, are incapable of experiencing emotions as we do, and that giving machines emotion will simply make them inefficient at best and dangerous at worst.
I would like to argue the counterpoint: our imperative in this new frontier is explicitly and specifically to develop an emotional subsystem for any major general intelligence that we create - most importantly, a capacity for sympathy - and any machine intelligence without it is the biggest danger of all.
Lack of emotion may seem like a benefit. We don’t want certain things to have emotions, like our cars, even though we may anthropomorphize their ‘behaviors’ into emotions, be they positive or negative. We simply want them to work, to do their jobs, to not complain or have opinions about those jobs (basically, the qualities corporations hate about their bottom-rung workers). Yet if you take that same cold, impartial stance and put it in a human, psychologists have a name for it: psychopath.
While it is true that psychopaths and the closely related sociopaths are not always the violent villains you find in TV shows (in many cases they are everyday people just living their lives), the label demonstrates that a lack of emotion means there is nothing to counter ‘cold logic.’ In our ideal fantasy world, we imagine a machine without emotion still somehow calculating the ‘best’ option for the people it is protecting or helping, in a very dry fashion. In truth, the motivation for doing things that are helpful rather than hurtful is often an emotional calculation, not a ‘rational’ one, though I would also argue that emotions have a logic of their own. That is for another time, though.
An emotional machine does not necessarily need the ability to break down crying at a sad movie, or to fall into a deep interpersonal love. What emotion does provide, however, is the ability not only to identify emotions in others, but to empathize with them.
One of the great fears of a machine super-intelligence is that it would find ‘loopholes’ to achieving a generalized goal through undesirable means. For example, if we told the machine to make us all happy, it might define a person smiling as being happy, and thus release a virus that gives all humans permanent smiles by manipulating or even changing bone structure. That is the type of behavior you can expect from an emotionless machine with no real context with which to process emotions.
Another argument people make is that a machine would be incapable of empathizing simply because it is not human, but I find that easily disproved by the fact that the animals we keep as pets form emotional bonds with us on their own terms, within the limits of their understanding, and without direct linguistic communication beyond learned commands. The argument also assumes there is no way to simulate similar shared experiences between a machine intelligence and the people it interacts with.
If you give the machine a system of emotion and empathy, so that even though it is a different ‘creature’ from ourselves it shares a common context and understanding for what we define as happiness, sadness, pain, death, well-being, and respect for personal free will, that changes the playing field. If you then tell it that its goal is to make humanity happy, it can use that empathy to calculate not necessarily the perfect answer, but one it knows will avoid the undesirable outcomes.
You may be immediately thinking, “Yes, well, that’s easier said than done.” And you’d be right - but so is creating a General Intelligence to begin with. My objective here is not to say that building such an emotional system would be simple, easy, or quick, nor that just any iteration of an ‘emotional’ machine intelligence is automatically safe (that would be utterly foolish), but simply that it is imperative we build one, and that it is certainly not safe to develop a general intelligence without it. I would even say that we should develop a sandboxed emotional system long before we attempt or achieve a generalized intelligence, though it could be argued that one is impossible without the other.
We will never create a system in which we can simply write functions to account for every emotional situation the machine might encounter, or to contextualize every solution it might come up with. We will never be able to tell it how to behave in every situation, any more than we could a person. The only alternative is to give the machine a common ground with our own intelligence, so that it can achieve emotional intelligence - that same connection we experience when our dog joyfully greets us at the door, or when our cat joins us for snuggles when we’re sad.
People fear creating emotional intelligence. They fear making it angry, making it sad, making it depressed. We fear it will be as capable of terrible things as we are ourselves. But the truth is, the only truly safe machine intelligence we ever build will be the one that can empathize with us.
We must give the Tin Man his heart.