Against AGI
I don't like the term "AGI" (short for "Artificial General Intelligence").
Essentially, I think it functions to obscure the meaning of "intelligence", and that arguments about AGIs, alignment, and AI risk involve using several subtly different definitions of the term "intelligence" depending on which part of the argument we're talking about.
I'm going to use this explanation by @fipindustries as my example, and I am going to argue with it vigorously, because I think it is an extremely typical example of the way AI risk is discussed:
In that essay (originally a script for a YouTube video) @fipindustries (who in turn was quoting a Discord(?) user named Julia) defines intelligence as "The ability to take directed actions in response to stimulus, or to solve problems, in pursuit of an end goal".
Now, already that is two definitions. The ability to solve problems in pursuit of an end goal almost certainly requires the ability to take directed actions in response to stimulus, but something can also take directed actions in response to stimulus without an end goal and without solving problems.
So, let's take that quote to be saying that intelligence can be defined as "The ability to solve problems in pursuit of an end goal"
Later, @fipindustries says, "The way I'm going to be using intelligence in this video is basically 'how capable you are to do many different things successfully'".
In other words, as I understand it, the more separate domains in which you are capable of solving problems successfully in pursuit of an end goal, the more intelligent you are.
Therefore, Donald Trump and Elon Musk are two of the most intelligent entities currently known to exist. After all, throwing money and subordinates at a problem allows you to solve almost any problem; so, in the current context, the richer you are, the more intelligent you are, because intelligence is simply a measure of your ability to successfully pursue goals in numerous domains.
This should have a radical impact on our pedagogical techniques.
This is where the slipperiness starts to creep in. @fipindustries also often talks as though intelligence has some *other* meaning:
"we have established how having more intelligence increases your agency."
Let us substitute the definition of "intelligence" given above:
"we have established how the ability to solve problems in pursuit of an end goal increases your agency"
Or perhaps,
"We have established how being capable of doing many different things successfully increases your agency"
Does that need to be established? "Doing things successfully" might literally be the definition of "agency", and even if it isn't, it doesn't seem like many people would protest, "agency has nothing to do with successfully solving problems, that's ridiculous!"
Much later:
"And you may say well now, intelligence is fine and all but there are limits to what you can accomplish with raw intelligence, even if you are supposedly smarter than a human surely you wouldn't be capable of just taking over the world unimpeded, intelligence is not this end-all-be-all superpower."
Again, let us substitute the given definition of intelligence;
"And you may say well now, being capable of doing many things successfully is fine and all but there are limits to what you can accomplish with the ability to do things successfully, even if you are supposedly much more capable of doing things successfully than a human surely you wouldn't be capable of just taking over the world unimpeded, the ability to do many things successfully is not this end-all-be-all superpower."
This is... a very strange argument, presented as though it were an obvious objection. If we use the explicitly given definition of intelligence the whole paragraph boils down to,
"Come on, you need more than just the ability to succeed at tasks if you want to succeed at tasks!"
Yet @fipindustries takes it as not just a serious argument, but an obvious one that sensible people would tend to gravitate towards.
What this reveals, I think, is that "intelligence" here has an *implicit* definition, never stated directly anywhere in that post, on which a number of the post's arguments rely.
Here's an analogy: it's as though I defined "having strong muscles" as "the ability to lift heavy weights off the ground"; by that definition, a 98-pound weakling operating a crane has stronger muscles than any weightlifter.
Strong muscles are not *defined* as the ability to lift heavy objects off the ground; they are a quality which allows you to lift heavy objects off the ground with your body more effectively.
Intelligence is used the same way at several points in that talk; it is discussed not as "the ability to successfully solve tasks" but as a quality which increases your ability to solve tasks.
This I think is the only way to make sense of the paragraph, that intelligence is one of many qualities, all of which can be used to accomplish tasks.
Speaking colloquially, you know what I mean if I say, "Having more money doesn't make you more intelligent", but this is a contradiction in terms if we define intelligence as the ability to successfully accomplish tasks.
Rather, colloquially speaking we understand "intelligence" as a specific *quality* which can increase your ability to accomplish tasks, one of *many* such qualities.
Say we want to solve a math problem; we could reason about it ourselves, or pay a better mathematician to solve it, or perhaps we are very charismatic and we convince a mathematician to solve it.
If intelligence is defined as the ability to successfully solve the problem, then all of those strategies are examples of intelligence, but colloquially, we would really only refer to the first as demonstrating "intelligence".
So what is this mysterious quality that we call "intelligence"?
Well...
This is my thesis: I don't think people who talk about AI risk really define it rigorously at all.
For one thing, to go way back to the title of this monograph, I am not totally convinced that a "General Intelligence" exists at all in the known world.
Look at, say, Michael Jordan. Everybody agrees that he is an unmatched basketball player. His ability to successfully solve the problems of basketball, even in the face of extreme resistance from other intelligent beings, is very well known.
Could he apply that exact same genius to, say, advancing set theory?
I would argue that the answer is no, because he couldn't even transfer that genius to baseball, which seems on the surface like a very closely related field!
It's not at all clear to me that living beings have some generalized capacity to solve tasks; instead, they seem to excel at some and struggle heavily with others.
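That point about narrow competence can be made concrete with a toy experiment (a sketch of my own, not anything from the original post): the classic perceptron learning rule masters the Boolean AND task perfectly, but can never master XOR no matter how long it trains, because no single straight line separates XOR's classes. The same "learner", applying the same "effort", excels at one task and fails at a closely related one.

```python
# Toy illustration of narrow competence: one simple learner (a perceptron
# trained with the classic update rule) solves AND but can never solve XOR.

def train_perceptron(examples, epochs=100):
    """Train weights [w1, w2, bias] with the perceptron update rule."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            err = target - pred          # 0 if correct, +/-1 if wrong
            w[0] += err * x1
            w[1] += err * x2
            w[2] += err
    return w

def accuracy(w, examples):
    hits = 0
    for (x1, x2), target in examples:
        pred = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
        hits += (pred == target)
    return hits / len(examples)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
and_task = [(x, int(x[0] and x[1])) for x in inputs]   # linearly separable
xor_task = [(x, int(x[0] != x[1])) for x in inputs]    # not separable

print("AND accuracy:", accuracy(train_perceptron(and_task), and_task))  # 1.0
print("XOR accuracy:", accuracy(train_perceptron(xor_task), xor_task))  # below 1.0
```

The analogy is loose, of course, but it shows that "capacity to solve tasks" is not a single dial: the very same mechanism can be superhuman on one problem and permanently hopeless on a neighboring one.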
What conclusions am I drawing?
Don't get me wrong, this is *not* an argument that AI risk cannot exist, or an argument that nobody should think about it.
If anything, it's a plea to start thinking more carefully about this stuff precisely because it is important.
So, my first conclusion is that, lacking a model for a "General Intelligence" any theorizing about an "Artificial General Intelligence" is necessarily incredibly speculative.
Second, the current state of pop theory on AI risk is essentially tautological. A dangerous AGI is defined as, essentially, "an AI which is capable of doing harmful things regardless of human interference," and the AI-safety rhetoric is "in order to be safe, we should avoid giving a computer too much of whatever quality would render it unsafe."
This is essentially useless: the equivalent of saying, "We need to be careful not to build something that would create a black hole and crush all matter on Earth into a microscopic point."
I certainly agree with the sentiment! But in order for that to be useful you would have to have some idea of what kind of thing might create a black hole.
This is how I feel about AI risk. In order to talk about what it might take to have a safe AI, we need a far more concrete definition than "Some sort of machine with whatever quality renders a machine uncontrollable".
Google's AI ambitions show promise – 'if it doesn't kill us'
Google's path to developing machine-learning tools illustrates the stark challenge that tech companies face in trying to make machines act like humans
Machines may yet take over the world, but first they must learn to recognize your dog.
To hear Google executives tell it at their annual developer conference this week, the technology industry is on the cusp of an artificial intelligence, or AI, revolution. Computers, without guidance, will be able to spot disease, engage humans in conversation and creatively outsmart world champions in competition. Such breakthroughs in machine learning have been the stuff of science fiction since Stanley Kubrick's 1968 film 2001: A Space Odyssey.
"I'm incredibly excited about the progress we're making," CEO Sundar Pichai told a crowd of 7,000 developers at Google I/O from an outdoor concert stage. "Humans can achieve a lot more with the support of AI assisting them."
For better and worse, the company's near-term plans for the technology are more Office Space than Terminator. Think smartphones that can recognize pets in photos, appropriately respond to text messages, and find a window in your schedule where you should probably go to the gym. Googlers repeatedly boasted about how its computers could now automatically tag all of someone's pictures with a pet.
Mario Klingemann, a self-described code artist, said he is using Google's machine-learning tools to have his computer make art for him by sorting through pictures on his computer and combining them to form new images.
"All I have to do is sit back and let whatever it has created pass by and decide if I like it or not," Klingemann told the audience on Thursday night. In one of his pieces, called Run, Hipster. Run, Google's software had attached some fashionable leather boots to a hip bone.
It may seem like the latest example where Silicon Valley talks about changing society yet gives the world productivity apps. But it also illustrates the stark challenge that technology companies face in trying to make machines act like humans.
"It'll be really, really small things that are just a bit more intuitive," said Patrick Fuentes, 34, a mobile developer for Nerdery in Minneapolis. He considered autocorrect on touchscreen keyboards a modern victory for machine learning. Referring to Skynet, the malicious computer network that turns against the human race in Terminator, Fuentes said: "We're not there yet."
Mario Queiroz introduces Google Home during the Google I/O 2016 developers conference. Photograph: Stephen Lam/Reuters
Google is considered the sector's leader in artificial intelligence after it began pouring resources into the area about four years ago. During a three-day conference that took on the vibe of a music festival with outdoor merchandise and beer vendors, Pichai made clear he sees machine learning as his company's future.
He unveiled the new Google Assistant, a disembodied voice that will help users decide what movie to see, keep up with email, and control lights and music at home. After showing how Google's machines can now recognize many dogs, he explained how he wants to use the same image recognition technology to spot damage to the eyes caused by diabetes. He boasted that Google's AI software, AlphaGo, showed creativity when it beat a world champion at Go, the ancient Chinese board game considered more difficult than chess.
This might seem like an odd push for a firm that makes its money from cataloging the web and showing people ads. But the focus is part of a broader transition in the technology sector from helping consumers explore unlimited options online to telling them the best choice.
For instance, several developers gave examples of smarter ways to predict what people are looking for online given their past interests.
"If this guy likes sports and, I don't know, drinks, you should give him these suggestions," said Mikhail Ivashchenko, the chief technology officer of BeSmart in Kyrgyzstan. "It will know exactly what you're looking for."
Unprompted, Ivashchenko said it's not quite Skynet. His nearby friend, David Renton, a recent computer science graduate from Galway, Ireland, then mused how it would be awesome if Google could eventually develop a Skynet equivalent. "Think of the applications if it doesn't kill us," Renton said.
John Giannandrea, a Google vice-president of engineering who focuses on machine intelligence, said he won't declare victory until Google's software can read a text and naturally paraphrase it. Another challenge is that even the smartest machines these days have trouble transferring their knowledge from one activity to another.
For instance, AlphaGo, Google's software from the Go competition, wouldn't be able to apply its accumulated skills to chess or tic-tac-toe.
Still, Giannandrea said it's hard not to get excited by recent gains in teaching computers how to recognize patterns in images.
"The field is getting a little bit overhyped because of the progress we're seeing," he said. "Things that are hard for people to do we can teach computers to do. Things that are easy for people are hard for computers."
Of course, delegating even small decisions to machines has caused a flurry of discussions about the ethics of artificial intelligence. Several technology leaders, including Stephen Hawking and Elon Musk, have called for more research on the social impact of artificial intelligence.
For instance, Klingemann, the code artist, said he is already contemplating whether he needs to change his title.
"I have become more of a curator than a creator," he said.
Read more: www.theguardian.com