#and the ones that r closer to being done aren’t close enough to justify focusing on them
God I’m at the point now where I’m desperate to just put out one of the fics I have in the works literally Any but none of them r close enough to being done that I can motivate myself to focus on them 😭😭😭
we-future-first · 5 years
The AGI Fallacy | A potential phenomenon where we assume a new technology will only come about with AGI, or is unlikely without it. This misconception could leave us unprepared for certain future technologies
We're probably all familiar with the AI Effect, yes? The gist is that we assume a technology, accomplishment, or innovative idea [X] requires "true" artificial intelligence [Y], but once [X] is actually accomplished, whatever accomplished it is no longer counted as [Y]. That might sound esoteric on the surface, but it's simple: once we do something new with AI, it's no longer called "AI". It's just a classifier, a tree search, a statistical gradient, a Boolean loop, an expert system, or something of that sort.
As a result, I've started translating "NAI" (narrow AI) as "Not AI" because that's what just about any and every narrow AI system is going to be.
It's possible a similar issue is building: a fallacy closely related to, but not quite the same as, the AI Effect.
To explain my hypothesis, take [X] again. This time it's a Super Task that requires skills far beyond any ANI system today; in order to reliably accomplish [X], we need [Y]: artificial general intelligence. But here's the rub: most experts place the ETA of AGI at around 2045 at the earliest, working data scientists lean closer to the 2060s, and more conservative estimates push its creation into the 22nd century. Call the number of years away [Z], and for simplicity's sake, presume that [Z] = 50 years.
To simplify: [X] requires [Y], but [Y] is [Z] years away. Therefore [X] must also be about [Z] years away, or at least close to it, and accomplishing [X] would herald the arrival of [Y].
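Written out schematically (this is just shorthand for the argument above, using the same placeholders; the ETA notation is mine):

```latex
% The structure of the fallacy, using the post's placeholders.
\begin{align*}
  \text{P1:}\quad & \text{accomplishing } X \text{ requires } Y \text{ (AGI)} \\
  \text{P2:}\quad & \mathrm{ETA}(Y) \approx Z \text{ years} \\
  \therefore\quad & \mathrm{ETA}(X) \gtrsim Z \text{ years}
\end{align*}
% The load-bearing premise is P1: if some narrower system turns out to be
% enough for X, the conclusion collapses.
```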
But that reasoning has failed for almost everything done with AI thus far. As it turns out, sufficiently advanced narrow AI systems were capable of doing things that past researchers were doggedly sure could only be done with general AI. Chess, for example: it was once assumed that, since only the sharpest human minds could master chess, an AI would need to be virtually alive to do the same.
Garry Kasparov was beaten by Deep Blue in 1997, and I have a distinct inkling that the Singularity did not begin back then, though I suppose I need more evidence to confirm this.
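To make the "it's just a tree search" point concrete, here is a toy sketch of the technique at the heart of classical chess engines: depth-limited minimax over a game tree. This is not Deep Blue's actual code (that added alpha-beta pruning, opening books, and hand-tuned evaluation on custom hardware); the demo game, a tiny take-1-to-3-stones version of Nim, is my own stand-in so the snippet stays self-contained and runnable.

```python
# Toy illustration: the core of a classical chess engine is "just a tree
# search". Plain depth-limited minimax; everything game-specific is passed in
# as plug-in functions, which is exactly what makes the system narrow.

def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    """Return the best score the side to move can force from `state`."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state, maximizing)  # static evaluation at the horizon
    child_scores = (
        minimax(apply_move(state, m), depth - 1, not maximizing,
                legal_moves, apply_move, evaluate)
        for m in moves
    )
    return max(child_scores) if maximizing else min(child_scores)


# Stand-in game so the example runs: Nim where each turn takes 1-3 stones and
# taking the last stone wins. (A real engine would plug in chess move
# generation and a material/positional evaluation instead.)
legal = lambda stones: [n for n in (1, 2, 3) if n <= stones]
take = lambda stones, n: stones - n
# Terminal: no stones left means the side to move has already lost.
score = lambda stones, max_to_move: 0 if stones else (-1 if max_to_move else 1)

if __name__ == "__main__":
    # Prints 1: the first player can force a win from 7 stones.
    print(minimax(7, depth=10, maximizing=True,
                  legal_moves=legal, apply_move=take, evaluate=score))
```

The point isn't that chess is easy; it's that the machinery that beat Kasparov is recognizably this skeleton plus enormous engineering, not a mind.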
Things like Watson defeating humans at Jeopardy! or an AI crushing humans at Go sat at a similar level of assumed complexity: if an AI could do that, the reasoning went, we must be close to the Singularity. Both happened earlier this decade, and no human-level AGI has assumed total control over our nuclear launch codes since. If one has, it's doing a damn fine job of not telling us. But there is, of course, the tiny chance that AGI doesn't exist and that these tasks were perfectly accomplishable by non-AGI systems specialized for those particular tasks, which carries a corollary: many aspects of human cognition that we assume can only be mimicked by a full-fledged mind can in fact be reduced to a much simpler and narrower form.
Of course, there genuinely are some tasks that require AI more sophisticated than ANI. Autonomous cars are one example. Sure, their narrow goal is "driving", but that turns out to be a very general goal when you really think about it, because you have to account for, predict, and react to so many different stimuli at once. Therefore, autonomous cars are only happening when we have AGI, right?
Well...
So, for the past few years, I've been trying to get people to listen to my explanation that our model of AI types has a gaping hole in it. We only have three types at present: ANI or NAI (narrow AI / not AI, which can only do one thing), AGI (general AI, which can do everything), and ASI (artificial superintelligence, which can do everything and then some at a bizarro superhuman level). But ever since roughly 2015, I've been asking myself: what about AI that can do some things but not everything? That is, it might be specialized for one specific class of tasks, but it can do many or all of the subtasks within that class. Or, perhaps more simply, it's generalized across a cluster of tasks and capabilities but isn't general AI. It seems obvious to me that this is the next step in AI, and we even have networks that do this: transformers, for example, specialize in natural-language generation, but from text synthesis you can also produce rudimentary images or organize MIDI files; even with pure text synthesis, you can generate anything from poems to scripts and everything in between. Normally you'd need a separate ANI specialized for each one of those tasks, and it's true that most transformers right now are trained to do one thing specifically. But as long as they generate character data, they can theoretically generate more than just words.
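As a rough sketch of that "same interface, different character data" idea, here is what it looks like in code, assuming the Hugging Face transformers library and the publicly available gpt2 checkpoint (my choice purely for illustration). The stock model is trained on web text, so the music prompt below would realistically need a checkpoint fine-tuned on ABC or MIDI-as-text data to produce anything listenable; the point is only that nothing about the architecture or the call changes when the characters stop being prose.

```python
# Sketch: one text-generation model, several kinds of "character data".
# Assumes the Hugging Face `transformers` library and the public `gpt2`
# checkpoint; the music prompt is illustrative and would need a fine-tuned
# checkpoint to give usable output.
from transformers import pipeline, set_seed

set_seed(42)
generate = pipeline("text-generation", model="gpt2")

# Prose/poetry: what the base model was actually trained on.
poem = generate("Roses are red, violets are blue,", max_length=40)

# Music as text (an ABC notation header): same call, different character stream.
tune = generate("X:1\nT:Untitled\nM:4/4\nK:C\n", max_length=120)

print(poem[0]["generated_text"])
print(tune[0]["generated_text"])
```

Whether that counts as generalization is exactly the AXI question: it's still one narrow objective (predict the next token), but the space of tasks that objective covers is much wider than a single job.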
This isn't "proto-AGI" or anything close; if anything, it's closer to ANI. But it isn't ANI; it's too generalized to be ANI.
The gist is that this proves, to me, that it's possible for AI to handle narrowly generalized tasks and thus be far stronger than any narrow AI network that exists today, even while remaining weaker than any theoretical future AGI. This is a bridge from here to there, and we've already started crossing it in the past couple of years.
The term I've coined for that kind of AI is "AXI" or "artificial expert intelligence" (not to be confused with expert systems). It makes sense in theory: an expert is one who specializes in a particular field rather than a worker who does one singular task or a polymath who knows everything. It's certainly better than "proto-AGI" because many will latch onto the AGI part of the name to discredit these sorts of technologies, and even then it really isn't proto-AGI anyway.
This has some implications for the "AGI Fallacy", if I may coin another term. If we believe that something like synthesizing a 5-minute photorealistic video with audio requires AGI, then we can comfortably say it's 50+ years away and not have to worry about it. But if a suitably strong AXI does it in only five years, then we may have a problem: by assuming that [X] is 50 years away, we compartmentalize it in the same place as things like our grandchildren going to college, the distant-future effects of climate change or astronomical events, science fiction, and our own deaths. That place is fairly low on our internal list of concerns. If it's only five years away, it becomes a much more immediate concern, and we're more apt to do something about it, or at least think through how we might deal with it.
This is part of why so little is being done about climate change: even some of the most dire predictions still place the start of the worst effects decades in the future, which dulls our sense of responsibility to do or care about anything now, despite the fact that certain effects could be triggered much sooner by unforeseen events.
It can be used to justify skepticism of any sort of change, too. The AGI Fallacy explains why people tend to think automation is decades away. We tend to picture automation as "humanoid robots taking jobs from men in blue overalls and hardhats, burger flippers, and truck drivers", and because humanoid robots are still rather pathetic (even though they can backflip and freerun now), we can comfortably say "jobs aren't going away any time soon."
But for one thing, media synthesis is already a thing, and the basic principle there is that disembodied neural networks can automate any data-oriented task (including much of the entertainment industry, the news industry, and many white-collar office tasks) as long as they have enough power, and that might start hitting as soon as the 2020s. Of course, there are also predictions that say "we need AGI to get an NLG program to write a novel" or "we need AGI to generate a 5-minute animation", and yet both tasks look like they may be accomplished within just a few zero-AGI-filled years. And autonomous robots don't need to be fully generalized to be better than humans; they just need to be able to handle unexpected issues. If you have strong enough ANI and AXI to handle vision, spatial awareness, and prediction, you could conceivably get a general-purpose robot worker, and that might only take 10 to 15 years as opposed to 50+.
Sure, the robot with AGI is going to be better than the one with a bunch of less generalized networks, but it's not like we can only make these robots with AGI in the first place. And I think we're going to see what I mean in very short order.
I think autonomous trucks, for example, can be done with sufficiently powerful ANI. If not ANI, then certainly AXI.
The cold fact is that most of the faculties of our mind can be reduced to certain algorithms; the issue is and has always been replicating the whole mind. And I'm merely saying that, in order to get to our sci-fi future, we don't actually need to do that (though it would greatly help).
TLDR: There's a fallacy where we assume that we need AGI to do something, when a sufficiently advanced narrow AI (or narrowly-generalized AI) will do that same thing much sooner. Since we don't expect it to be done so soon, we don't prepare for it properly.
If I'm wrong, please correct me.
Submitted by /u/Yuli-Ban. Source: https://www.reddit.com/r/Futurology/comments/e23njq/the_agi_fallacy_a_potential_phenomenon_where_we/