#I think I can agree with the ai folks in that the entity makes no sense
kindlythevoid · 1 year
Text
M:I Dead Reckoning Spoilers
This is your last warning.
OKAY so on the one hand, I cannot believe they killed off Ilsa!! Did they do a good job of foreshadowing it? Yes, I saw it coming a mile away even before they got to the bridge. However, (in my opinion) it was a disappointing move.
I was so down for Hayley Atwell to be in this movie. In fact, I still really love her character + character development over the course of the movie!! I think she was well fleshed out for her first movie!! But just because they introduced a new character (and dare I even say… love interest???) does not mean they can kill off the only other female on the team. They one-hundred-percent fridged her for the sake of Ethan’s dilemma later on in the movie, and that was after they already set up the first fridge in the beginning of the movie to show how angry/scared he was of this guy.
Furthermore, they killed her off to perpetuate Ethan’s tragic love life. We know that before Ethan joined the IMF, his SO was killed by Gabriel. In the first movie, the woman he was seeing on his team is killed with the rest of them in the beginning of the movie. We never hear from the thief (Ms. Hall?) after the second. Julia we know fakes her death and becomes a ghost after the third movie. And then, in Fallout, it seems like Ethan’s finally given permission (by Julia or himself or the circumstances, take your pick) to move on and be with Ilsa. Only for it to all fall apart in the very next movie!!
There was no reason to kill off Ilsa. It was bad writing, especially after hinting at Grace being his next love interest and killing off Ilsa because of her. There was no reason Grace couldn’t get along with everyone as a friend of the IMF, or as a contact, or what have you, while Ilsa stayed on as Ethan’s SO as well as a rogue agent/IMF member/ghost contact.
That being said.
Now that I have finished blaming the writers, I must congratulate them on the way they handled Ilsa’s death. I mentioned earlier that I was able to see Ilsa’s death foreshadowed in the cinematography and the overall feel and narrative of the movie: faking her death in the beginning of the movie to prime the audience for her real death later, having her face blown up on the TV screen behind the director when he tells Ethan that continuing this path will cost him, and the overly blatant callout by the entity that she was closer to the bridge than Ethan.
Concerning the bridge itself, I feel like it was somewhat ridiculous to have Ilsa carry a sword over a gun, but those feelings aside, the sword v. knife fight was freaking awesome. They did do her (as well as Grace before) justice in the fight against Gabriel in that the fights were visually really cool and the women weren’t taken out in two swings.
And on the matter of Grace’s presence at the bridge, I have to say it was very in character for Ilsa to pursue Gabriel in an effort to save Grace. From our first meeting with Ilsa, she was always willing to put an ally’s life before her own (just as Ethan is so prone to doing) and so character-wise, it makes sense for Ilsa to go in without backup, without waiting.
Am I still mad that she died? Absolutely! I could also talk about their usage and subsequent disposal of Paris (a really awesome and fun character; tbh I am not totally sure that’s her name), but right now I wanted to focus on Ilsa’s death, as Paris’s role in the movie requires her own pros and cons list. However, I can say that I (personally) think it was handled much better than some other movies deal with their characters’ deaths. Overall, it was a fantastic movie and I will be watching part two next year to see how they round off this arc.
21 notes · View notes
dailynewswebsite · 4 years
Text
If a robot is conscious, is it OK to turn it off? The moral implications of building true AIs
What do you owe a devoted android like Data? CBS
In the “Star Trek: The Next Generation” episode “The Measure of a Man,” Data, an android crew member of the Enterprise, is to be dismantled for research purposes unless Captain Picard can argue that Data deserves the same rights as a human being. Naturally the question arises: What is the basis upon which something has rights? What gives an entity moral standing?
The philosopher Peter Singer argues that creatures that can feel pain or suffer have a claim to moral standing. He argues that nonhuman animals have moral standing, since they can feel pain and suffer. Limiting it to people would be a form of speciesism, something akin to racism and sexism.
Without endorsing Singer’s line of reasoning, we might wonder whether it can be extended further to an android robot like Data. It would require that Data can either feel pain or suffer. And how you answer that depends on how you understand consciousness and intelligence.
As real artificial intelligence technology advances toward Hollywood’s imagined versions, the question of moral standing grows more important. If AIs have moral standing, philosophers like me reason, it could follow that they have a right to life. That means you cannot simply dismantle them, and might also mean that people shouldn’t interfere with their pursuing their goals.
Garry Kasparov was beaten by Deep Blue, an AI with a very deep intelligence in a single narrow niche. Stan Honda/AFP via Getty Images
Two flavors of intelligence and a test
IBM’s Deep Blue chess machine was successfully trained to beat grandmaster Garry Kasparov. But it couldn’t do anything else. This computer had what’s called domain-specific intelligence.
On the other hand, there’s the kind of intelligence that allows for the ability to do a variety of things well. It’s called domain-general intelligence. It’s what lets people cook, ski and raise children – tasks that are related, but also very different.
Artificial general intelligence, AGI, is the term for machines that have domain-general intelligence. Arguably no machine has yet demonstrated that kind of intelligence. This summer, a startup called OpenAI released a new version of its Generative Pre-Training language model. GPT-3 is a natural-language-processing system, trained to read and write so that it can be easily understood by people.
It drew immediate notice, not just because of its impressive ability to mimic stylistic flourishes and put together plausible content, but also because of how far it had come from a previous version. Despite this impressive performance, GPT-3 doesn’t actually know anything beyond how to string words together in various ways. AGI remains quite far off.
Named after pioneering AI researcher Alan Turing, the Turing test helps determine when an AI is intelligent. Can a person conversing with a hidden AI tell whether it is an AI or a human being? If he can’t, then for all practical purposes, the AI is intelligent. But this test says nothing about whether the AI might be conscious.
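As an aside, the test itself is just a blinded guessing protocol. Here is a minimal sketch of that protocol in Python; the `judge`, `respond_ai` and `respond_human` callables are placeholders you would supply yourself, not part of any real library.

```python
import random

def turing_test(judge, respond_ai, respond_human, questions, trials=100):
    """Blinded imitation game: the judge sees only a transcript from a hidden
    respondent and guesses whether it was the AI. Accuracy near chance (~0.5)
    means the AI is, for practical purposes, indistinguishable from a human."""
    correct = 0
    for _ in range(trials):
        is_ai = random.random() < 0.5                    # hide who is answering
        respond = respond_ai if is_ai else respond_human
        transcript = [(q, respond(q)) for q in questions]
        if judge(transcript) == is_ai:                   # judge returns True for "AI"
            correct += 1
    return correct / trials
```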
Two kinds of consciousness
There are two parts to consciousness. First, there’s the what-it’s-like-for-me aspect of an experience, the sensory part of consciousness. Philosophers call this phenomenal consciousness. It’s about how you experience a phenomenon, like smelling a rose or feeling pain.
In contrast, there’s also access consciousness. That’s the ability to report, reason, behave and act in a coordinated and responsive manner to stimuli based on goals. For example, when I pass the soccer ball to my friend making a play on the goal, I am responding to visual stimuli, acting from prior training, and pursuing a goal determined by the rules of the game. I make the pass automatically, without conscious deliberation, in the flow of the game.
Blindsight nicely illustrates the difference between the two kinds of consciousness. Someone with this neurological condition might report, for example, that they cannot see anything in the left side of their visual field. But if asked to pick up a pen from an array of objects in the left side of their visual field, they can reliably do so. They cannot see the pen, yet they can pick it up when prompted – an example of access consciousness without phenomenal consciousness.
Data is an android. How do these distinctions play out with respect to him?
Do Data’s qualities grant him moral standing? CBS
The Data dilemma
The android Data demonstrates that he is self-aware in that he can monitor whether or not, for example, he is optimally charged or there is internal damage to his robotic arm.
Data is also intelligent in the general sense. He does a lot of distinct things at a high level of mastery. He can fly the Enterprise, take orders from Captain Picard and reason with him about the best path to take.
He can also play poker with his shipmates, cook, discuss topical issues with close friends, fight enemies on alien planets and engage in various forms of physical labor. Data has access consciousness. He would clearly pass the Turing test.
However, Data most likely lacks phenomenal consciousness – he does not, for example, delight in the scent of roses or experience pain. He embodies a supersized version of blindsight. He is self-aware and has access consciousness – he can grab the pen – but across all his senses he lacks phenomenal consciousness.
Now, if Data doesn’t feel pain, at least one of the reasons Singer offers for giving a creature moral standing is not fulfilled. But Data might fulfill the other condition of being able to suffer, even without feeling pain. Suffering might not require phenomenal consciousness the way pain essentially does.
For example, what if suffering were also defined as the idea of being thwarted from pursuing a just cause without causing harm to others? Suppose Data’s goal is to save his crewmate, but he can’t reach her because of damage to one of his limbs. Data’s reduction in functioning, which keeps him from saving his crewmate, is a kind of nonphenomenal suffering. He would have preferred to save the crewmate, and would be better off if he did.
In the episode, the question ends up resting not on whether Data is self-aware – that is not in doubt. Nor is it in question whether he is intelligent – he easily demonstrates that he is, in the general sense. What is unclear is whether he is phenomenally conscious. Data isn’t dismantled because, in the end, his human judges cannot agree on the significance of consciousness for moral standing.
When the 1s and 0s add up to a moral being. ktsimage/iStock via Getty Images Plus
Should an AI get moral standing?
Data is kind – he acts to support the well-being of his crewmates and those he encounters on alien planets. He obeys orders from people and appears unlikely to harm them, and he seems to protect his own existence. For these reasons he appears peaceful and easier to accept into the realm of things that have moral standing.
But what about Skynet in the “Terminator” movies? Or the worries recently expressed by Elon Musk about AI being more dangerous than nukes, and by Stephen Hawking about AI ending humankind?
Human beings don’t lose their claim to moral standing just because they act against the interests of another person. In the same way, you can’t automatically say that just because an AI acts against the interests of humanity or another AI, it doesn’t have moral standing. You might be justified in fighting back against an AI like Skynet, but that does not take away its moral standing. If moral standing is granted in virtue of the capacity to suffer nonphenomenally, then Skynet and Data both get it, even if only Data wants to help human beings.
There are no artificial general intelligence machines yet. But now is the time to consider what it would take to grant them moral standing. How humanity chooses to answer the question of moral standing for nonbiological creatures will have big implications for how we deal with future AIs – whether kind and helpful like Data, or set on destruction, like Skynet.
Anand Vaidya does not work for, consult for, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
from Growth News https://growthnews.in/if-a-robot-is-conscious-is-it-ok-to-turn-it-off-the-moral-implications-of-building-true-ais/ via https://growthnews.in
1 note · View note
0100100100101101 · 7 years
Link
I’ve heard that in the future computerized AIs will become so much smarter than us that they will take all our jobs and resources, and humans will go extinct. Is this true?
That’s the most common question I get whenever I give a talk about AI. The questioners are earnest; their worry stems in part from some experts who are asking themselves the same thing. These folks are some of the smartest people alive today, such as Stephen Hawking, Elon Musk, Max Tegmark, Sam Harris, and Bill Gates, and they believe this scenario very likely could be true. Recently at a conference convened to discuss these AI issues, a panel of nine of the most informed gurus on AI all agreed this superhuman intelligence was inevitable and not far away.
Yet buried in this scenario of a takeover of superhuman artificial intelligence are five assumptions which, when examined closely, are not based on any evidence. These claims might be true in the future, but there is no evidence to date to support them. The assumptions behind a superhuman intelligence arising soon are:
Artificial intelligence is already getting smarter than us, at an exponential rate.
We’ll make AIs into a general purpose intelligence, like our own.
We can make human intelligence in silicon.
Intelligence can be expanded without limit.
Once we have exploding superintelligence it can solve most of our problems.
In contradistinction to this orthodoxy, I find the following five heresies to have more evidence to support them.
Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
Humans do not have general purpose minds, and neither will AIs.
Emulation of human thinking in other media will be constrained by cost.
Dimensions of intelligence are not infinite.
Intelligences are only one factor in progress.
If the expectation of a superhuman AI takeover is built on five key assumptions that have no basis in evidence, then this idea is more akin to a religious belief — a myth. In the following paragraphs I expand my evidence for each of these five counter-assumptions, and make the case that, indeed, a superhuman AI is a kind of myth.
1.
The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence — as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence of, say, a genius — almost as if intelligence were a sound level in decibels. Of course, it is then very easy to imagine the extension so that the loudness of intelligence continues to grow, eventually to exceed our own high intelligence and become a super-loud intelligence — a roar! — way beyond us, and maybe even off the chart.
This model is topologically equivalent to a ladder, so that each rung of intelligence is a step higher than the one before. Inferior animals are situated on lower rungs below us, while higher-level intelligence AIs will inevitably overstep us onto higher rungs. Time scales of when it happens are not important; what is important is the ranking—the metric of increasing intelligence.
The problem with this model is that it is mythical, like the ladder of evolution. The pre-Darwinian view of the natural world supposed a ladder of being, with inferior animals residing on rungs below human. Even post-Darwin, a very common notion is the “ladder” of evolution, with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it. So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view.
A more accurate chart of the natural evolution of species is a disk radiating outward, like the circular phylogenetic tree first devised by David Hillis at the University of Texas and based on DNA. This deep genealogy mandala begins in the middle with the most primeval life forms, and then branches outward in time. Time moves outward so that the most recent species of life living on the planet today form the perimeter of the circumference of this circle. This picture emphasizes a fundamental fact of evolution that is hard to appreciate: Every species alive today is equally evolved. Humans exist on this outer ring alongside cockroaches, clams, ferns, foxes, and bacteria. Every one of these species has undergone an unbroken chain of three billion years of successful reproduction, which means that bacteria and cockroaches today are as highly evolved as humans. There is no ladder.
Likewise, there is no ladder of intelligence. Intelligence is not a single dimension. It is a complex of many types and modes of cognition, each one a continuum. Let’s take the very simple task of measuring animal intelligence. If intelligence were a single dimension we should be able to arrange the intelligences of a parrot, a dolphin, a horse, a squirrel, an octopus, a blue whale, a cat, and a gorilla in the correct ascending order in a line. We currently have no scientific evidence of such a line. One reason might be that there is no difference between animal intelligences, but we don’t see that either. Zoology is full of remarkable differences in how animals think. But maybe they all have the same relative “general intelligence?” It could be, but we have no measurement, no single metric for that intelligence. Instead we have many different metrics for many different types of cognition.
Instead of a single decibel line, a more accurate model for intelligence is to chart its possibility space, like the renderings of possible forms created by an algorithm written by Richard Dawkins. Intelligence is a combinatorial continuum. Multiple nodes, each node a continuum, create complexes of high diversity in high dimensions. Some intelligences may be very complex, with many sub-nodes of thinking. Others may be simpler but more extreme, off in a corner of the space. These complexes we call intelligences might be thought of as symphonies comprising many types of instruments. They vary not only in loudness, but also in pitch, melody, color, tempo, and so on. We could think of them as ecosystems. And in that sense, the different component nodes of thinking are co-dependent, and co-created.
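To make the point concrete, treat a mind as a profile over several cognitive modes rather than a point on one line. In the toy sketch below — the modes and the scores are invented purely for illustration — neither profile dominates the other, so there is no single answer to which one is “smarter.”

```python
# Toy illustration: minds as profiles over many cognitive modes rather than a
# single scalar. The modes and the scores are invented for illustration only.
squirrel = {"spatial_memory": 9.5, "symbolic_reasoning": 0.5, "social_inference": 1.0}
human    = {"spatial_memory": 3.0, "symbolic_reasoning": 8.0, "social_inference": 8.5}

def dominates(a, b):
    """True only if mind `a` scores at least as high as `b` on every mode."""
    return all(a[mode] >= b[mode] for mode in a)

print(dominates(human, squirrel))   # False -- the squirrel wins on spatial memory
print(dominates(squirrel, human))   # False -- the human wins on the other modes
# With neither profile dominating, "smarter than" has no single ordering.
```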
Human minds are societies of minds, in the words of Marvin Minsky. We run on ecosystems of thinking. We contain multiple species of cognition that do many types of thinking: deduction, induction, symbolic reasoning, emotional intelligence, spatial logic, short-term memory, and long-term memory. The entire nervous system in our gut is also a type of brain with its own mode of cognition. We don’t really think with just our brain; rather, we think with our whole bodies.
These suites of cognition vary between individuals and between species. A squirrel can remember the exact location of several thousand acorns for years, a feat that blows human minds away. So in that one type of cognition, squirrels exceed humans. That superpower is bundled with some other modes that are dim compared to ours in order to produce a squirrel mind. There are many other specific feats of cognition in the animal kingdom that are superior to humans, again bundled into different systems.
Likewise in AI. Artificial minds already exceed humans in certain dimensions. Your calculator is a genius in math; Google’s memory is already beyond our own in a certain dimension. We are engineering AIs to excel in specific modes. Some of these modes are things we can do, but they can do better, such as probability or math. Others are type of thinking we can’t do at all — memorize every single word on six billion web pages, a feat any search engine can do. In the future, we will invent whole new modes of cognition that don’t exist in us and don’t exist anywhere in biology. When we invented artificial flying we were inspired by biological modes of flying, primarily flapping wings. But the flying we invented — propellers bolted to a wide fixed wing — was a new mode of flying unknown in our biological world. It is alien flying. Similarly, we will invent whole new modes of thinking that do not exist in nature. In many cases they will be new, narrow, “small,” specific modes for specific jobs — perhaps a type of reasoning only useful in statistics and probability.
In other cases the new mind will be complex types of cognition that we can use to solve problems our intelligence alone cannot. Some of the hardest problems in business and science may require a two-step solution. Step one is: Invent a new mode of thought to work with our minds. Step two: Combine to solve the problem. Because we are solving problems we could not solve before, we want to call this cognition “smarter” than us, but really it is different than us. It’s the differences in thinking that are the main benefits of AI. I think a useful model of AI is to think of it as alien intelligence (or artificial aliens). Its alienness will be its chief asset.
At the same time we will integrate these various modes of cognition into more complicated, complex societies of mind. Some of these complexes will be more complex than us, and because they will be able to solve problems we can’t, some will want to call them superhuman. But we don’t call Google a superhuman AI even though its memory is beyond us, because there are many things we can do better than it. These complexes of artificial intelligences will for sure be able to exceed us in many dimensions, but no one entity will do all we do better. It’s similar to the physical powers of humans. The industrial revolution is 200 years old, and while all machines as a class can beat the physical achievements of an individual human (speed of running, weight lifting, precision cutting, etc.), there is no one machine that can beat an average human in everything he or she does.
Even as the society of minds in an AI become more complex, that complexity is hard to measure scientifically at the moment. We don’t have good operational metrics of complexity that could determine whether a cucumber is more complex than a Boeing 747, or the ways their complexity might differ. That is one of the reasons why we don’t have good metrics for smartness as well. It will become very difficult to ascertain whether mind A is more complex than mind B, and for the same reason to declare whether mind A is smarter than mind B. We will soon arrive at the obvious realization that “smartness” is not a single dimension, and that what we really care about are the many other ways in which intelligence operates — all the other nodes of cognition we have not yet discovered.
2.
The second misconception about human intelligence is our belief that we have a general purpose intelligence. This repeated belief influences a commonly stated goal of AI researchers to create an artificial general purpose intelligence (AGI). However, if we view intelligence as providing a large possibility space, there is no general purpose state. Human intelligence is not in some central position, with other specialized intelligences revolving around it. Rather, human intelligence is a very, very specific type of intelligence that has evolved over many millions of years to enable our species to survive on this planet. Mapped in the space of all possible intelligences, a human-type of intelligence will be stuck in a corner somewhere, just as our world is stuck at the edge of a vast galaxy.
We can certainly imagine, and even invent, a Swiss-army knife type of thinking. It kind of does a bunch of things okay, but none of them very well. AIs will follow the same engineering maxim that all things made or born must follow: You cannot optimize every dimension. You can only have tradeoffs. You can’t have a general multi-purpose unit outperform specialized functions. A big “do everything” mind can’t do everything as well as those things done by specialized agents. Because we believe our human minds are general purpose, we tend to believe that cognition does not follow the engineer’s tradeoff, that it will be possible to build an intelligence that maximizes all modes of thinking. But I see no evidence of that. We simply haven’t invented enough varieties of minds to see the full space (and so far we have tended to dismiss animal minds as a singular type with variable amplitude on a single dimension.)
3.
Part of this belief in maximum general-purpose thinking comes from the concept of universal computation. Formally described in the 1930s as the Church-Turing hypothesis, this conjecture states that all computation that meets a certain threshold is equivalent. Therefore there is a universal core to all computation: whether it occurs in one machine with many fast parts, or slow parts, or even in a biological brain, it is the same logical process. This means that you should be able to emulate any computational process (thinking) in any machine that can do “universal” computation. Singularitans rely on this principle for their expectation that we will be able to engineer silicon brains to hold human minds, and that we can make artificial minds that think like humans, only much smarter. We should be skeptical of this hope because it relies on a misunderstanding of the Church-Turing hypothesis.
The starting point of the theory is: “Given infinite tape [memory] and time, all computation is equivalent.” The problem is that in reality, no computer has infinite memory or time. When you are operating in the real world, real time makes a huge difference, often a life-or-death difference. Yes, all thinking is equivalent if you ignore time. Yes, you can emulate human-type thinking in any matrix you want, as long as you ignore time or the real-life constraints of storage and memory. However, if you incorporate time, then you have to restate the principle in a significant way: Two computing systems operating on vastly different platforms won’t be equivalent in real time. That can be restated again as: The only way to have equivalent modes of thinking is to run them on equivalent substrates. The physical matter you run your computation on — particularly as it gets more complex — greatly influences the type of cognition that can be done well in real time.
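A back-of-the-envelope sketch of that real-time point follows; the step time and the emulation overhead factor are invented numbers, chosen only to show the shape of the argument.

```python
# Computational equivalence vs. real-time equivalence. The numbers below are
# arbitrary illustrations, not measurements of any actual system.
NATIVE_STEP_SECONDS = 1e-9      # time per step on the "native" substrate
EMULATION_OVERHEAD = 10_000     # native steps needed to emulate one foreign step

def wall_clock_seconds(steps, emulated=False):
    per_step = NATIVE_STEP_SECONDS * (EMULATION_OVERHEAD if emulated else 1)
    return steps * per_step

task_steps = 10**12             # the same abstract computation either way
print(wall_clock_seconds(task_steps))                 # ~17 minutes natively
print(wall_clock_seconds(task_steps, emulated=True))  # ~116 days when emulated
# Both runs compute the same function (the Church-Turing sense of equivalence),
# but only one finishes on a timescale that matters in the real world.
```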
I will extend that further to claim that the only way to get a very human-like thought process is to run the computation on very human-like wet tissue. That also means that very big, complex artificial intelligences run on dry silicon will produce big, complex, unhuman-like minds. If it were possible to build artificial wet brains using human-like grown neurons, my prediction is that their thought would be more similar to ours. The benefits of such a wet brain are proportional to how similar we make the substrate. The costs of creating wetware are huge, and the closer that tissue gets to human brain tissue, the more cost-efficient it is to just make a human. After all, making a human is something we can do in nine months.
Furthermore, as mentioned above, we think with our whole bodies, not just with our minds. We have plenty of data showing how our gut’s nervous system guides our “rational” decision-making processes, and can predict and learn. The more we model the entire human body system, the closer we get to replicating it. An intelligence running on a very different body (in dry silicon instead of wet carbon) would think differently.
I don’t see that as a bug but rather as a feature. As I argue in point 2, thinking differently from humans is AI’s chief asset. This is yet another reason why calling it “smarter than humans” is misleading and misguided.
4.
At the core of the notion of a superhuman intelligence — particularly the view that this intelligence will keep improving itself — is the essential belief that intelligence has an infinite scale. I find no evidence for this. Again, mistaking intelligence as a single dimension helps this belief, but we should understand it as a belief. There is no other physical dimension in the universe that is infinite, as far as science knows so far. Temperature is not infinite — there is finite cold and finite heat. There is finite space and time. Finite speed. Perhaps the mathematical number line is infinite, but all other physical attributes are finite. It stands to reason that reason itself is finite, and not infinite. So the question is, where is the limit of intelligence? We tend to believe that the limit is way beyond us, way “above” us, as we are “above” an ant. Setting aside the recurring problem of a single dimension, what evidence do we have that the limit is not us? Why can’t we be at the maximum? Or maybe the limits are only a short distance away from us? Why do we believe that intelligence is something that can continue to expand forever?
A much better way to think about this is to see our intelligence as one of a million types of possible intelligences. So while each dimension of cognition and computation has a limit, if there are hundreds of dimensions, then there are uncountable varieties of mind — none of them infinite in any dimension. As we build or encounter these uncountable varieties of mind we might naturally think of some of them as exceeding us. In my recent book The Inevitable, I sketched out some of that variety of minds that were superior to us in some way. 
Some folks today may want to call each of these entities a superhuman AI, but the sheer variety and alienness of these minds will steer us to new vocabularies and insights about intelligence and smartness.
Second, believers in superhuman AI assume intelligence will increase exponentially (in some unidentified single metric), probably because they also assume it is already expanding exponentially. However, there is zero evidence so far that intelligence — no matter how you measure it — is increasing exponentially. By exponential growth I mean that artificial intelligence doubles in power on some regular interval. Where is that evidence? Nowhere I can find. If there is none now, why do we assume it will happen soon? The only things expanding on an exponential curve are the inputs to AI, the resources devoted to producing the smartness or intelligence. But the output performance is not on a Moore’s law rise. AIs are not getting twice as smart every 3 years, or even every 10 years.
I asked a lot of AI experts for evidence that intelligence performance is on an exponential gain, but all agreed we don’t have metrics for intelligence, and besides, it wasn’t working that way. When I asked Ray Kurzweil, the exponential wizard himself, where the evidence for exponential AI was, he wrote to me that AI does not increase explosively but rather by levels. He said: “It takes an exponential improvement both in computation and algorithmic complexity to add each additional level to the hierarchy…. So we can expect to add levels linearly because it requires exponentially more complexity to add each additional layer, and we are indeed making exponential progress in our ability to do this. We are not that many levels away from being comparable to what the neocortex can do, so my 2029 date continues to look comfortable to me.”
What Ray seems to be saying is that it is not that the power of artificial intelligence is exploding exponentially, but that the effort to produce it is exploding exponentially, while the output is merely raising a level at a time. This is almost the opposite of the assumption that intelligence is exploding. This could change at some time in the future, but artificial intelligence is clearly not increasing exponentially now.
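A small worked example of the arithmetic Kurzweil appears to be describing (the growth factors below are invented, not his figures): if each new level of the hierarchy costs, say, ten times the compute of the previous one, and available compute also grows tenfold per year, the system climbs roughly one level per year — linear output from exponential input.

```python
# Illustrative arithmetic only -- the 10x factors are invented, not measured.
def cost_of_level(n):
    return 10 ** n          # each additional level needs 10x more compute

def compute_available(year):
    return 10 ** year       # resources also grow 10x per year (exponential input)

def highest_level(year):
    """Highest level affordable with the compute available in a given year."""
    level = 0
    while cost_of_level(level + 1) <= compute_available(year):
        level += 1
    return level

print([highest_level(y) for y in range(8)])   # [0, 1, 2, 3, 4, 5, 6, 7]
# Exponentially growing input buys only linearly growing levels when each level
# is exponentially more expensive -- "adding levels linearly", not exploding.
```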
Therefore when we imagine an “intelligence explosion,” we should imagine it not as a cascading boom but rather as a scattering exfoliation of new varieties. A Cambrian explosion rather than a nuclear explosion. The results of accelerating technology will most likely not be super-human, but extra-human. Outside of our experience, but not necessarily “above” it.
5.
Another unchallenged belief of a super AI takeover, with little evidence, is that a super, near-infinite intelligence can quickly solve our major unsolved problems.
Many proponents of an explosion of intelligence expect it will produce an explosion of progress. I call this mythical belief “thinkism.” It’s the fallacy that future levels of progress are only hindered by a lack of thinking power, or intelligence. (I might also note that the belief that thinking is the magic super ingredient to a cure-all is held by a lot of guys who like to think.)
Let’s take curing cancer or prolonging longevity. These are problems that thinking alone cannot solve. No amount of thinkism will discover how the cell ages, or how telomeres fall off. No intelligence, no matter how super duper, can figure out how the human body works simply by reading all the known scientific literature in the world today and then contemplating it. No super AI can simply think about all the current and past nuclear fission experiments and then come up with working nuclear fusion in a day. A lot more than just thinking is needed to move between not knowing how things work and knowing how they work. There are tons of experiments in the real world, each of which yields tons and tons of contradictory data, requiring further experiments to form the correct working hypothesis. Thinking about the potential data will not yield the correct data.
Thinking (intelligence) is only part of science; maybe even a small part. As one example, we don’t have enough proper data to come close to solving the death problem. In the case of working with living organisms, most of these experiments take calendar time. The slow metabolism of a cell cannot be sped up. They take years, or months, or at least days, to get results. If we want to know what happens to subatomic particles, we can’t just think about them. We have to build very large, very complex, very tricky physical structures to find out. Even if the smartest physicists were 1,000 times smarter than they are now, without a collider they would know nothing new.
There is no doubt that a super AI can accelerate the process of science. We can make computer simulations of atoms or cells and we can keep speeding them up by many factors, but two issues limit the usefulness of simulations in obtaining instant progress. First, simulations and models can only be faster than their subjects because they leave something out. That is the nature of a model or simulation. Also worth noting: The testing, vetting and proving of those models also has to take place in calendar time to match the rate of their subjects. The testing of ground truth can’t be sped up.
These simplified versions in a simulation are useful in winnowing down the most promising paths, so they can accelerate progress. But there is no excess in reality; everything real makes a difference to some extent; that is one definition of reality. As models and simulations are beefed up with more and more detail, they come up against the limit that reality runs faster than a 100 percent complete simulation of it. That is another definition of reality: the fastest possible version of all the details and degrees of freedom present. If you were able to model all the molecules in a cell and all the cells in a human body, this simulation would not run as fast as a human body. No matter how much you thought about it, you still need to take time to do experiments, whether in real systems or in simulated systems.
To be useful, artificial intelligences have to be embodied in the world, and that world will often set their pace of innovations. Without conducting experiments, building prototypes, having failures, and engaging in reality, an intelligence can have thoughts but not results. There won’t be instant discoveries the minute, hour, day or year a so-called “smarter-than-human” AI appears. Certainly the rate of discovery will be significantly accelerated by AI advances, in part because alien-ish AI will ask questions no human would ask, but even a vastly powerful (compared to us) intelligence doesn’t mean instant progress. Problems need far more than just intelligence to be solved.
Not only are cancer and longevity problems that intelligence alone can’t solve, so is intelligence itself. The common trope among Singularitans is that once you make an AI “smarter than humans,” then all of a sudden it thinks hard and invents an AI “smarter than itself,” which thinks harder and invents one yet smarter, until it explodes in power, almost becoming godlike. We have no evidence that merely thinking about intelligence is enough to create new levels of intelligence. This kind of thinkism is a belief. We have a lot of evidence that in addition to great quantities of intelligence we need experiments, data, trial and error, weird lines of questioning, and all kinds of things beyond smartness to invent new kinds of successful minds.
I’d conclude by saying that I could be wrong about these claims. We are in the early days. We might discover a universal metric for intelligence; we might discover it is infinite in all directions. Because we know so little about what intelligence is (let alone consciousness), the possibility of some kind of AI singularity is greater than zero. I think all the evidence suggests that such a scenario is highly unlikely, but it is greater than zero.
So while I disagree on its probability, I am in agreement with the wider aims of OpenAI and the smart people who worry about a superhuman AI — that we should engineer friendly AIs and figure out how to instill self-replicating values that match ours. Though I think a superhuman AI is a remote possible existential threat (and worthy of considering), I think its unlikeliness (based on the evidence we have so far) should not be the guide for our science, policies, and development. An asteroid strike on the Earth would be catastrophic. Its probability is greater than zero (and so we should support the B612 Foundation), but we shouldn’t let the possibility of an asteroid strike govern our efforts in, say, climate change, or space travel, or even city planning.
Likewise, the evidence so far suggests AIs most likely won’t be superhuman but will be many hundreds of extra-human new species of thinking, most different from humans, none that will be general purpose, and none that will be an instant god solving major problems in a flash. Instead there will be a galaxy of finite intelligences, working in unfamiliar dimensions, exceeding our thinking in many of them, working together with us in time to solve existing problems and create new problems.
I understand the beautiful attraction of a superhuman AI god. It’s like a new Superman. But like Superman, it is a mythical figure. Somewhere in the universe a Superman might exist, but he is very unlikely. However myths can be useful, and once invented they won’t go away. The idea of a Superman will never die. The idea of a superhuman AI Singularity, now that it has been birthed, will never go away either. But we should recognize that it is a religious idea at this moment and not a scientific one. If we inspect the evidence we have so far about intelligence, artificial and natural, we can only conclude that our speculations about a mythical superhuman AI god are just that: myths.
Many isolated islands in Micronesia made their first contact with the outside world during World War II. Alien gods flew over their skies in noisy birds, dropped food and goods on their islands, and never returned. Religious cults sprang up on the islands praying to the gods to return and drop more cargo. Even now, fifty years later, many still wait for the cargo to return. It is possible that superhuman AI could turn out to be another cargo cult. A century from now, people may look back to this time as the moment when believers began to expect a superhuman AI to appear at any moment and deliver them goods of unimaginable value. Decade after decade they wait for the superhuman AI to appear, certain that it must arrive soon with its cargo.
Yet non-superhuman artificial intelligence is already here, for real. We keep redefining it, increasing its difficulty, which imprisons it in the future, but in the wider sense of alien intelligences — of a continuous spectrum of various smartness, intelligences, cognition, reasonings, learning, and consciousness — AI is already pervasive on this planet and will continue to spread, deepen, diversify, and amplify. No invention before will match its power to change our world, and by century’s end AI will touch and remake everything in our lives. Still the myth of a superhuman AI, poised to either gift us super-abundance or smite us into super-slavery (or both), will probably remain alive—a possibility too mythical to dismiss.
23 notes · View notes
peter-ino-salinas · 5 years
Text
The Rise & Fall of Nerd Kingdom
First, let me say: to the select few of you who have kept Nerd Kingdom, and our work, alive in your own ways through your underground movement and support — I adore and appreciate you, and thank you. In more ways than you will ever know, those few of you have kept me going on my own path today.
Second, if anyone who was involved is looked at negatively, they shouldn’t be. They are all brilliant people, each brilliant in their own regard. We are human; people had the best intentions, but sometimes values can change and shift when elements are introduced that people do not understand, for better or worse.
If any of you are reading this who were involved in any politics with Nerd Kingdom in any form, know that I always look back and appreciate the things I learned from all of you. Even if I did not always agree, I could have done better to listen and understand. In spite of my own best intentions, I lost myself with so many divided agendas and for that I am sorry. One day I genuinely hope we can be colleagues again, in whatever form that may be.
Genuine Intentions
The earliest forms of Nerd Kingdom were not EXACTLY what it became, but the motivation was much the same. It was based on some VERY old academic work and the mutual interest of my colleagues in various fields. It was very simple: young minds in a social environment with the correct tools have the capacity to advance upon and outmaneuver larger entities, which are subject to walls they engineer themselves into. Neat.
In not-so-technical speak, it was about Modders and Gamers. That was it. We had a quantifiable understanding (creepy topic, different post) that young minds who were not beaten into accepting conventions would define worlds. As it happened, the worlds we cared about were games, for all the nerdiest and still most scientific reasons. Early on we had a variety of ways to access data or work with modders, but it wasn’t the same as what happened when we saw Minecraft VERY early.
Comically, the way we raised money in the first place wasn’t even intentional. We were just a bunch of outcast nerds who were not always understood in Academia or “Business,” and we called out some truths about it before they happened. It was simple: “The slow nature of content creation in Minecraft will inevitably lead to someone decompiling the Java, building various versions, and we can correlate that to the rise and fall of any industry or business.” And lo and behold, what we predicted happened.
To be clear, while we had a more technical articulation, at our core we were also very much like many modders and gamers: we were passionate hackers who were ignored by the masses and still did things that were valuable to people, even if very often we were taken advantage of. Admittedly, some of us had a chip on our shoulder early, but we hid it well, myself included. Assuredly, that chip is now gone; I have you all and my own children to thank for forcing me to see who I was and who the people around me were.
ALL THE MONIES!
We were asked what we would do if we had access to money, given that we had just proved MAGIC (which was sadly obvious from our view of things), and the natural response was “Create a platform to empower people to learn and play and connect while also identifying measurable patterns of interactions to determine how to teach people how to solve problems and turn them into engineers or better creatives.” As you can imagine, this was confusing. So we took some advice from some “Business Folks” we met — which proved both awesome and horrible down the road — and it became one of our first curses, because this was the pitch we had to make:
“It's like Minecraft with Better Graphics and we have PhDs and stuff”
As you can imagine: all the monies. It’s something a lot of us had a conflict with; it was never about MINECRAFT, it was about an environment to play and understand. The reason we actually built our own engine was that the engines in the game industry, even now, are not REALLY all that great at empowering creation — it’s an inherent flaw in their architecture. So we had to go back to basics... SIMULATION! It’s all nerd talk, but it’s simple. A simulation engine, or a simulation at all, is simply a single structure of code that allows access to all parts of it. Wanna know what else was a simulation engine, sorta? Minecraft Java. It’s not really... well, it’s slower than we liked, so we built our own. Several times over, in fact, and each time we managed to benchmark performance in various areas that would actually outperform commercial engines, even now.
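To give a rough idea of what “a single structure of code that allows access to all parts of it” means in practice, here is a minimal, generic sketch in Python of a shared-world-state simulation loop. This is only an illustration of the general pattern; it is not Nerd Kingdom’s actual engine, and every name in it is made up.

```python
# Generic sketch of a shared-state simulation loop -- not the actual TUG engine.
from dataclasses import dataclass, field

@dataclass
class World:
    entities: dict = field(default_factory=dict)   # entity id -> component dict
    tick: int = 0

def physics_system(world: World) -> None:
    # Moves anything that has both a position and a velocity.
    for components in world.entities.values():
        if "pos" in components and "vel" in components:
            components["pos"] = tuple(p + v for p, v in zip(components["pos"], components["vel"]))

def ai_system(world: World) -> None:
    # Any system (AI, matchmaking, content tools) can read and write any part
    # of the same world state -- the "access to all parts of it" property.
    for components in world.entities.values():
        if components.get("hostile"):
            components["target"] = "nearest_player"

def step(world: World, systems) -> None:
    for system in systems:
        system(world)
    world.tick += 1

world = World(entities={1: {"pos": (0, 0), "vel": (1, 0)}, 2: {"hostile": True}})
step(world, [physics_system, ai_system])
print(world.entities)   # both systems acted on the one shared structure
```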
This resulted in some strange divides, however. This whole “Minecraft killer” thing gave me anxiety EVERY day, and I did my best to try and curb that from a market perspective, but I am sure many people saw it as a sales guise. It wasn’t, but I get it — this is what “Games” and “Business” do, so I cannot even be mad about it. As a result we actually became amazing friends with many amazing, influential minds in the Minecraft community, though the people behind Minecraft themselves didn’t pay much attention to us; I actually think some of them took jabs at us more than once. That was a bummer.
ALL THE DATAS!
What we tried to show, but did not always do so well with so much going on, is that this really was a massive “experiment” that had a lot of potential impact in a LOT of places, not just games. There is a good reason we kept attracting academics, amazing engineers, and investors, while also REALLY pissing people off or outright confusing them. No matter what, if you were exposed to what we were doing, you paid attention in some way, and you certainly had feelings about it in some way.
But our approach was meant to be genuine; it’s why we were always so transparent with stuff, to the point that the industry saw us as these cathartic consumers, not developers like them. Also nothing I can be upset about now — I get it, especially after having spent so much time in the “Core Industry.” All that data, all that research, all the things we were doing, we wanted to give back to the whole community: to expose the environment so Academia could connect with and understand people better, to demonstrate what young minds could do with the correct understanding, tools, and support, and to break several walls with one elegant and comically obvious statement — the only way to solve a slew of problems was to understand and empower young minds to see how brilliant they all really are, and not let us dumb adults or “professionals” make them feel otherwise.
What was NOT obvious to a lot of people, even some of those involved, is that we were creating a massive digital brain. This was a result of the kind of engine and tools we built. The latest iterations actually had the capacity to relate the input of one thing to the output of another, and it was getting WAY faster with time. This impacted AI — which could learn from players — matchmaking, content distribution, everything. And it was funnier still that we got better at doing it the more we realized that doing it the way the game industry does would never work.
This is something we wanted to build and give to everyone. It made the most sense that if we could create a great environment for people to play in and learn from, they would naturally want to make their own versions of it, and people would challenge those things and create new games with it, which we could empower. This is also why we became VERY close to influencers in preparation for that. We had “big plans,” which were honestly ridiculously obvious, but we had too many motivations in too many directions, which stopped that from happening.
Fighting Hacker Culture
Something that I often forget myself, but have found better ways of reminding myself of, is that we are all hackers by nature. Very few of us are architects or have the time to question “What was the impact of this thing?” — we idolize a hack and create religions around it. Unity, Unreal, PlayFab, Python, R, C++, Organic Food, Emotional Intelligence, Data Science, AI, etc., etc., etc. It all “does a thing,” but we don’t take the time to ask, “WTF is it REALLY doing, and how can it solve my problem better — or do I REALLY need to reinvent the wheel here?”
That being said, a hacker’s mentality, validated and accepted very young by the people who matter most — the masses, consumers, gamers — THAT is powerful and meaningful. THAT is what we wanted to empower. It’s not that people who are set in their ways could NOT figure stuff out; it’s sad that sometimes, as adults and professionals, we don’t have time to ask why, and we are trained to defend anything we believe in with emotions so big that logic doesn’t exist anymore.
But with games, this was powerful to us. We could empower young scientists, engineers, hackers, gamers, and storytellers who were not yet programmed in how to TREAT people, but were still curious, and who could learn and leverage one another to do great things. That, to us, was disruptive. We didn’t talk about that dream much in the studio — we knew it would freak people out — but the few of us leading who drove it, or who were with us but had to leave, we all kept it as our motivation and reminded one another in every way we could.
The Power of Influence
The influencers were also people we became VERY close to early on, not just for their insights into the culture of gaming, but also to find ways of creating a new “Business Structure” for them to connect with indies and modders. Our goal was to not control that business, but create better connections and better tools. To us, this was sadly the most obvious thing to do.
Influencers have a LOT of data, and even their MCNs have big teams that are sorta... well, built by humans with not-great motivations, so it’s hard to make sense of that data. By contrast, we were CREEPY good with that stuff, so we were starting to explore the wedge in between. Not ironically, the same stuff we were building for our own purposes was going to be used for the data coming in from the influencers too.
To us, the influencer was the PERFECT game publisher and the solution would have been able to negate their own reliance on brands, advertisers, MCNs, etc. They would get access to games and content, they would be able to stay genuine, modders could monetize, profits could be split with influencers, we would use smart data and “AI” (still hate that term) to create a full loop. More data would come in, Science, Tech, Influence, etc would all create a symbiotic ecosystem and we would just take cuts of profits to keep building tools to keep making everyone better.
Saying Goodbye
As a few of you may know, I lived and breathed this project. It cost me a lot on an emotional level. While trying to find balance and losing myself, it cost me a relationship with my children, it did result in a divorce, and it led to a lot of pain and anxiety from failing everyone. I sadly was raised to take on the rough situations, but I always got good at “processing” and “growing,” as it were; it’s also what led me to do things in Academia and in Business which would ALSO make people feel some kind of way about me :P
While it’s unfortunate things ended this way — and I still hold the weight of it and refuse to ignore it — I understand why it happened. In the end, it helped me remember who I was before we got caught up with the money. But the things I did AFTER Nerd Kingdom were interesting, things I still do today. And still, the things I do today either REALLY piss people off, resulting in amazing friends, or outright confuse people; but when they hear of me, or see what I have to say, or I awkwardly and playfully call out every buzzword they throw at me, they pay attention in some form.
I have been offered executive roles at HUGE studios in games, many of which you all know by name, many of which are also in the public eye for missing the mark. I have become friends with “big name Venture Capitalists” literally around the world, and I have become an advisor, mentor, colleague or friend to brilliant minds who are Loved, Hated or Ignored in the space, and I keep them close. More important to me, I connect them with one another; it’s important to me that people have a chance to connect and understand, to challenge their views, and ultimately to see that they want the same things, even if they have forgotten the path that got them to their success.
Saying Hello?
I noticed something a year ago, after about 40-ish studios or tech companies that I was being offered work at, was advising, consulting, or just visiting to say hi... they all suffered from pains they seemed to be ignoring. Their views on Data, Culture, Games, Technology, and even AI were... well, human. The very nature of the market in games, and the creative passions and pains of gamers, created a divide that results in... well, human responses, which are not so great.
As it happened, the further I looked, the more I saw little pieces of the technology we were building or trying to build at Nerd Kingdom popping up. I took the time to chat with those teams, even brought some of their own tech and tools into games at a few studios, then started formally chatting with more studios just to ask questions. Turns out, we built at Nerd Kingdom the earliest version of what a few big game studios and tech companies are doing now, but they seem to be overlooking how those pieces should connect, which simply comes from language barriers inside and no time to REALLY solve problems.
Then, those few big names from big studios out there now who know me, some famous designers, some brilliant engineers, a few investors and a handful of others put some pressure on me to get something going again. And I think it might. As it happens my meetings with people are becoming more frequent, my brain is fried, I am connecting and reconnecting with like minds in places like IBM, Qualcomm, Google, and even a few very large tech entities and it seems that with a bit of dialog we are all on the same page.
NOPE
Now, if something DOES happen, it happens because it makes sense. I would not be so bold as to promise we’d go back to making TUG — that would be tricky, though I did adore the world, which was also VERY calculated at the time — but what I can promise is that my work will always involve efforts to give more power and understanding to all of you.
If somehow I do manage to make another thing, I have to personally be more careful about who is involved, the money, and the terms. Hard lesson learned. I also have to do it in a way that is flexible enough to make it up to the many of you who invested a lot of time, or money where you could, to support what we did — and I will say it, assuredly: none of us ever forgot. And none of you were ever just “consumers” to us.
While I do have VC colleagues, even if they wanted to, you can’t fund something like this without them taking a LOT of control fast, so some of the stuff I am exploring is simply a mutually beneficial agreement with a group I know well: help them build a thing, and let me take a thing and use it to make the deal work correctly. I have been fortunate enough to have a team of people, even a few very famous influencers you wouldn’t suspect, ready to support us when the time is right.
Some of the names and people involved are easy to figure out, but others are not, and some names I am talking to now I won’t drop, because in my business world that is still rude until they are more comfortable with the idea of doing business in this... awkwardly honest way.
Playing It Cool
While Nerd Kingdom is dissolved and TECHNICALLY this is within my “Legal Rights,” I do understand that sometimes business, against its better nature, can result in “actions.” So I’ll do the same thing I do with each awkward conversation in games, just in case anyone is paying attention from the business world.
To whoever is potentially considering legal action: it would be a silly thing to do. While your legal teams may not understand this, your executives certainly know it — I pay attention to details, and I am not motivated by money, which makes trying to take legal action on me VERY silly. On the contrary, I’m collaborative, helpful, and tend to make everyone else a lot of money ALL the time, if I am given the ability to do so and their own internal politics do not devour the work I do.
In short, it’s a short-sighted move to take action on someone that you know VERY well is capable of bringing people with a lot of potential together to do things with a lot of amazing potential. I happen to have a LOT of friends in “big places” in the strangest way — I took the time to plant seeds and do a LOT of favors — so any action, I am sure, would be public for all the wrong reasons, and in the end it would only validate the work I have done in the American and European markets and the things we did and are capable of doing. So, I would rather just be friends :D
To My Nerds
To anyone who has had the patience to get this far: don’t let me forget who I am, ever again. And thank you for the weight of failure I was able to pick up, to remember it when it was important. I don’t know how yet, or when, or in what form, but something is telling me that soon I’ll be doing something of interest, with a few people of interest, with a lot of potential.
Thank you, each one of you, troll, evangelist, or otherwise, you keep me human, in the best possible ways.
0 notes