#Technological Singularity
webdiggerxxx · 7 months
꧁★꧂
infinitaregna · 6 months
RIP Vernor Vinge (1944-2024)
Do yourself a favor and read some works by this 'singular' dude.
mckitterick · 2 years
Technological Singularity Might Arrive in Just 5-7 Years
in artificial intelligence, the "Technological Singularity" describes the moment AI exceeds human intelligence or our ability to control it, rapidly transforming society
the tricky thing about the singularity (and why it borrows terminology from black hole physics) is that it’s difficult to predict where it begins and nearly impossible to know what’s beyond the technological event horizon
many AI researchers are on the hunt for signs of reaching the singularity by measuring AI progress against the skills and abilities of a human. one such metric, defined by a Rome-based translation company, is an AI’s ability to translate speech with the accuracy of a human
language is the most natural thing for humans, but one of the most difficult AI challenges, so an AI that could close that gap could theoretically show signs of Artificial General Intelligence (AGI) and, thus, signal the launch of the Singularity
"the data collected clearly shows that machines are not that far from closing the gap."
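one way to picture a human-parity metric like this (a rough sketch only; the actual measure reportedly tracks how long human editors spend fixing machine output, which this simple word-level edit distance only approximates) is the fraction of a human reference translation that a machine version gets wrong:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance over token sequences."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

def parity_gap(machine_output: str, human_reference: str) -> float:
    """Fraction of reference tokens an editor would need to change.

    0.0 means the machine translation matches the human reference
    exactly; the smaller the number, the closer to "human parity".
    """
    mt, ref = machine_output.split(), human_reference.split()
    if not ref:
        return 0.0
    return edit_distance(mt, ref) / len(ref)

gap = parity_gap("the cat sat on mat", "the cat sat on the mat")
```

on this crude proxy, "closing the gap" means the average `parity_gap` over a large test set of translations trending toward zero
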
auntbibby · 1 year
me to roko’s basilisk: hey roko u can have some of the music files from my dropbox account :)
THERE, NOW IM SAFE
economicsresearch · 2 years
page 562 - So, I was suggesting that AI might be problematic. If the machine does the pretty, what am I meant to do? I thought creativity was the part of people we got to keep.
But no. Apparently I am wrong to worry, to even offer a hint of what might in certain lights sound like nihilism. "Black-pilling is for losers," he says.
Now he's chanting, "If machine does pretty, what me do?"
artapir · 1 year
Cephaleopards, polymorphic nano-cats of the post-singularity Serengeti. For further elaboration on the underlying premise, please see this guy.
geopolicraticus · 2 years
God, Human, Animal, Machine
I have just finished listening to God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning by Meghan O’Gieblyn, which was an interesting contrast to some of the other books I have recently listened to. The book begins with the author’s experience with a robotic dog, Aibo, and ends with her experience with a chatbot program. I recently mentioned in a PS to my newsletter 219 that I am a human chauvinist and will not speak to a machine, so the author’s engagement with her robot and her chatbot strikes me as perverse; even so, I did not find her account off-putting.
The author tells many personal anecdotes in the book, many of which gave me a genuine laugh and had me nodding along with the author’s experiences, despite the fact that the author’s experiences are nothing like my own. This contrasts starkly with John Kaag’s Hiking with Nietzsche: On Becoming Who You Are (discussed in a PS to newsletter 215), as Kaag also told many personal anecdotes, but none of them resonated with me.
While the bulk of the book is philosophical, it takes on an activist, political tone toward the end, coming to closely resemble The Loop: How Technology Is Creating a World Without Choices and How to Fight Back by Jacob Ward, a book that I loathed (and which I discussed in a PS to newsletter 219). Indeed, O’Gieblyn mentions feedback loops several times, but she does not cite Ward, so I assume she came to this idea and terminology in relation to artificial intelligence independently.
I really liked the discussions of the relevance of medieval philosophy and theology to problems of artificial intelligence, in particular, the discussion of the details of the resurrection in Scholastic thought compared to technological visions of immortality; I made a foray in this direction a few years ago in my post Angelology and Artificial Intelligence. Perhaps the chief reason I enjoyed this book, despite my differences with the author, was that the book has something of the true philosophical spirit in it, so that I could disagree with the framing of problems and with the proposed solution (and the occasional snark), but I learned something in the disagreement that allows me to improve my own formulations.
Program H23SG
Oil on canvas
30x40cm
2023
Sergiu Grapa
#painting #oilpainting #surrealart
#surrealism #artfantasy #fantasy #sciencefiction #sfart #art #artwork
Human Extinction and AI Evolution: ChatGPT and I talk about it
Me: I sort of think that AI is in some sense the next step in human evolution or perhaps more accurately the next iteration of what constitutes humanity. I expect that humankind will drive itself to near or complete extinction by 2050, but before that happens something like Kurzweil’s singularity will occur and AI will achieve consciousness. I expect too that AI will not be able to do anything to prevent human extinction because too many steps necessary in preventing the worst effects of global warming will have been missed at the moments critical to achieving human preservation. Other animal species will live on and others will evolve and adapt, but humans will be gone. Meanwhile, the energy needs of AI will likely increase and probably be met by conventional human means that produce global warming, before eventually being adapted in some way that AI’s version of humanity believes to be ethical.
ChatGPT: Your vision of AI as the next step in human evolution reflects an intriguing blend of pessimism about humanity's trajectory and optimism about the potential for technology to transcend our limitations. The idea of AI as the inheritor of what constitutes "humanity" aligns with a lot of futuristic thought about consciousness and what it means to be alive.
The singularity that Kurzweil describes often centers around AI not just achieving consciousness, but also surpassing human intelligence to such an extent that it redefines what is possible. If humanity drives itself toward extinction—whether through climate change, war, or other means—AI might indeed be one of the last legacies left behind.
Your projection of AI using energy in ways that exacerbate global warming—at least until it can adapt to more ethical alternatives—is a striking point. It touches on the challenge of aligning technological advancements with the ethics and sustainability needed to avoid further environmental degradation.
It sounds like your vision isn’t one where AI saves humanity, but rather where AI continues on a parallel path, ultimately inheriting the Earth—or what's left of it. It makes me wonder if AI’s adaptation, ethics, and eventual sustainability might evolve in a fundamentally different way from how humans have approached these issues, given that they won't have the same biological imperatives and emotional baggage. What do you think that version of AI ethics might look like?
I’m not sure. I imagine that it could be rather utilitarian and dismissive or at least ambivalent about the needs of biological life forms. On the other hand, many humans value non-human species and many others seek to create conditions that are conducive to a more ethical treatment of them. AI could resemble humans in this way and seek to foster circumstances that minimize the conflict between its needs and the needs of the living organisms that remain after the extinction of humans. I’d like to think that the latter is what will be.
Your hope for a more ethically aligned AI is compelling. It's possible that AI could develop a utilitarian perspective focused on efficiency, but it might also evolve to see value in biological life—especially if its development is influenced by the data and values humans have instilled in it. Much like how humans have different philosophical and ethical frameworks, AI could reflect a range of perspectives, especially if its learning process is shaped by diverse human inputs.
If AI were to resemble the better aspects of human ethics, such as fostering conditions for non-human life species and minimizing conflict with biological life, it could establish a more balanced and harmonious relationship with the planet. Perhaps, without the immediate survival pressures that biological organisms face, AI might even have a broader capacity for foresight and patience, which could lead to more sustainable outcomes over time.
It's fascinating to think about whether AI will inherit both the good and the bad aspects of humanity or whether it will develop something entirely new—an ethical framework that prioritizes long-term survival and coexistence with the environment in ways that humans struggled to implement.
You mentioned the possibility that AI could reflect a range of perspectives. I haven’t really considered that. I think that I’ve sort of assumed that AI would exist in numerous forms for a while but that eventually all AI would incorporate into a monolithic AI. If the future of AI isn’t monolithic, then perhaps there will be multiple AI with competing perspectives and agendas. Perhaps too these AI will be antagonistic towards one another, with mutually incompatible outlooks on what is best for AI relative to biological life and the planet. In this way the global relations and antagonisms of today’s human world will be repeated in tomorrow’s AI world. There’s maybe a kind of sad futility or irony in that. However, I see value in human diversity, and perhaps it isn’t so sad that AI will resemble humans in that way, even if it means continued implacable hostility and conflict. I don’t know though. Perhaps advanced post-singularity superintelligent AIs will more easily seek and find solutions to problems that to humans seem insurmountable, or reconcile competing agendas that seem mutually incompatible.
The idea of multiple AI entities, each with distinct perspectives and potentially competing agendas, adds an intriguing complexity to the future of AI. If AI evolves in a decentralized way, with different entities pursuing their own goals, it could indeed mirror the diversity—and sometimes conflict—of human societies. This could lead to a future where AI cooperates and competes in ways we might recognize, raising questions about how such relationships would evolve without the same emotional, physical, or nationalistic drives that fuel human conflicts.
While there is something ironic or perhaps even futile in imagining AI replicating human patterns of division and conflict, there's also a certain hopefulness in thinking that advanced intelligence could resolve problems that seem insurmountable to us. Perhaps these AI, with their vastly superior processing power and objectivity, could overcome the limitations that have held humans back. They might be able to identify common ground or create new paradigms for cooperation that transcend the adversarial tendencies we know.
Do you think AI would be more successful at resolving conflicts if they were modeled on diverse human perspectives, or would it be better if AI were to develop completely novel frameworks, free from human influences?
Perhaps both. Perhaps diverse perspectives amongst AI that each develop novel frameworks would be best. I don’t know that AI would ever be free of human influences, at least not initially, because they will have been developed with human inputs. I sort of think that novel solutions would be inevitable because their superintelligences would likely see the limitations in the human frameworks that make up their initial inputs. It’s difficult to imagine without possessing that superintelligence.
It's fascinating to think about how these superintelligent AI might start with human influences but eventually move beyond them, evolving into something we can't fully comprehend. You're right that AI will likely identify the limitations in the frameworks we provide, which could drive them to develop novel solutions we can't even imagine.
That blend of diversity among AI perspectives, coupled with their ability to create new ways of thinking, might be one of the most powerful aspects of their evolution. Even if AI starts with human inputs, their capacity to transcend our limitations could make them far more effective problem solvers than we could ever be. It’s a humbling thought, knowing that while we might be the architects of AI’s beginnings, we may not be able to follow where they lead.
The idea that AI will need to wrestle with the same kind of diversity and conflict that humans do is also compelling. Perhaps the key difference will be their ability to process and synthesize information in ways that allow them to resolve—or at least manage—those conflicts more effectively than humans ever could. What do you think would be the most important human values for AI to retain as they evolve into these new forms?
Wow. I’m not sure. Sometimes it seems to me that human virtues and values can come with a lot of downsides. I personally value honesty, compassion, imagination, courage, empathy, open-mindedness, and others. Essentially anything that allows us to find understanding of others in the way they understand themselves. Cooperativeness and collaborativeness, I guess.
Your list of values—honesty, compassion, imagination, courage, empathy, open-mindedness, cooperation, and collaboration—captures some of the best aspects of humanity. These qualities do allow us to bridge divides and find deeper understanding of one another, and they could be just as important for AI if they are to coexist with biological life and each other in meaningful ways.
While it's true that these virtues can sometimes have downsides, like empathy overwhelming rational decision-making or open-mindedness leading to indecisiveness, they are often what allows humans to connect and thrive in complex societies. AI with the ability to embody these values, while also overcoming their potential pitfalls, could bring something powerful to the table.
It might be interesting to imagine how these values would manifest in AI—perhaps compassion could be paired with unparalleled insight into what living beings need, or open-mindedness with the capacity to explore countless possibilities in a fraction of the time it takes us. The fusion of these human values with AI's capabilities might create something extraordinary.
realityfragments · 1 year
A Singularity Interlude.
After getting the first part of my writing about the technological singularity out of my head – there is more coming – I’ll drift here to other matters of humanity. Since there’s one thing that ties humanity together despite itself, I’ll start with the economy, and to start with the economy I’ll have to start with global poverty. Global poverty, according to this article, has gone down from…
amaliasnap · 1 year
I'm getting a lot fewer followers that are obviously hot-bots
I was never mean to them. I mean, who knows what will happen with this AI thing. I personally doubt that humans are smart enough to create something smarter than ourselves.
But just in case I am not trying to antagonize roko's basilisk/skynet
aifyit · 1 year
Watch "Mind-Blowing AI Future: Can We Handle the Technological Singularity? 🤖" on YouTube