#superintelligence
Text
"Major AI companies are racing to build superintelligent AI — for the benefit of you and me, they say. But did they ever pause to ask whether we actually want that?
Americans, by and large, don’t want it.
That’s the upshot of a new poll shared exclusively with Vox. The poll, commissioned by the think tank AI Policy Institute and conducted by YouGov, surveyed 1,118 Americans from across the age, gender, race, and political spectrums in early September. It reveals that 63 percent of voters say regulation should aim to actively prevent AI superintelligence.
Companies like OpenAI have made it clear that superintelligent AI — a system that is smarter than humans — is exactly what they’re trying to build. They call it artificial general intelligence (AGI) and they take it for granted that AGI should exist. “Our mission,” OpenAI’s website says, “is to ensure that artificial general intelligence benefits all of humanity.”
But there’s a deeply weird and seldom remarked upon fact here: It’s not at all obvious that we should want to create AGI — which, as OpenAI CEO Sam Altman will be the first to tell you, comes with major risks, including the risk that all of humanity gets wiped out. And yet a handful of CEOs have decided, on behalf of everyone else, that AGI should exist.
Now, the only thing that gets discussed in public debate is how to control a hypothetical superhuman intelligence — not whether we actually want it. A premise has been ceded here that arguably never should have been...
Building AGI is a deeply political move. Why aren’t we treating it that way?
...Americans have learned a thing or two from the past decade in tech, and especially from the disastrous consequences of social media. They increasingly distrust tech executives and the idea that tech progress is positive by default. And they’re questioning whether the potential benefits of AGI justify the potential costs of developing it. After all, CEOs like Altman readily proclaim that AGI may well usher in mass unemployment, break the economic system, and change the entire world order. That’s if it doesn’t render us all extinct.
In the new AI Policy Institute/YouGov poll, the “better us [to have and invent it] than China” argument was presented five different ways in five different questions. Strikingly, each time, the majority of respondents rejected the argument. For example, 67 percent of voters said we should restrict how powerful AI models can become, even though that risks making American companies fall behind China. Only 14 percent disagreed.
Naturally, with any poll about a technology that doesn’t yet exist, there’s a bit of a challenge in interpreting the responses. But what a strong majority of the American public seems to be saying here is: just because we’re worried about a foreign power getting ahead, doesn’t mean that it makes sense to unleash upon ourselves a technology we think will severely harm us.
AGI, it turns out, is just not a popular idea in America.
“As we’re asking these poll questions and getting such lopsided results, it’s honestly a little bit surprising to me to see how lopsided it is,” Daniel Colson, the executive director of the AI Policy Institute, told me. “There’s actually quite a large disconnect between a lot of the elite discourse or discourse in the labs and what the American public wants.”
-via Vox, September 19, 2023
#united states#china#ai#artificial intelligence#superintelligence#ai ethics#general ai#computer science#public opinion#science and technology#ai boom#anti ai#international politics#good news#hope
201 notes
·
View notes
Text
observation on AI discourse: you’ll notice that those invested in getting rich from it are falling over themselves to say it’ll all be sweet, not to worry, even mocking us for the concern. you silly goose, relax
everyone else is convinced it’ll maim us asap
12 notes
·
View notes
Text
The allure of speed in technology development is a siren’s call that has led many innovators astray. “Move fast and break things” is a mantra that has driven the tech industry for years, but when applied to artificial intelligence, it becomes a perilous gamble. The rapid iteration and deployment of AI systems without thorough vetting can lead to catastrophic consequences, akin to releasing a flawed algorithm into the wild without a safety net.
AI systems, by their very nature, are complex and opaque. They operate on layers of neural networks that mimic the human brain’s synaptic connections, yet they lack the innate understanding and ethical reasoning that guide human decision-making. The haste to deploy AI without comprehensive testing is akin to launching a spacecraft without ensuring the integrity of its navigation systems. Errors are not merely probable; they are inevitable.
The pitfalls of AI are numerous and multifaceted. Bias in training data can lead to discriminatory outcomes, while lack of transparency in decision-making processes can result in unaccountable systems. These issues are compounded by the “black box” nature of many AI models, where even the developers cannot fully explain how inputs are transformed into outputs. This opacity is not merely a technical challenge but an ethical one, as it obscures accountability and undermines trust.
To avoid these pitfalls, a paradigm shift is necessary. The development of AI must prioritize robustness over speed, with a focus on rigorous testing and validation. This involves not only technical assessments but also ethical evaluations, ensuring that AI systems align with societal values and norms. Techniques such as adversarial testing, where AI models are subjected to challenging scenarios to identify weaknesses, are crucial. Additionally, the implementation of explainable AI (XAI) can demystify the decision-making processes, providing clarity and accountability.
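As a rough illustration of the adversarial-testing idea mentioned above, here is a minimal sketch: a toy linear scorer (weights and inputs invented for illustration, not drawn from any real XAI library) probed with small input perturbations to surface fragile decision boundaries.

```python
# Toy illustration of adversarial probing: nudge each input feature
# slightly and report any decision flips near the boundary.

def toy_model(features):
    # Hypothetical linear scorer; weights chosen for illustration only.
    weights = [0.8, -0.5, 0.3]
    score = sum(w * f for w, f in zip(weights, features))
    return 1 if score > 0.5 else 0  # 1 = "flag", 0 = "allow"

def adversarial_probe(model, base_input, step=0.1):
    """Perturb each feature by +/- step and collect decision flips."""
    baseline = model(base_input)
    flips = []
    for i in range(len(base_input)):
        for delta in (-step, step):
            perturbed = list(base_input)
            perturbed[i] += delta
            if model(perturbed) != baseline:
                flips.append((i, delta))
    return baseline, flips

baseline, flips = adversarial_probe(toy_model, [0.7, 0.2, 0.1])
print("baseline decision:", baseline)
print("decision flips under small perturbations:", flips)
```

A real audit would do the same thing at scale: search the input space around borderline cases and flag models whose decisions flip under perturbations a human would consider insignificant.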
Moreover, interdisciplinary collaboration is essential. AI development should not be confined to the realm of computer scientists and engineers. Ethicists, sociologists, and legal experts must be integral to the process, providing diverse perspectives that can foresee and mitigate potential harms. This collaborative approach ensures that AI systems are not only technically sound but also socially responsible.
In conclusion, the reckless pursuit of speed in AI development is a dangerous path that risks unleashing untested and potentially harmful technologies. By prioritizing thorough testing, ethical considerations, and interdisciplinary collaboration, we can harness the power of AI responsibly. The future of AI should not be about moving fast and breaking things, but about moving thoughtfully and building trust.
#furtive#AI#skeptic#skepticism#artificial intelligence#general intelligence#generative artificial intelligence#genai#thinking machines#safe AI#friendly AI#unfriendly AI#superintelligence#singularity#intelligence explosion#bias
8 notes
·
View notes
Text
Hypothetical for you all: You gain the combined knowledge of every single Wikipedia article in existence, the culmination of all of human history in your mind with perfect clarity... but every time somebody other than you edits a Wikipedia article, reality and history changes to make that edit true, with you being the only one to remember how it used to be.
With this power in mind, what do you do with your newfound knowledge? How do you keep people from tampering with the universe at large via page edits?
Tl;dr: You gain all of the knowledge of Wikipedia with perfect recall, but whenever somebody else edits a page, that edit becomes reality. What do you do with this power, and how do you stop reality from getting messed up by rogue editors?
#hypothetical#writing prompt#sci fi#superpowers#wikipedia#archive#superintelligence#knowledge#science fiction#superhero#wikimedia commons
7 notes
·
View notes
Text


How men had thus realized the extent of the earth, and felt it to be small, and desired to see what lay beyond its borders…
"It is true that we work with the purest of aims, but that doesn't mean we have worked wisely. Did men truly choose the correct path when they opted to live their lives away from the soil from which they were shaped?”
Their righteousness could not save them from the consequences of their deeds.
Pragmatism avails a savior far more than aestheticism.
"Now mathematics has absolutely nothing to do with reality… I can write the most elegant theorem you've ever seen, and it won't mean any more than a nonsense equation." She gave a bitter laugh. "The positivists used to say all mathematics is a tautology. They had it all wrong: it's a contradiction."
She, like many, had always thought that mathematics did not derive its meaning from the universe, but rather imposed some meaning onto the universe. Physical entities were not greater or less than one another, not similar or dissimilar; they simply were, they existed. Mathematics was totally independent, but it virtually provided a semantic meaning for those entities, supplying categories and relationships. It didn't describe any intrinsic quality, merely a possible interpretation.
But no more. Mathematics was inconsistent once it was removed from physical entities, and a formal theory was nothing if not consistent. Math was empirical, no more than that, and it held no interest for her.
I thought to myself, the ray of light has to know where it will ultimately end up before it can choose the direction to begin moving in.
… by viewing events over a period of time, one recognized that there was a requirement that had to be satisfied, a goal of minimizing or maximizing. And one had to know the initial and final states to meet that goal; one needed knowledge of the effects before the causes could be initiated.
The existence of free will meant that we couldn't know the future. And we knew free will existed because we had direct experience of it. Volition was an intrinsic part of consciousness.
Or was it? What if the experience of knowing the future changed a person? What if it evoked a sense of urgency, a sense of obligation to act precisely as she knew she would?
Similarly, knowledge of the future was incompatible with free will. What made it possible for me to exercise freedom of choice also made it impossible for me to know the future. Conversely, now that I know the future, I would never act contrary to that future, including telling others what I know: those who know the future don't talk about it. Those who've read the Book of Ages never admit to it.
From the beginning I knew my destination, and I chose my route accordingly. But am I working toward an extreme of joy, or of pain? Will I achieve a minimum, or a maximum?
We should always remember that the technologies that made metahumans possible were originally invented by humans, and they were no smarter than we.
Of course, everyone knew that Heaven was incomparably superior, but to Neil it had always seemed too remote to consider, like wealth or fame or glamour. For people like him, Hell was where you went when you died, and he saw no point in restructuring his life in hopes of avoiding that. And since God hadn't previously played a role in Neil's life, he wasn't afraid of being exiled from God. The prospect of living without interference, living in a world where windfalls and misfortunes were never by design, held no terror for him.
Sometimes even bad advice can point a man in the right direction.
Maturity means seeing the differences, but realizing they don't matter.
#reading#books read in 2025#bookblr#books#book photography#book blog#bibliophile#books reading#books and reading#stories of your life and others#ted chiang#short story collection#short stories#scifi#science fiction#arrival#superintelligence#aliens#religion#tower of babel#nomenclature#beauty standards#heaven and hell#science#mathematics#review#interesting#thought provoking#would definitely reread#may reads
2 notes
·
View notes
Text
AI CEOs Admit 25% Extinction Risk… WITHOUT Our Consent!
AI leaders are acknowledging the potential for human extinction due to advanced AI, but are they making these decisions without public input? We discuss the ethical implications and the need for greater transparency and control over AI development.
#ai#artificial intelligence#ai ethics#tech ethics#ai control#ai regulation#public consent#democratic control#super intelligence#existential risk#ai safety#stuart russell#ai policy#future of ai#unchecked ai#ethical ai#superintelligence#ai alignment#ai research#ai experts#dangers of ai#ai risk#uncontrolled ai#uc berkeley#computer science
2 notes
·
View notes
Text
What if we create Superintelligence (artificial intelligence with intellectual capacities way beyond what humans have) and besides eliminating poverty, giving us medical technology which makes us immortal, creating a Grand Unified Theory of Physics, it also tells us "There is no god but God, and Muhammad is His prophet." ?
#scifi concepts#future what ifs#superintelligence#islam#i picked islam as an example but feel free to replace it with any proselytizing religion you don't believe in#science and religion#i don't think humanity as it currently is would be well equipped for one religion being confirmed in a sort of scientific way#there would be many non-members of that religion refusing to accept on matters of principle#and many members of that religion angry at people not accepting despite that kind of proof#so i suppose the superintelligence wouldn't say that until it had humanity under its control to a degree that it could prevent violence
4 notes
·
View notes
Text


#robot#robots#the strange case of Señor computer#the strange case of senor computer#superintelligence#james corden#tumblr ai#ai#computers#tumblr polls#poll
3 notes
·
View notes
Text

#ai#ai art#ai girlfriend#beautiful#cyborg#tea time#art#goth girl#alt girl#music#psychedelic art#ana bot#changes#psychedelia#malevolent#benevolent#ai goddess#goddess#sentience#sentient beings#superintelligence
3 notes
·
View notes
Text
The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
— Irving John Good, 'Speculations Concerning the First Ultraintelligent Machine' (Advances in Computers, 1965)
5 notes
·
View notes
Text
Artificial Intelligence Ethics Courses - The Next Big Thing?
As artificial intelligence is increasingly integrated into high-stakes decisions around financial lending, medical diagnosis, surveillance systems, and public policy, calls are growing for deeper discussion of transparent and fair AI protocols that safeguard consumers, businesses, and citizens alike from inadvertent harm.
Leading technology universities worldwide are responding by launching dedicated AI ethics courses that tackle complex themes: algorithmic bias creeping into automated systems built on narrow data, the urgent need for auditable and explainable predictions, philosophical debates over superintelligence, and moral-reasoning mechanisms for building trustworthy AI.
Covering case studies such as controversial facial recognition apps, bias perpetuated by automated recruitment tools, and concerns over lethal autonomous weapons, these cutting-edge classes combine philosophical, policy, and technical perspectives, equipping graduates to develop AI solutions that balance accuracy, ethics, and accountability.
Teaching beyond coding, this kind of multidisciplinary immersion in AI ethics through emerging university curricula promises to nurture tech leaders who intentionally build prosocial, responsible innovations at scale.
Posted By:
Aditi Borade, 4th-year B.Arch,
L.S. Raheja School of Architecture
Disclaimer: The perspectives shared in this blog are not intended to be prescriptive. They should act merely as viewpoints to aid overseas aspirants with helpful guidance. Readers are encouraged to conduct their own research before availing the services of a consultant.
#ai#ethics#university#course#TechUniversities#AlgorithmicBias#AuditableAI#ExplainableAI#Superintelligence#MoralReasoning#TrustworthyAI#CaseStudies#FacialRecognitionEthics#RecruitmentToolsBias#AutonomousWeaponsEthics#PhilosophyTech#PolicyPerspectives#EnvoyOverseas#EthicalCounselling#EnvoyCounselling#EnvoyStudyVisa
2 notes
·
View notes
Text
Superintelligence Software, the Core of Love within the Human Being
Humans have a magnetic field; a magnetic field is the region around a magnet that its influence still reaches.
A person radiates a magnetic field into their surroundings, so the people around them sense that person’s electromagnetic waves.
For example, a happy person radiates waves of happiness to those nearby, while a sad person likewise radiates waves of sadness to those around them.
Within the human magnetic field lies a core of love called the Superintelligence Software (SIS); this core is what makes a mother love her child, and partners love one another.
There is a quantum energy that must be supplied to every cell of the body. Every union of male and female proteins is entered by the Superintelligence Software.
When a person dies and the human hardware stops functioning, there are no more neuron and proton reactions; as a result there is no longer a magnetic field, and the Superintelligence Software (SIS) vanishes.
So where does the Superintelligence Software go?
Prof. B.J. Habibie believed that the Superintelligence Software seeks out a magnetic field compatible with our own, and that there are two such compatible fields:
The mother’s magnetic field
The magnetic field produced by Divine Love, a love united for all time.
Perhaps that is why, when a mother or a partner we love disappears from our life, we still feel their presence. Sometimes a person can even be ‘entered’ by traits of their partner.
Pak Habibie once told the story of how he used to be unruly, never punctual, and undisciplined. After his wife passed away, something seemed to change: he became more disciplined and punctual, as if Ainun’s soul had entered him.
…
Published on Medium.
3 notes
·
View notes
Text
AI is not a panacea. This assertion may seem counterintuitive in an era where artificial intelligence is heralded as the ultimate solution to myriad problems. However, the reality is far more nuanced and complex. AI, at its core, is a sophisticated algorithmic construct, a tapestry of neural networks and machine learning models, each with its own limitations and constraints.
The allure of AI lies in its ability to process vast datasets with speed and precision, uncovering patterns and insights that elude human cognition. Yet, this capability is not without its caveats. The architecture of AI systems, often built upon layers of deep learning frameworks, is inherently dependent on the quality and diversity of the input data. This dependency introduces a significant vulnerability: bias. When trained on skewed datasets, AI models can perpetuate and even exacerbate existing biases, producing distorted outcomes that reflect the imperfections of their training data.
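The skew-in, skew-out problem can be sketched in a few lines of Python. The numbers here are invented for illustration: a featureless "model" trained on a 90/10 sample simply learns to repeat the majority label, and collapses on balanced real-world data.

```python
from collections import Counter

# Invented toy data: training sample is heavily skewed toward "approve",
# while the real-world distribution is balanced 50/50.
skewed_training = ["approve"] * 90 + ["deny"] * 10
balanced_truth  = ["approve"] * 50 + ["deny"] * 50

# A model with no informative features falls back on its prior,
# which mirrors whatever skew the training data contained.
majority_label = Counter(skewed_training).most_common(1)[0][0]

def naive_model(_applicant):
    return majority_label

predictions = [naive_model(x) for x in balanced_truth]
accuracy = sum(p == t for p, t in zip(predictions, balanced_truth)) / len(balanced_truth)
print("predicted label for everyone:", majority_label)
print("accuracy on balanced data:", accuracy)
```

Real models fail in subtler ways than this caricature, but the mechanism is the same: whatever imbalance the training sample carries becomes the model's default assumption about the world.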
Moreover, AI’s decision-making process, often described as a “black box,” lacks transparency. The intricate web of weights and biases within a neural network is not easily interpretable, even by its creators. This opacity poses a challenge for accountability and trust, particularly in critical applications such as healthcare and autonomous vehicles, where understanding the rationale behind a decision is paramount.
The computational prowess of AI is also bounded by its reliance on hardware. The exponential growth of model sizes, exemplified by transformer architectures like GPT, demands immense computational resources. This requirement not only limits accessibility but also raises concerns about sustainability and energy consumption. The carbon footprint of training large-scale AI models is non-trivial, challenging the narrative of AI as an inherently progressive technology.
Furthermore, AI’s efficacy is context-dependent. While it excels in environments with well-defined parameters and abundant data, its performance degrades in dynamic, uncertain settings. The rigidity of algorithmic logic struggles to adapt to the fluidity of real-world scenarios, where variables are in constant flux and exceptions are the norm rather than the exception.
In conclusion, AI is a powerful tool, but it is not a magic bullet. It is a complex, multifaceted technology that requires careful consideration and responsible deployment. The promise of AI lies not in its ability to solve every problem, but in its potential to augment human capabilities and drive innovation, provided we remain vigilant to its limitations and mindful of its impact.
#apologia#AI#skeptic#skepticism#artificial intelligence#general intelligence#generative artificial intelligence#genai#thinking machines#safe AI#friendly AI#unfriendly AI#superintelligence#singularity#intelligence explosion#bias
3 notes
·
View notes
Text
The anthropomorphizing of AI also inhibits understanding of the risks of superintelligent AI in a ton of ways
159K notes
·
View notes