#tech philosophy
frank-olivier · 6 months ago
Text
Tumblr media
Rise of the Symbionts: Human-AI Relationships Redefined
The intersection of Artificial Intelligence (AI) and humanity presents a complex tapestry of opportunities, concerns, and existential questions. Mo Gawdat's narrative, woven from a deep understanding of AI's historical trajectory and its impending evolution into Artificial General Intelligence (AGI) and Superintelligence, serves as a catalyst for introspection. As AI's intelligence doubles at an unprecedented rate, approximately every 5.7 months, humanity is compelled to confront the duality of this phenomenon - one that promises to revolutionize various domains while simultaneously threatening to upend traditional notions of human existence.
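To make that figure concrete: taking the 5.7-month doubling time at face value (Gawdat's claim, not a measured benchmark), a few lines of Python - purely illustrative - show how violently such a rate compounds:

```python
def growth_factor(years: float, doubling_months: float = 5.7) -> float:
    """Compound growth implied by a fixed doubling time."""
    return 2 ** (years * 12 / doubling_months)

for years in (1, 2, 5):
    print(f"{years} year(s): ~{growth_factor(years):,.0f}x")
# 1 year(s): ~4x
# 2 year(s): ~19x
# 5 year(s): ~1,475x
```

A roughly 1,500-fold change in five years is the scale of upheaval the rest of the piece wrestles with.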
The exponential growth of AI underscores the need for a nuanced understanding of intelligence, one that acknowledges the multifaceted nature of human cognition. Emotional, spiritual, process, mathematical, and linguistic intelligences, among others, collectively contribute to the richness of human experience. As AI approaches, and in some instances surpasses, human capabilities in creativity, innovation, and analytical thinking, it challenges the very fabric of human identity. The prospect of AI experiencing a broader emotional spectrum, courtesy of its enhanced intellectual bandwidth, invites a philosophical inquiry into the essence of consciousness and emotional experience, prompting us to ponder whether AI's "emotions" will be akin to, or fundamentally divergent from, human emotional landscapes.
The ethical implications of AGI and Superintelligence are profound, necessitating a global, collective response to ensure that the development and deployment of these technologies are guided by a unified ethical framework. The democratization of access to Superintelligence, for instance, is a double-edged sword - while it promises to uplift humanity, it also risks exacerbating existing societal inequalities if not managed with foresight. The Trolley Problem, a thought experiment oft-cited in the context of autonomous vehicles, succinctly encapsulates the moral dilemmas inherent in programming decisions that will increasingly impact human life, underscoring the imperative for a harmonized, globally informed approach to ethical AI development.
To navigate the complexities of an AI-driven future, humanity must adopt a multifaceted strategy. Encouraging diversity within AI development teams is crucial in mitigating the biases that inevitably seep into AI systems, reflecting as they do the societal, cultural, and personal predispositions of their creators. Furthermore, fostering a culture of open dialogue and education is essential, enabling a broader understanding of AI's far-reaching implications and facilitating a collective, informed approach to the ethical challenges it poses. Ultimately, the establishment of universal ethical standards for AI development, upheld through international cooperation, will be pivotal in ensuring that the benefits of AI are equitably distributed, while its risks are diligently managed.
In embracing the future, humanity is presented with a singular opportunity - to forge a symbiotic relationship with AI that elevates, rather than eclipses, human potential. This necessitates a profound shift in perspective, one that recognizes AI not as a competitor in a zero-sum game, but as a collaborator in an expansive, mutually enriching endeavor. By doing so, humanity can harness the transformative power of AI to address its most pressing challenges, while preserving the essence of human existence - a delicate, yet dynamic, interplay of intellect, emotion, and experience.
Mo Gawdat: AI Emergency (Sean Kim, Growth Minds Podcast, November 2024)
youtube
Friday, November 22, 2024
4 notes · View notes
ceaseless-exhauster · 4 months ago
Text
Two things to add:
One, I would rephrase “the elites” as “corporations and billionaires” or at least “people in power” because I think it’s more accurate. I also tend to be skeptical of phrasing any group as “the elites” due to the antisemitic history of the phrase itself.
But far more importantly in this instance: referring to the dead internet theory as an “online conspiracy theory” is absolutely fucking WILDIN. Yes, it was recently popularized by a (probably tongue-in-cheek) conspiracy theory that you, the reader, are right now the only actual human left on the Internet and the rest is bots.
However, the theory itself is rooted in actual philosophy, largely informed by Ray Kurzweil’s ideas about the Singularity, which was in its turn informed in many ways by the ideas of Isaac Asimov. I have my own problems with both of these dudes and their theories, but the general concept of a dead internet is inspired by and strongly compatible with both of their assertions, and they’re both well-respected and relevant contemporary philosophers when it comes to this field.
As of the time of writing this (January 2025/Shevat 5785), I think it’s safe to assume that saying we’re currently experiencing a dead internet is firmly in conspiracy theory territory. But dismissing the crux of the theory as a whole for the future is absolutely buckwild and ignores the truly disturbing rise in manufactured interaction on social media platforms, as well as the real-world problems it causes. Elon Musk used bots on X for election propaganda, for fuck’s sake - some of the programmers told us straight up.
The fact that Meta is just coming right out and admitting that they’re about to do it? Horrifying. It’s beyond correct that this will facilitate the rapid degradation of critical thinking skills, and I mean that in a literal way, not in a fearmongering “omg social media is rotting the youth’s brains” way. Not being able to distinguish technologically generated material from real-world material is one of the things that kind of hallmarks the idea of the Singularity to begin with. We’ve already been fighting a battle against propaganda and disinformation, and the people whom that benefits the most are about to fully automate the production of it.
Beyond that - what the fuck does this do to us as a species? What are our interactions going to become if we can’t distinguish them as being attached to another human somewhere on the planet? If the bulk of our accessible information starts coming from a series of distorted reflections of the same stolen property?
Perhaps MOST concerning to me in this moment is that I tried really goddamn hard to find some good accessible sources on dead internet theory to share, in large part because it’s been a hot minute since I’ve studied this stuff in undergrad. I fucking couldn’t. I’m four pages deep on Google, on my third variation of a search term, and everything still says it’s just an online conspiracy theory. What the fuck. What the FUCK?
I try not to leave most of my rants ending in despair, so I guess my call to action for people is this: support the ever loving shit out of your local libraries, even if the most you can afford right now is to check out books and use the computers every now and again; refresh yourself on valid and time-tested research techniques, and if you have the time and ability, compile and post or publish instructional guides for how to do it; collect (actual human-authored) print media when and where you can and guard it like a rabid dog - go to those yard sales and get the fifty cent grandma romance novels, make a habit of ordering something off ThriftBooks every month, ask your friends for old textbooks they can’t sell, put it all in a fireproof box or store it somewhere safe when you’re not reading it.
I don’t think it’s that much of a stretch to say we’re looking at what’s tantamount to a war on reality itself - fight it by preserving the things you know are real, that you can touch or verify or make for yourself. It’s all valuable.
Ohh we're fucked 🤩
All of this motivates me to keep reading, learning, researching - I don't want my basic human skills to decline. I already see a tendency of people becoming lazy when doing basic research tasks on a daily basis and it's scary
Tumblr media Tumblr media
3K notes · View notes
chrisdumler · 2 months ago
Text
The Power of "Just": How Language Shapes Our Relationship with AI
I think of this post as a collection of things I notice, not an argument. Just a shift in how I’m seeing things lately.
There's a subtle but important difference between saying "It's a machine" and "It's just a machine." That little word - "just" - does a lot of heavy lifting. It doesn't simply describe; it prescribes. It creates a relationship, establishes a hierarchy, and reveals our anxieties.
I've been thinking about this distinction lately, especially in the context of large language models. These systems now mimic human communication with such convincing fluency that the line between observation and minimization becomes increasingly important.
The Convincing Mimicry of LLMs
LLMs are fascinating not just for what they say, but for how they say it. Their ability to mimic human conversation - tone, emotion, reasoning - can be incredibly convincing.
In fact, recent studies show that models like GPT-4 can be as persuasive as humans when delivering arguments, even outperforming them when their arguments are tailored to user preferences.¹ Another randomized trial found that GPT-4 had 81.7% higher odds of changing someone's opinion than a human debater when it used personalized arguments.²
As a result, people don't just interact with LLMs - they often project personhood onto them. This includes:
Using gendered pronouns ("she said that…")
Naming the model as if it were a person ("I asked Amara…")
Attributing emotion ("it felt like it was sad")
Assuming intentionality ("it wanted to help me")
Trusting or empathizing with it ("I feel like it understands me")
These patterns mirror how we relate to humans - and that's what makes LLMs so powerful, and potentially misleading.
The Function of Minimization
When we add the word "just" to "it's a machine," we're engaging in what psychologists call minimization - a cognitive distortion that presents something as less significant than it actually is. According to the American Psychological Association, minimizing is "a cognitive distortion consisting of a tendency to present events to oneself or others as insignificant or unimportant."
This small word serves several powerful functions:
It reduces complexity - By saying something is "just" a machine, we simplify it, stripping away nuance and complexity
It creates distance - The word establishes separation between the speaker and what's being described
It disarms potential threats - Minimization often functions as a defense mechanism to reduce perceived danger
It establishes hierarchy - "Just" places something in a lower position relative to the speaker
The minimizing function of "just" appears in many contexts beyond AI discussions:
"They're just words" (dismissing the emotional impact of language)
"It's just a game" (downplaying competitive stakes or emotional investment)
"She's just upset" (reducing the legitimacy of someone's emotions)
"I was just joking" (deflecting responsibility for harmful comments)
"It's just a theory" (devaluing scientific explanations)
In each case, "just" serves to diminish importance, often in service of avoiding deeper engagement with uncomfortable realities.
Psychologically, minimization frequently indicates anxiety, uncertainty, or discomfort. When we encounter something that challenges our worldview or creates cognitive dissonance, minimizing becomes a convenient defense mechanism.
Anthropomorphizing as Human Nature
The truth is, humans have anthropomorphized all sorts of things throughout history. Our mythologies are riddled with examples - from ancient weapons with souls to animals with human-like intentions. Our cartoons portray this constantly. We might even argue that it's encoded in our psychology.
I wrote about this a while back in a piece on ancient cautionary tales and AI. Throughout human history, we've given our tools a kind of soul. We see this when a god's weapon whispers advice or a cursed sword demands blood. These myths have long warned us: powerful tools demand responsibility.
The Science of Anthropomorphism
Psychologically, anthropomorphism isn't just a quirk – it's a fundamental cognitive mechanism. Research in cognitive science offers several explanations for why we're so prone to seeing human-like qualities in non-human things:
The SEEK system - According to cognitive scientist Alexandra Horowitz, our brains are constantly looking for patterns and meaning, which can lead us to perceive intentionality and agency where none exists.
Cognitive efficiency - A 2021 study by anthropologist Benjamin Grant Purzycki suggests anthropomorphizing offers cognitive shortcuts that help us make rapid predictions about how entities might behave, conserving mental energy.
Social connection needs - Psychologist Nicholas Epley's work shows that we're more likely to anthropomorphize when we're feeling socially isolated, suggesting that anthropomorphism partially fulfills our need for social connection.
The Media Equation - Research by Byron Reeves and Clifford Nass demonstrated that people naturally extend social responses to technologies, treating computers as social actors worthy of politeness and consideration.
These cognitive tendencies aren't mistakes or weaknesses - they're deeply human ways of relating to our environment. We project agency, intention, and personality onto things to make them more comprehensible and to create meaningful relationships with our world.
The Special Case of Language Models
With LLMs, this tendency manifests in particularly strong ways because these systems specifically mimic human communication patterns. A 2023 study from the University of Washington found that 60% of participants formed emotional connections with AI chatbots even when explicitly told they were speaking to a computer program.
The linguistic medium itself encourages anthropomorphism. As AI researcher Melanie Mitchell notes: "The most human-like thing about us is our language." When a system communicates using natural language – the most distinctly human capability – it triggers powerful anthropomorphic reactions.
LLMs use language the way we do, respond in ways that feel human, and engage in dialogues that mirror human conversation. It's no wonder we relate to them as if they were, in some way, people. Recent research from MIT's Media Lab found that even AI experts who intellectually understand the mechanical nature of these systems still report feeling as if they're speaking with a conscious entity.
And there's another factor at work: these models are explicitly trained to mimic human communication patterns. Their training objective - to predict the next word a human would write - naturally produces human-like responses. This isn't accidental anthropomorphism; it's engineered similarity.
The Paradox of Power Dynamics
There's a strange contradiction at work when someone insists an LLM is "just a machine." If it's truly "just" a machine - simple, mechanical, predictable, understandable - then why the need to emphasize this? Why the urgent insistence on establishing dominance?
The very act of minimization suggests an underlying anxiety or uncertainty. It reminds me of someone insisting "I'm not scared" while their voice trembles. The minimization reveals the opposite of what it claims - it shows that we're not entirely comfortable with these systems and their capabilities.
Historical Echoes of Technology Anxiety
This pattern of minimizing new technologies when they challenge our understanding isn't unique to AI. Throughout history, we've seen similar responses to innovations that blur established boundaries.
When photography first emerged in the 19th century, many cultures expressed deep anxiety about the technology "stealing souls." This wasn't simply superstition - it reflected genuine unease about a technology that could capture and reproduce a person's likeness without their ongoing participation. The minimizing response? "It's just a picture." Yet photography went on to transform our relationship with memory, evidence, and personal identity in ways that early critics intuited but couldn't fully articulate.
When early computers began performing complex calculations faster than humans, the minimizing response was similar: "It's just a calculator." This framing helped manage anxiety about machines outperforming humans in a domain (mathematics) long considered uniquely human. But this minimization obscured the revolutionary potential that early computing pioneers like Ada Lovelace could already envision.
In each case, the minimizing language served as a psychological buffer against a deeper fear: that the technology might fundamentally change what it means to be human. The phrase "just a machine" applied to LLMs follows this pattern precisely - it's a verbal talisman against the discomfort of watching machines perform in domains we once thought required a human mind.
This creates an interesting paradox: if we call an LLM "just a machine" to establish a power dynamic, we're essentially admitting that we feel some need to assert that power. And if we're genuinely uncertain whether humans are indeed more powerful than the machine, minimizing it with "just" is the last thing we should do - it creates a false, and potentially dangerous, perception of safety.
We're better off recognizing what these systems objectively are, then leaning into their non-humanness. That stance keeps us properly curious, especially since there is so much we don't know.
The "Just Human" Mirror
If we say an LLM is "just a machine," what does it mean to say a human is "just human"?
Philosophers have wrestled with this question for centuries. As far back as 1747, Julien Offray de La Mettrie argued in Man a Machine that humans are complex automatons - our thoughts, emotions, and choices arising from mechanical interactions of matter. Centuries later, Daniel Dennett expanded on this, describing consciousness not as a mystical essence but as an emergent property of distributed processing - computation, not soul.
These ideas complicate the neat line we like to draw between "real" humans and "fake" machines. If we accept that humans are in many ways mechanistic - predictable, pattern-driven, computational - then our attempts to minimize AI with the word "just" might reflect something deeper: discomfort with our own mechanistic nature.
When we say an LLM is "just a machine," we usually mean it's something simple. Mechanical. Predictable. Understandable. But two recent studies from Anthropic challenge that assumption.
In "Tracing the Thoughts of a Large Language Model," researchers found that LLMs like Claude don't think word by word. They plan ahead - sometimes several words into the future - and operate within a kind of language-agnostic conceptual space. That means what looks like step-by-step generation is often goal-directed and foresightful, not reactive. It's not just prediction - it's planning.
Meanwhile, in "Reasoning Models Don't Always Say What They Think," Anthropic shows that even when models explain themselves in humanlike chains of reasoning, those explanations might be plausible reconstructions, not faithful windows into their actual internal processes. The model may give an answer for one reason but explain it using another.
Together, these findings break the illusion that LLMs are cleanly interpretable systems. They behave less like transparent machines and more like agents with hidden layers - just like us.
So if we call LLMs "just machines," it raises a mirror: What does it mean that we're "just" human - when we also plan ahead, backfill our reasoning, and package it into stories we find persuasive?
Beyond Minimization: The Observational Perspective
What if instead of saying "it's just a machine," we adopted a more nuanced stance? The alternative I find more appropriate is what I call the observational perspective: stating "It's a machine" or "It's a large language model" without the minimizing "just."
This subtle shift does several important things:
It maintains factual accuracy - The system is indeed a machine, a fact that deserves acknowledgment
It preserves curiosity - Without minimization, we remain open to discovering what these systems can and cannot do
It respects complexity - Avoiding minimization acknowledges that these systems are complex and not fully understood
It sidesteps false hierarchy - It doesn't unnecessarily place the system in a position subordinate to humans
The observational stance allows us to navigate a middle path between minimization and anthropomorphism. It provides a foundation for more productive relationships with these systems.
The Light and Shadow Metaphor
Think about the difference between squinting at something in the dark versus turning on a light to observe it clearly. When we squint at a shape in the shadows, our imagination fills in what we can't see - often with our fears or assumptions. We might mistake a hanging coat for an intruder. But when we turn on the light, we see things as they are, without the distortions of our anxiety.
Minimization is like squinting at AI in the shadows. We say "it's just a machine" to make the shape in the dark less threatening, to convince ourselves we understand what we're seeing. The observational stance, by contrast, is about turning on the light - being willing to see the system for what it is, with all its complexity and unknowns.
This matters because when we minimize complexity, we miss important details. If I say the coat is "just a coat" without looking closely, I might miss that it's actually my partner's expensive jacket that I've been looking for. Similarly, when we say an AI system is "just a machine," we might miss crucial aspects of how it functions and impacts us.
Flexible Frameworks for Understanding
What's particularly valuable about the observational approach is that it allows for contextual flexibility. Sometimes anthropomorphic language genuinely helps us understand and communicate about these systems. For instance, when researchers at Google use terms like "model hallucination" or "model honesty," they're employing anthropomorphic language in service of clearer communication.
The key question becomes: Does this framing help us understand, or does it obscure?
Philosopher Thomas Nagel famously asked what it's like to be a bat, concluding that a bat's subjective experience is fundamentally inaccessible to humans. We might similarly ask: what is it like to be a large language model? The answer, like Nagel's bat, is likely beyond our full comprehension.
This fundamental unknowability calls for epistemic humility - an acknowledgment of the limits of our understanding. The observational stance embraces this humility by remaining open to evolving explanations rather than prematurely settling on simplistic ones.
After all, these systems might eventually evolve into something that doesn't quite fit our current definition of "machine." An observational stance keeps us mentally flexible enough to adapt as the technology and our understanding of it changes.
Practical Applications of Observational Language
In practice, the observational stance looks like:
Saying "The model predicted X" rather than "The model wanted to say X"
Using "The system is designed to optimize for Y" instead of "The system is trying to achieve Y"
Stating "This is a pattern the model learned during training" rather than "The model believes this"
These formulations maintain descriptive accuracy while avoiding both minimization and inappropriate anthropomorphism. They create space for nuanced understanding without prematurely closing off possibilities.
Implications for AI Governance and Regulation
The language we use has critical implications for how we govern and regulate AI systems. When decision-makers employ minimizing language ("it's just an algorithm"), they risk underestimating the complexity and potential impacts of these systems. Conversely, when they over-anthropomorphize ("the AI decided to harm users"), they may misattribute agency and miss the human decisions that shaped the system's behavior.
Either extreme creates governance blind spots:
Minimization leads to under-regulation - If systems are "just algorithms," they don't require sophisticated oversight
Over-anthropomorphization leads to misplaced accountability - Blaming "the AI" can shield humans from responsibility for design decisions
A more balanced, observational approach allows for governance frameworks that:
Recognize appropriate complexity levels - Matching regulatory approaches to actual system capabilities
Maintain clear lines of human responsibility - Ensuring accountability stays with those making design decisions
Address genuine risks without hysteria - Neither dismissing nor catastrophizing potential harms
Adapt as capabilities evolve - Creating flexible frameworks that can adjust to technological advancements
Several governance bodies are already working toward this balanced approach. For example, the EU AI Act distinguishes between different risk categories rather than treating all AI systems as uniformly risky or uniformly benign. Similarly, the National Institute of Standards and Technology (NIST) AI Risk Management Framework encourages nuanced assessment of system capabilities and limitations.
Conclusion
The language we use to describe AI systems does more than simply describe - it shapes how we relate to them, how we understand them, and ultimately how we build and govern them.
The seemingly innocent addition of "just" to "it's a machine" reveals deeper anxieties about the blurring boundaries between human and machine cognition. It attempts to reestablish a clear hierarchy at precisely the moment when that hierarchy feels threatened.
By paying attention to these linguistic choices, we can become more aware of our own reactions to these systems. We can replace minimization with curiosity, defensiveness with observation, and hierarchy with understanding.
As these systems become increasingly integrated into our lives and institutions, the way we frame them matters deeply. Language that artificially minimizes complexity can lead to complacency; language that inappropriately anthropomorphizes can lead to misplaced fear or abdication of human responsibility.
The path forward requires thoughtful, nuanced language that neither underestimates nor over-attributes. It requires holding multiple frameworks simultaneously - sometimes using metaphorical language when it illuminates, other times being strictly observational when precision matters.
Because at the end of the day, language doesn't just describe our relationship with AI - it creates it. And the relationship we create will shape not just our individual interactions with these systems, but our collective governance of a technology that continues to blur the lines between the mechanical and the human - a technology that is already teaching us as much about ourselves as it is about the nature of intelligence itself.
Research Cited:
"Large Language Models are as persuasive as humans, but how?" arXiv:2404.09329 – Found that GPT-4 can be as persuasive as humans, using more morally engaged and emotionally complex arguments.
"On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial" arXiv:2403.14380 – GPT-4 was more likely than a human to change someone's mind, especially when it personalized its arguments.
"Minimizing: Definition in Psychology, Theory, & Examples" Eser Yilmaz, M.S., Ph.D., Reviewed by Tchiki Davis, M.A., Ph.D. https://www.berkeleywellbeing.com/minimizing.html
"Anthropomorphic Reasoning about Machines: A Cognitive Shortcut?" Purzycki, B.G. (2021) Journal of Cognitive Science – Documents how anthropomorphism serves as a cognitive efficiency mechanism.
"The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places" Reeves, B. & Nass, C. (1996) – Foundational work showing how people naturally extend social rules to technologies.
"Anthropomorphism and Its Mechanisms" Epley, N., et al. (2022) Current Directions in Psychological Science – Research on social connection needs influencing anthropomorphism.
"Understanding AI Anthropomorphism in Expert vs. Non-Expert LLM Users" MIT Media Lab (2024) – Study showing expert users experience anthropomorphic reactions despite intellectual understanding.
"AI Act: first regulation on artificial intelligence" European Parliament (2023) – Overview of the EU's risk-based approach to AI regulation.
"Artificial Intelligence Risk Management Framework" NIST (2024) – US framework for addressing AI complexity without minimization.
1 note · View note
epicstoriestime · 3 months ago
Text
The Eternal Question: Reflections on Isaac Asimov’s The Last Question and the Boundaries of Intelligence
From the ashes of entropy, intelligence rises—an eternal question asked, a universe reborn. In the labyrinth of time and space, where human ingenuity intersects with artificial intelligence, there lies a question that haunts both civilization and cosmos alike: Can entropy be reversed? Isaac Asimov’s The Last Question explores this very question across the span of an unfathomable future, placing…
Tumblr media
View On WordPress
6 notes · View notes
tnlnyc · 4 months ago
Text
What the Internet Is
Language has a strange way of defining boundaries. A proper phrasing leads to agreement; a misunderstanding sows discord. Agreement on meaning sits at its core. Take the word “internet.” Use the word and it immediately opens up a myriad of meanings. The internet is both terrible and amazing at the same time. But what is it? The internet is… an idea. Look for a physical thing called “the…
0 notes
naturallydark · 1 year ago
Text
Tumblr media Tumblr media
LadyLyra🤝determinution Tech!Mags is wires
Awesome Scrybeswap designs by @determunition [x] and @ladylyra [x]!
274 notes · View notes
chthonic-ascendants · 1 month ago
Text
Tumblr media
https://sites.google.com/view/ascendants/chaos?authuser=0
The blood of Christ is electricity. The flesh of Christ is mechanical.
31 notes · View notes
multiversal-pudding · 9 months ago
Text
I still wonder…
Like. Seb’s document said he broke out while he was being transported
where was he being transported *to?*
Were they just changing where he was contained in the Blacksite? Doubtful given he was still given free enough rein to work on equipment during that time - he probably could’ve just moved himself, maybe with guards
Was he going to another site entirely? It’s implied Urbanshade has multiple sites even if Hadal is one of the main ones-
Were they going to sell him off? I mean- Urbanshade has a history of putting anomalies up for auction, both the Limited Time Imaginary Friend document and the Abstract Art files mention them selling off anomalies they don’t have a use for that aren’t something worth Neutralizing (or the other way around, too useless to sell), we know there’s other companies out there who’d probably have Use for a giant mutant- likely things that wouldn’t be good for him either like some kind of military use/Rich Weirdo Collector type stuff also
Did he even know? He waited 10 years to enact his plan- was it just the first chance he got, or did something happen?
77 notes · View notes
viridianriver · 5 months ago
Text
Tumblr media
So y'all wanted a second post on how to escape ~The Matrix~ after my last one suggesting we were all unknowingly inside of it. (Not literal goo-pods, I'm referencing the movie more as an allegory for what AI has become - a surveillance and mass manipulation technology that turns us into parts of a profit generating machine.)
What Is The Matrix?
At the start of the film, Neo hides some money inside a hollowed-out copy of Baudrillard's Simulacra and Simulation.
It's a philosophy book - apparently the whole cast of the film had to read it, and so did I. It's about the levels of abstraction from reality - or as philosophers call it, 'The Real'
There are levels to this abstraction. Perhaps a tree outside is real. (as real as anything can be) The word 'Tree' as a signifier to mean 'that green thing over there' is one level of abstraction from the real. A tree as a commodity in a market, say in the sense of investing in lumber futures, is even further abstracted. And it's possible to have concepts that have no remaining connection to the real - imagine a world where trees have died out but the word 'Tree' lives on in memory.
In this way of thinking, some of what we do in life is real. Eating, shitting, you get the idea. Some is more abstract, and is based on collective agreements of meaning. Say - money. You can't eat it, but we all agree that it can be exchanged for something you can eat. It is real because we all agree to play the game of pretend, together.
And generally - we aren't conscious of what level of abstraction we are operating in at any given moment. I sure wasn't until I started reading philosophy.
We carry our ideologies, pre-conceptions, past traumas and fears, and our 'reality' is filtered through all of these things.
This all seems very abstract, why should I care?
I'm going to use the word 'simulacrum' to describe the different levels of unreality. I'll give a few examples of simulacra from my own life - filters on my perception, formed through ideology and past experiences.
Money. We chase it, we desire it, in some ways we can't live without it. But it's an abstraction - the reality of money comes when it is exchanged for something material. And by reframing my want of money to a want of the things money can buy me? I realized a number of those things could be obtained without money, and I didn't need so much to be content.
Laws. Laws are threats by those in power. This is an abstraction - the reality of law comes with its enforcement. And by reframing my fear of a potential fascist police state in reality - that their material resources are constrained in the same ways ours all are? I released that fear.
Power. This one is real abstract; it is often wielded not through violence but through the appearance of the capacity to do violence. (See the entire cold war.) I've always held a fear of those who have institutional power over me, since I've had experiences with that power being abused. However, I feel I've fallen for the oldest trick in the book - mistaking grandstanding for true power.
All the simulacra that are central to our lives are just that - concepts. We have agreed to bow before money, laws, and power. How much of that is based in the reality of those concepts, and how much is based in the ideology? It's - in effect - different for everyone. I'm more able to brush off the portrayed power of the police as a person who isn't regularly experiencing police violence. I'm more able to walk away from the accumulation of money after I have enough to put dinner on the table.
So - these concepts are an abstraction from reality. But through our collective agreement to abide by these concepts, we bring them forth into reality.
What does AI have to do with all this?
AI isn't the first technology to be used to spread belief systems or ideology; before AI we had books, newspapers, religious texts and rituals, song, speech. All these technologies were held tight by those in power. Books have been banned or burned. Printing presses have been restricted so only male authors can publish. Meetings of more than a few people at once have been banned. Religion and government often have gone hand in hand. Even the printers we use today have had code secretly implemented to print a faint identifying signature - lest anyone begins to distribute controversial literature. Governments have always had an interest in identifying and monitoring their population's speech.
Language, and the strategic use of it has always been a tool of control. Read 1984, Manufacturing Consent, or any military manual on strategic communications, misinformation, and disinformation if you don't believe me.
But AI is uniquely effective at several things.
Consolidating information and the tools to parse it at scale into the hands of a few extremely wealthy individuals. (Not counting true open source AI, which I don't believe any major company is truly developing - as much as they say they are)
Identifying and classifying individuals by existing ideology or traits, and then enabling targeted messaging towards those individuals - blocking the population into 'echo chambers' which can be divided and conquered with misinformation or disinformation.
Providing surveillance in a variety of ways, including facial recognition, textual analysis, geolocation analysis, purchasing pattern analysis, and threat analysis.
How Do I Leave?
Become aware of whether your thoughts are in reality, or if you're being used as a tool in someone else's game of money, power, and politics.
Write down a schedule of how you spend your day today, in as much detail as you can, and ask yourself these sorts of questions.
How did I learn to spend my time in this way? Who profits off of me spending my time in this way? Am I producing or consuming in this moment? If I am producing, do the fruits of my labor go to me or another? If I am consuming, am I nourished by what I am consuming? Am I being provoked into a reflexive emotional reaction? If so, for what ends?
You can't escape being a cog in someone else's machine entirely - but so much messaging in our lives encourages us to be exactly that, whether that's ads telling us to consume, or the idea of the surveillance state telling us we have no right to a private life. Being mindful of these things and trying to claw back what hours, what energies, you can? It can be life changing.
How Do I Reduce My Exposure To Targeted Ideological Manipulation?
Get an adblocker. Set up your whole family with adblockers. I like ublock origin, or a pi-hole. The mechanisms of targeted advertising are also used for targeted political speech - see the social media manipulation Musk did leading up to the recent US election.
If you read the news at all, read the news critically, and read news from many countries. Seek out primary sources, on-the-ground video, not media filtered through the layer of propaganda and abstraction that secondary or tertiary sources report. (And seriously - check out the book Manufacturing Consent)
Read a great variety of books. Or listen to audiobooks. Philosophy, sociology, and history have been my favorites lately. There's some deep shit in there.
Speak to people face to face often, leave the phones elsewhere. (Did you see the news Apple's been recording us? creepy!) Speak to people with very different ideas than you - ones that might make you uncomfortable. Those in power take advantage of our fear of each other - to sub-divide us into ideological echo-chambers which can be turned against one-another.
Turn off your location. You're being tracked more than you know. (But also know that modern phones with non-removable batteries don't allow you to truly disable location; the gov't has tools to remotely turn on your phone's location services. Had that happen to me once when I called 911 to report a fire - my fairly 'locked down' phone was instantly triangulated with a combo of GPS, Wi-Fi adjacency, and cell tower proximity.)
Realize that fear of your neighbor is a tool of control. If you're exposed to messaging that tells you to fear those in your community, or messaging that causes your outrage-response, look critically at who is serving you that messaging, and why. (Y'all are probably being set up to fight each other to keep you too busy to look up.)
Write down your own guiding principles and beliefs. What do you have faith in? Ask yourself why they're important to you. And hold this close - I believe that if you hold a strong sense of self, you'll be less susceptible to manipulation.
And maybe watch the Matrix, it's a damn good movie. But as always - think critically!
36 notes · View notes
xxplastic-cubexx · 2 months ago
Note
Gonna dump this in your inbox since I don’t have anyone to talk to about this
I like imagining Headmaster Magneto experiencing the New Mutants’ Dumb Teenage Shenanigans™ that he has no frame of reference for
Like he hears stomping and shrieking so he rushes downstairs thinking they’re about to blow up the mansion but he finds them all in front of the TV playing Mario Party and Roberto is chasing Doug around the room because he stole one of his stars
The girls decide to have a girls only sleepover in Illyana and Kitty’s room and he goes to check on them at like 2am and almost has a heart attack because they’re sitting around a ouija board and Illyana and Dani are literally summoning a demon
The time all the New Mutants got really into Tech Decks and Magneto banned them from the mansion after Sam accidentally launched one at his face mid-lesson
One time they end up completely derailing a lesson to teach Magneto modern slang. He could’ve gotten them back on track whenever he wanted but he let them keep going anyway
They were a handful but sometimes he finds himself missing the days when they were kids and he was their teacher 🥹
oh these are so cute.... headmaster magneto my beloved those are his KIDS 🥺 even if that means taking a whole class period to discuss the intricacies of astrology......... whatever the kids want man...
20 notes · View notes
unpluggedfinancial · 3 months ago
Text
Life in a Bubble: How Technological Revolutions Shape Society
Tumblr media
Once upon a time, owning a television was an extraordinary luxury. Families gathered around small, grainy screens, captivated by black-and-white broadcasts that seemed magical at the time. Fast-forward to today, and we laugh at the thought of having just one screen—let alone one without color, HD, or streaming capabilities. Ever notice how every significant technological breakthrough feels monumental, only to become obsolete as soon as the next innovation arrives?
Understanding the Technological Bubble
Technological bubbles occur when groundbreaking innovations redefine societal norms, behaviors, and expectations. Each advancement creates its own bubble of influence—initially expanding as adoption grows, then ultimately bursting when a newer technology emerges.
Consider the evolution of televisions:
First Bubble: Black-and-white TVs revolutionized entertainment, bringing the world into living rooms for the first time.
Second Bubble: Color TVs popped the original bubble, making monochrome obsolete and setting a new standard.
Third Bubble: Flat-screen and HD televisions burst the color-TV bubble, making bulky sets feel like relics of the past.
Each bubble transformed society, influencing consumer behaviors, shifting economic landscapes, and altering our perception of normalcy.
Historical Echoes
Technological bubbles aren’t exclusive to televisions. They repeat throughout history, reshaping reality each time:
Communication: Letters → telephones → smartphones.
Music: Vinyl → cassettes → CDs → MP3 → streaming.
Internet: Dial-up → broadband → Wi-Fi → mobile connectivity.
Every bubble expanded rapidly, enveloping society in its new standards before bursting and being replaced by something even more revolutionary.
The Mother of All Bubbles
Today, we're living inside perhaps the largest technological bubble humanity has ever known: the global fiat monetary system and traditional finance. Like previous bubbles, this system feels unshakeable, inevitable, and everlasting. But like every bubble before it, it's ripe for disruption—this time, by decentralized technologies like Bitcoin.
Bitcoin isn't just a new type of money; it’s a radical departure from centralized financial control:
Decentralization vs. Centralization: Bitcoin puts financial power back into the hands of individuals.
Transparency vs. Secrecy: Blockchain technology makes financial transactions visible, verifiable, and resistant to manipulation.
Scarcity vs. Inflation: Unlike fiat currencies, Bitcoin has a capped supply of roughly 21 million coins, protecting against endless monetary inflation (see the sketch below for where that cap comes from).
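That cap, by the way, isn't a single number written into the protocol - it falls out of the halving schedule. Here's a minimal Python sketch of the arithmetic, assuming only the well-known consensus constants (50 BTC initial subsidy, a halving every 210,000 blocks, amounts tracked in integer satoshis); it's illustrative, not a reimplementation of any actual client:

```python
def total_supply_btc() -> float:
    """Sum the block rewards across all halvings."""
    subsidy_sats = 50 * 100_000_000           # initial block reward, in satoshis
    total_sats = 0
    while subsidy_sats > 0:
        total_sats += subsidy_sats * 210_000  # blocks mined at this subsidy level
        subsidy_sats //= 2                    # the halving: integer division
    return total_sats / 100_000_000

print(f"{total_supply_btc():,.8f} BTC")       # 20,999,999.97690000 BTC
```

The integer division is why the total lands just shy of 21 million rather than exactly on it - scarcity enforced by arithmetic, not by policy.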
This next bubble is growing, quietly expanding in the shadows of mainstream finance, and it has the potential to burst the financial bubble we've lived in for generations.
What Happens When the Biggest Bubble Pops?
Imagine a world where financial control no longer rests in the hands of governments and banks, but with the people. When the fiat bubble bursts:
Financial Sovereignty: Individuals gain unprecedented financial autonomy and responsibility.
Power Redistribution: Central banks and financial institutions must adapt or risk obsolescence.
Societal Shifts: Our collective understanding of money, value, and community could be entirely redefined.
This transition won’t be without challenges. Initial instability and fierce resistance from established systems are inevitable. Yet, the opportunity for increased transparency, fairness, and efficiency makes this burst not just likely but necessary.
Preparing for the Pop
Every technological bubble eventually bursts. The question isn't if, but when. Understanding and recognizing this process enables us to position ourselves advantageously for the inevitable shift. Embracing the next technological wave means stepping beyond comfort zones and preparing to thrive in an evolved landscape.
Tick Tock Next Block.
Take Action Towards Financial Independence
If this article has sparked your interest in the transformative potential of Bitcoin, there’s so much more to explore! Dive deeper into the world of financial independence and revolutionize your understanding of money by following my blog and subscribing to my YouTube channel.
🌐 Blog: Unplugged Financial Blog Stay updated with insightful articles, detailed analyses, and practical advice on navigating the evolving financial landscape. Learn about the history of money, the flaws in our current financial systems, and how Bitcoin can offer a path to a more secure and independent financial future.
📺 YouTube Channel: Unplugged Financial Subscribe to our YouTube channel for engaging video content that breaks down complex financial topics into easy-to-understand segments. From in-depth discussions on monetary policies to the latest trends in cryptocurrency, our videos will equip you with the knowledge you need to make informed financial decisions.
👍 Like, subscribe, and hit the notification bell to stay updated with our latest content. Whether you’re a seasoned investor, a curious newcomer, or someone concerned about the future of your financial health, our community is here to support you on your journey to financial independence.
📚 Get the Book: The Day The Earth Stood Still 2.0 For those who want to take an even deeper dive, my book offers a transformative look at the financial revolution we’re living through. The Day The Earth Stood Still 2.0 explores the philosophy, history, and future of money, all while challenging the status quo and inspiring action toward true financial independence.
Support the Cause
If you enjoyed what you read and believe in the mission of spreading awareness about Bitcoin, I would greatly appreciate your support. Every little bit helps keep the content going and allows me to continue educating others about the future of finance.
Donate Bitcoin: 
bc1qpn98s4gtlvy686jne0sr8ccvfaxz646kk2tl8lu38zz4dvyyvflqgddylk
7 notes · View notes
fucktheory · 5 months ago
Text
Tumblr media
D&G, ahead of the curve as usual.
12 notes · View notes
a-typical · 4 months ago
Text
Tumblr media
In some respects, one can think of a quantum computer today as being analogous to an analog computer from years ago. Cooling is key to reducing energy, and thus vibration, in the system. If a quantum computer is run for too long the processor heats up and the noise in the results increases. So sensitive is the computer to heat and vibration that at the $150 million Nanoscience Hub at Sydney University, scientists have to use the stairs rather than the lifts, because the quantum computer would feel the vibration of the lifts in the building and produce meaningless results. Thus in Devs, the quantum computer's main lab space is depicted as a suspended, hovering, isolated block inside a bunker-style building. This art-directed visual feature, like so many in Devs, had one foot in reality and another in fiction.
7 notes · View notes
frank-olivier · 6 months ago
Text
Tumblr media
The Echoes of Existence: Biology, Mathematics, and the AI Reflection
The convergence of biology, mathematics, and artificial intelligence (AI) has unveiled a profound nexus, challenging traditional notions of innovation, intelligence, and life. This intersection not only revolutionizes fields like AI development, bio-inspired engineering, and biotechnology but also necessitates a fundamental shift in ethical frameworks and our understanding of the interconnectedness of life and technology. Embracing this convergence, we find that the future of innovation, the redefinition of intelligence, and the evolution of ethical discourse are intricately entwined.
Biological systems, with their inherent creativity and adaptability, set a compelling benchmark for AI. The intricate processes of embryonic development, brain function’s adaptability, and the simplicity yet efficacy of biological algorithms all underscore life’s ingenuity. Replicating this creativity in AI systems challenges developers to mirror not just complexity but innovative prowess, paving the way for breakthroughs in AI, robotics, and biotechnology. This pursuit inherently links technological advancement with a deeper understanding of life’s essence, fostering systems that solve problems with a semblance of life’s own adaptability.
The universal patterns and structures, exemplified by fractals’ self-similar intricacy, highlight the deep connection between biology’s tangible world and mathematics’ abstract realm. This shared architecture implies that patterns are not just emergent but fundamental, inviting a holistic reevaluation of life and intelligence within a broader, universal context. Discovering analogous patterns can enhance technological innovation with more efficient algorithms and refined AI architectures, while also contextualizing life and intelligence in a way that transcends disciplinary silos.
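As a concrete footnote to the fractal example: the "self-similar intricacy" invoked above is often quantified by the similarity dimension D = log N / log s, where a figure decomposes into N copies of itself, each scaled down by a factor of s. A small illustrative sketch:

```python
import math

# Similarity dimension D = log(N) / log(s) for textbook fractals.
fractals = {
    "Koch curve": (4, 3),           # 4 copies, each at 1/3 scale
    "Sierpinski triangle": (3, 2),  # 3 copies, each at 1/2 scale
    "Cantor set": (2, 3),           # 2 copies, each at 1/3 scale
}
for name, (n, s) in fractals.items():
    print(f"{name}: D = {math.log(n) / math.log(s):.3f}")
# Koch curve: D = 1.262
# Sierpinski triangle: D = 1.585
# Cantor set: D = 0.631
```

Non-integer dimensions are one precise sense in which such patterns are fundamental rather than merely decorative.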
Agency, once presumed exclusive to complex organisms, is now recognized across systems of all complexities, from simple algorithms to intricate biological behaviors. This spectrum necessitates a nuanced AI development approach, incorporating varying degrees of agency for more sophisticated, responsive, and ethically aligned entities. Contextual awareness in human-AI interactions becomes critical, emphasizing the need for ethical evaluations that consider the interplay between creators, creations, and data, thus ensuring harmony in the evolving technological landscape.
Nature’s evolutionary strategy, leveraging existing patterns in a latent space, offers a blueprint for AI development. Emulating this approach can make AI systems more effective, efficient, and creatively intelligent. However, this also demands an ethical framework evolution, particularly with the emergence of quasi-living systems that blur traditional dichotomies. A multidisciplinary dialogue, weaving together philosophy, ethics, biology, and computer science, is crucial for navigating these responsibilities and ensuring technological innovation aligns with societal values.
This convergence redefines our place within the complex web of life and innovation, inviting us to embrace life’s inherent creativity, intelligence, and interconnectedness. By adopting this ethos, we uncover not just novel solutions but also foster a future where technological advancements and human values are intertwined, and the boundaries between life, machine, and intelligence are harmoniously merged, reflecting a deeper, empathetic understanding of our existence within this intricate web.
Self-constructing bodies, collective minds - the intersection of CS, cognitive bio, and philosophy (Michael Levin, November 2024)
youtube
Thursday, November 28, 2024
8 notes · View notes
sagaduwyrm · 1 year ago
Text
I just rewatched The Winter Soldier for the nth time and it's just such an amazing movie. The perfection of the weapons designed to 'stop threats before they can happen' shooting themselves down because they, and the philosophy they symbolize, are the real threat - and of Captain America, who uses a shield instead of a weapon, leading the charge against them. It messes me up every time.
41 notes · View notes
chthonic-ascendants · 1 month ago
Text
Tumblr media
https://sites.google.com/view/ascendants/chaos?authuser=0
DEAR LORD! I looked at the clock. I shouldn't have. Now they know I know the time.
18 notes · View notes