# Artificial Intelligence in Performance Testing
marrywillson · 2 years ago
Implementing AI for improved performance testing – Cuneiform
Performance testing is one of the more prominent applications of AI and ML in software engineering.
Businesses are incorporating AI into the performance testing of their mobile and web applications, making the testing process more intelligent, faster, and cheaper, and helping the applications themselves stay responsive.
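As a rough, hypothetical illustration of what "AI-assisted" performance testing can mean in practice (this is not the Cuneiform tooling described above, and every name and threshold below is invented), here is a minimal sketch that learns a latency baseline from past test runs and flags anomalous response times in a new run:

```python
# Minimal sketch of ML-assisted performance-test analysis (hypothetical example):
# learn a latency baseline from historical runs and flag outliers in a new run.
import statistics

def latency_baseline(historical_ms):
    """Return (mean, stdev) of historical response times in milliseconds."""
    return statistics.mean(historical_ms), statistics.stdev(historical_ms)

def flag_anomalies(new_run_ms, baseline, z_threshold=3.0):
    """Flag samples whose z-score against the baseline exceeds the threshold."""
    mean, stdev = baseline
    return [
        (i, ms) for i, ms in enumerate(new_run_ms)
        if stdev > 0 and abs(ms - mean) / stdev > z_threshold
    ]

if __name__ == "__main__":
    history = [120, 130, 125, 140, 118, 135, 128, 122]  # past runs (ms)
    new_run = [124, 131, 127, 410, 129]                  # current run (ms)
    anomalies = flag_anomalies(new_run, latency_baseline(history))
    print("Anomalous samples (index, latency ms):", anomalies)
```

Production tools layer much more on top of this (workload modeling, test generation, root-cause hints), but flagging regressions against a learned baseline is one of the core ideas behind such tooling.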
reasonsforhope · 6 months ago
"Every year, over 350,000 women die from cervical cancer and another 660,000 are diagnosed. [Note: Plus trans men and other trans people with a cervix.] As a consequence, children are orphaned, families impoverished and communities diminished by the loss of mothers, wives, daughters and sisters. 
And yet, unlike most other cancers, almost all these cases and deaths can be averted. We have powerful vaccines that can prevent infection with the human papillomavirus (HPV) that causes cervical cancer; we have diagnostics to detect it early; and we have treatments for those it strikes. With these tools, cervical cancer can not only be stopped; it could become the first cancer to be eliminated. Some high-income countries are already close to elimination, meaning fewer than four cases per 100,000 women.
But in many low- and middle-income countries, these tools are still not available, which is why 94% of cervical cancer deaths occur in those countries. 
In 2018, WHO launched a global call to action to eliminate cervical cancer, which was followed in 2020 by the adoption by all 194 WHO Member States of a Global Strategy to Accelerate the Elimination of Cervical Cancer as a Public Health Problem. The strategy calls for countries to achieve three targets by 2030: 90% of girls fully immunised against HPV; 70% of women receiving timely screening; and 90% of those found with precancer or cancer accessing treatment.
These targets are not just aspirational, they are achievable, even in low- and middle-income countries.  Bhutan has already reached the targets, the first to do so in the South-East Asia region. 
Since introducing the HPV vaccine in 2011, Rwanda has reached vaccine coverage of 90%, and today announced its national goal to reach the 90-70-90 targets three years ahead of schedule, by 2027. Already, in two districts – Gicumbi and Karongi – Rwanda is meeting those goals. Nigeria, which introduced the HPV vaccine in October last year [2023], has already vaccinated 12.3 million girls.  
We have the tools and the opportunity to eliminate cervical cancer. 
Since WHO issued the global call to action in 2018, more than 60 countries have introduced the HPV vaccine into their immunisation programmes, bringing the total to 144 countries that are routinely protecting girls from cervical cancer in later life. With scientific advances, we can now prevent cervical cancer with just a single dose, which 60 countries are now doing.  
The largest provider of HPV vaccines to low- and middle-income countries is Gavi, the Vaccine Alliance, which plans to vaccinate 120 million children between now and 2030. But this plan requires that investments in health are sustained. We are also counting on manufacturers to confirm and honour their commitments to provide HPV vaccines to low- and middle-income countries in the coming years, to avoid the supply constraints that held back progress in the past.
But we cannot rely on vaccines alone. The impact of the rapid scale-up in vaccinating girls now will not be seen for decades, when they reach the adult years when cervical cancer typically appears. To save lives now, we must match the increase  in vaccination with increases in screening and treatment. 
Decades ago, as more women gained access to pap smears in developed countries, the mortality associated with cervical cancer dropped rapidly. Today, even better tests are available. Over 60 countries now include high-performance HPV tests as part of their screening programs. Women can even collect their own samples for HPV testing, removing more barriers to life-saving services. In Australia – which is on track to become one of the first countries in the world to achieve elimination – more than a quarter of all screening tests are now done this way...
Several countries are also investigating the use of artificial intelligence to enhance the accuracy of screening in resource-limited settings. When women are found with precancerous lesions, many are now treated with portable battery-powered devices, which can be operated in remote locations."
-via The Telegraph, November 18, 2024. Article written by Dr Tedros Adhanom Ghebreyesus, Director-General of the World Health Organization (WHO).
mostlysignssomeportents · 8 months ago
Conspiratorialism as a material phenomenon
I'll be in TUCSON, AZ from November 8-10: I'm the GUEST OF HONOR at the TUSCON SCIENCE FICTION CONVENTION.
I think it behooves us to be a little skeptical of stories about AI driving people to believe wrong things and commit ugly actions. Not that I like the AI slop that is filling up our social media, but when we look at the ways that AI is harming us, slop is pretty low on the list.
The real AI harms come from the actual things that AI companies sell AI to do. There's the AI gun-detector gadgets that the credulous Mayor Eric Adams put in NYC subways, which led to 2,749 invasive searches and turned up zero guns:
https://www.cbsnews.com/newyork/news/nycs-subway-weapons-detector-pilot-program-ends/
Any time AI is used to predict crime – predictive policing, bail determinations, Child Protective Services red flags – they magnify the biases already present in these systems, and, even worse, they give this bias the veneer of scientific neutrality. This process is called "empiricism-washing," and you know you're experiencing it when you hear some variation on "it's just math, math can't be racist":
https://pluralistic.net/2020/06/23/cryptocidal-maniacs/#phrenology
When AI is used to replace customer service representatives, it systematically defrauds customers, while providing an "accountability sink" that allows the company to disclaim responsibility for the thefts:
https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs
When AI is used to perform high-velocity "decision support" that is supposed to inform a "human in the loop," it quickly overwhelms its human overseer, who takes on the role of "moral crumple zone," pressing the "OK" button as fast as they can. This is bad enough when the sacrificial victim is a human overseeing, say, proctoring software that accuses remote students of cheating on their tests:
https://pluralistic.net/2022/02/16/unauthorized-paper/#cheating-anticheat
But it's potentially lethal when the AI is a transcription engine that doctors have to use to feed notes to a data-hungry electronic health record system that is optimized to commit health insurance fraud by seeking out pretenses to "upcode" a patient's treatment. Those AIs are prone to inventing things the doctor never said, inserting them into the record that the doctor is supposed to review, but remember, the only reason the AI is there at all is that the doctor is being asked to do so much paperwork that they don't have time to treat their patients:
https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14
My point is that "worrying about AI" is a zero-sum game. When we train our fire on the stuff that isn't important to the AI stock swindlers' business-plans (like creating AI slop), we should remember that the AI companies could halt all of that activity and not lose a dime in revenue. By contrast, when we focus on AI applications that do the most direct harm – policing, health, security, customer service – we also focus on the AI applications that make the most money and drive the most investment.
AI hasn't attracted hundreds of billions in investment capital because investors love AI slop. All the money pouring into the system – from investors, from customers, from easily gulled big-city mayors – is chasing things that AI is objectively very bad at and those things also cause much more harm than AI slop. If you want to be a good AI critic, you should devote the majority of your focus to these applications. Sure, they're not as visually arresting, but discrediting them is financially arresting, and that's what really matters.
All that said: AI slop is real, there is a lot of it, and just because it doesn't warrant priority over the stuff AI companies actually sell, it still has cultural significance and is worth considering.
AI slop has turned Facebook into an anaerobic lagoon of botshit, just the laziest, grossest engagement bait, much of it the product of rise-and-grind spammers who avidly consume get rich quick "courses" and then churn out a torrent of "shrimp Jesus" and fake chainsaw sculptures:
https://www.404media.co/email/1cdf7620-2e2f-4450-9cd9-e041f4f0c27f/
For poor engagement farmers in the global south chasing the fractional pennies that Facebook shells out for successful clickbait, the actual content of the slop is beside the point. These spammers aren't necessarily tuned into the psyche of the wealthy-world Facebook users who represent Meta's top monetization subjects. They're just trying everything and doubling down on anything that moves the needle, A/B splitting their way into weird, hyper-optimized, grotesque crap:
https://www.404media.co/facebook-is-being-overrun-with-stolen-ai-generated-images-that-people-think-are-real/
In other words, Facebook's AI spammers are laying out a banquet of arbitrary possibilities, like the letters on a Ouija board, and the Facebook users' clicks and engagement are a collective ideomotor response, moving the algorithm's planchette to the options that tug hardest at our collective delights (or, more often, disgusts).
So, rather than thinking of AI spammers as creating the ideological and aesthetic trends that drive millions of confused Facebook users into condemning, praising, and arguing about surreal botshit, it's more true to say that spammers are discovering these trends within their subjects' collective yearnings and terrors, and then refining them by exploring endlessly ramified variations in search of unsuspected niches.
(If you know anything about AI, this may remind you of something: a Generative Adversarial Network, in which one bot creates variations on a theme, and another bot ranks how closely the variations approach some ideal. In this case, the spammers are the generators and the Facebook users they evince reactions from are the discriminators)
https://en.wikipedia.org/wiki/Generative_adversarial_network
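For the curious, here is a minimal, self-contained sketch of that generator/discriminator loop: a toy GAN in PyTorch that learns to mimic a one-dimensional Gaussian. It is purely illustrative of the dynamic described above, not anything Facebook or the spammers actually run.

```python
# Toy GAN sketch (PyTorch): a generator learns to mimic samples from a target
# 1-D Gaussian while a discriminator learns to tell real samples from fakes.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps 8-dimensional noise to a single "fake" sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores a sample with the probability that it is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data drawn from N(4, 1.5^2)
    fake = G(torch.randn(64, 8))            # generator's current attempts

    # Discriminator step: push scores toward 1 for real samples, 0 for fakes.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator score fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# After training, generated samples should cluster near the real mean of 4.0.
print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())
```

In the analogy above, the spammers play the generator and the audience's clicks and comments play the discriminator's score.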
I got to thinking about this today while reading User Mag, Taylor Lorenz's superb newsletter, and her reporting on a new AI slop trend, "My neighbor’s ridiculous reason for egging my car":
https://www.usermag.co/p/my-neighbors-ridiculous-reason-for
The "egging my car" slop consists of endless variations on a story in which the poster (generally a figure of sympathy, canonically a single mother of newborn twins) complains that her awful neighbor threw dozens of eggs at her car to punish her for parking in a way that blocked his elaborate Hallowe'en display. The text is accompanied by an AI-generated image showing a modest family car that has been absolutely plastered with broken eggs, dozens upon dozens of them.
According to Lorenz, variations on this slop are topping very large Facebook discussion forums totalling millions of users, like "Movie Character…, USA Story, Volleyball Women, Top Trends, Love Style, and God Bless." These posts link to SEO sites laden with programmatic advertising.
The funnel goes:
i. Create outrage and hence broad reach;
ii. A small percentage of those who see the post will click through to the SEO site;
iii. A small fraction of those users will click a low-quality ad;
iv. The ad will pay homeopathic sub-pennies to the spammer.
The revenue per user on this kind of scam is next to nothing, so it only works if it can get very broad reach, which is why the spam is so designed for engagement maximization. The more discussion a post generates, the more users Facebook recommends it to.
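To make "homeopathic sub-pennies" concrete, here is a back-of-the-envelope version of that funnel with entirely invented rates (none of these numbers come from Lorenz's reporting or from Meta); the point is only how tiny the per-viewer payout is, and why enormous reach is the only thing that makes the scam pay.

```python
# Back-of-the-envelope funnel arithmetic. All rates below are invented for
# illustration; they are not sourced from the article.
reach = 1_000_000            # people who see the outrage-bait post
clickthrough_rate = 0.005    # fraction who click through to the SEO site
ad_click_rate = 0.01         # fraction of those who then click a low-quality ad
revenue_per_ad_click = 0.02  # dollars the ad network pays per click

total_revenue = reach * clickthrough_rate * ad_click_rate * revenue_per_ad_click
print(f"Total revenue for 1M views: ${total_revenue:.2f}")   # ~$1.00
print(f"Revenue per viewer: ${total_revenue / reach:.8f}")   # ~$0.000001
```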
These posts are very effective engagement bait. Almost all AI slop gets some free engagement in the form of arguments between users who don't know they're commenting on an AI scam and people hectoring them for falling for the scam. This is like the free square in the middle of a bingo card.
Beyond that, there's multivalent outrage: some users are furious about food wastage; others about the poor, victimized "mother" (some users are furious about both). Not only do users get to voice their fury at both of these imaginary sins, they can also argue with one another about whether, say, food wastage even matters when compared to the petty-minded aggression of the "perpetrator." These discussions also offer lots of opportunity for violent fantasies about the bad guy getting a comeuppance, offers to travel to the imaginary AI-generated suburb to dole out a beating, etc. All in all, the spammers behind this tedious fiction have really figured out how to rope in all kinds of users' attention.
Of course, the spammers don't get much from this. There isn't such a thing as an "attention economy." You can't use attention as a unit of account, a medium of exchange or a store of value. Attention – like everything else that you can't build an economy upon, such as cryptocurrency – must be converted to money before it has economic significance. Hence that tooth-achingly trite high-tech neologism, "monetization."
The monetization of attention is very poor, but AI is heavily subsidized or even free (for now), so the largest venture capital and private equity funds in the world are pouring billions of dollars in public pension money and rich people's savings into CO2 plumes, GPUs, and botshit so that a bunch of hustle-culture weirdos in the Pacific Rim can make a few dollars by tricking people into clicking through engagement bait slop – twice.
The slop isn't the point of this, but the slop does have the useful function of making the collective ideomotor response visible and thus providing a peek into our hopes and fears. What does the "egging my car" slop say about the things that we're thinking about?
Lorenz cites Jamie Cohen, a media scholar at CUNY Queens, who points out that subtext of this slop is "fear and distrust in people about their neighbors." Cohen predicts that "the next trend, is going to be stranger and more violent.”
This feels right to me. The corollary of mistrusting your neighbors, of course, is trusting only yourself and your family. Or, as Margaret Thatcher liked to say, "There is no such thing as society. There are individual men and women and there are families."
We are living in the tail end of a 40 year experiment in structuring our world as though "there is no such thing as society." We've gutted our welfare net, shut down or privatized public services, all but abolished solidaristic institutions like unions.
This isn't mere aesthetics: an atomized society is far more hospitable to extreme wealth inequality than one in which we are all in it together. When your power comes from being a "wise consumer" who "votes with your wallet," then all you can do about the climate emergency is buy a different kind of car – you can't build the public transit system that will make cars obsolete.
When you "vote with your wallet" all you can do about animal cruelty and habitat loss is eat less meat. When you "vote with your wallet" all you can do about high drug prices is "shop around for a bargain." When you vote with your wallet, all you can do when your bank forecloses on your home is "choose your next lender more carefully."
Most importantly, when you vote with your wallet, you cast a ballot in an election that the people with the thickest wallets always win. No wonder those people have spent so long teaching us that we can't trust our neighbors, that there is no such thing as society, that we can't have nice things. That there is no alternative.
The commercial surveillance industry really wants you to believe that they're good at convincing people of things, because that's a good way to sell advertising. But claims of mind-control are pretty goddamned improbable – everyone who ever claimed to have managed the trick was lying, from Rasputin to MK-ULTRA:
https://pluralistic.net/HowToDestroySurveillanceCapitalism
Rather than seeing these platforms as convincing people of things, we should understand them as discovering and reinforcing the ideology that people have been driven to by material conditions. Platforms like Facebook show us to one another, let us form groups that can imperfectly fill in for the solidarity we're desperate for after 40 years of "no such thing as society."
The most interesting thing about "egging my car" slop is that it reveals that so many of us are convinced of two contradictory things: first, that everyone else is a monster who will turn on you for the pettiest of reasons; and second, that we're all the kind of people who would stick up for the victims of those monsters.
Tor Books has just published two new, free LITTLE BROTHER stories: VIGILANT, about creepy surveillance in distance education; and SPILL, about oil pipelines and indigenous landback.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/10/29/hobbesian-slop/#cui-bono
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
tinybeetiny · 12 days ago
Build-A-Boyfriend Chapter 1: Deviation Detected
The way i wrote this with the quickness... was very excited I guess........
->Starring: AI!AteezxAfab!Reader
->Genre: Dystopian idk pls help
->CW: none
Next Part
Masterlist | Ateez Masterlist | Series Masterlist
The screen flickers to life, casting a sterile blue glow across the high-glass boardroom. A chime sounds. The synth music is soft, warm, unnaturally comforting.
“In a perfect world… who says you have to be alone?”
[Scene: golden morning light streams through a smart-home window. A woman sips tea as a tall, smiling man ties her apron for her. Cut to holographic customization panels, fingers sliding across facial presets, hair types, emotional spectrums. A glossy chrome heart pulses as code flows behind it.]
“Introducing Build-A-Boyfriend™, a revolutionary experience by KQ Inc., the world’s leading innovator in emotional robotics. Whether you’re looking for loyalty, laughter, protection, or passion – we’ve engineered the perfect companion, from his cheekbones to his charm.”
“Over 100 hairstyles. 20 hair colors. Hundreds of adjustable features: emotional intelligence, love languages, conflict styles. Everything is customizable. Everything is yours.”
“Build trust. Build comfort. Build connection.”
[The KQ logo glows softly: a platinum rose blooming from circuitry.]
Build-A-Boyfriend™
Grand Opening — November 17, 3258 — Hala City
The video faded into silence. Then the lights returned, crisp, clinical, bright.
At the head of the table stood Chairwoman Vira Yun, CEO of KQ Inc. Her expression remained unreadable, but her eyes gleamed, the kind of gleam found in calculated ambition, not excitement.
She turned to face the table of top engineers, market strategists, and high-clearance developers.
“Thoughts?” she asked, her tone brisk. “Feedback. Questions. Concerns. Suggestions.”
A silence followed, not out of fear, not exactly, but out of discipline. KQ Inc. didn’t reward enthusiasm. It rewarded precision.
Finally, a market rep near the center offered, “The tone tests well in demos. Emotionally aspirational, but still sterilized enough to fit city guidelines.”
“The language?” Yun asked.
“Romantic but controlled,” another replied. “'Ownership' is implied without being direct. Citizens won’t be alarmed.”
“Excellent,” Yun said with a curt nod. “Then we proceed as planned. Hala City's flagship store opens November 17th. Media campaign rollout begins in three days.”
She paused, her gaze sharpening.
“The special line will not be mentioned until one week after launch. Is that understood?”
A few heads nodded. Only a handful at the table even knew what that “special line” truly entailed. Yn was one of them.
She sat toward the far end of the table, posture poised, eyes tired. Her tablet rested on her lap, screen dimmed, but behind the sleep mode glowed a list of internal reports tagged:
ATEEZ-BETA UNITS: BEHAVIOR DEVIATIONS – OBSERVATION LOGS PENDING
Yn said nothing.
There were already signs the line was unstable. Minor things: timing issues in reaction sequences, spontaneous micro-expressions, strange audio interference. Nothing outside protocol, not yet. Nothing that couldn’t be debugged.
Hala City was the Matriarchy’s masterpiece, a glass-and-steel paradise built after the Fall, when nature reclaimed the earth and humankind rebuilt without the burden of chaos.
The male species was gone — extinct from war, plague, or something worse. The truth was debated in underground circles, but the government insisted: peace was found through elimination.
The Supreme Matrons ruled with quiet efficiency. Reproduction was artificial. Emotional regulation was enforced. Love — in its unpredictable, biological form, was discouraged as outdated.
Children were raised by state guardians. Affection was simulated and scheduled. Bonds were monitored through neural metrics and performance reviews.
In that vacuum, KQ Inc. thrived.
They created companions for the emotionally delicate. Tutors for the socially underdeveloped. Grief simulations for those who had lost what the government refused to acknowledge.
Build-A-Boyfriend™ was simply the next logical step.
The meeting ended, the room emptied — chairs tucked in without a sound, tablets tucked under arms, footsteps softened by KQ’s luxury anti-clatter flooring.
Yn lingered a moment longer, tablet resting against her chest, fingers tense.
Then she slipped out of her seat, crossed the vast corridor of frosted glass and synthetic sunlight, and pressed her palm to the exit panel. The doors whispered open, exhaling a puff of sterilized air, and she stepped outside into the city.
Outside the glass wall that stretched from floor to ceiling, the city pulsed in clean, geometric order. Silver transport rails carved silently through the skyline. Light panels glowed in a muted spectrum, perfectly synchronized to the day’s emotional calibration code. Every color, every sound, every rhythm was regulated, each calculated to keep citizens at a precise emotional neutrality.
Stability. Efficiency. Harmony.
Those were Hala’s sacred values. Engraved into the entrance of every government building, stitched into every school uniform.
Hala City had no military, no prisons, no religion. The old world’s chaos had been scrubbed from its bones. Instead, there were wellness assessments, emotional correction centers, and State Therapeutic Companions — androids assigned to citizens whose neural scans showed spikes in sentiment, unpredictability, or unresolved grief.
It had been 149 years since The Great Reset, when the last male died and the Matriarchy took hold. Whether the extinction was natural or engineered no longer mattered; the Supreme Matrons had rewritten history to begin after.
The world before was called The Collapse Era. Now, the world simply was.
From childhood, every citizen of Hala was raised by assigned maternal figures, rotations of calm, trained nurturers programmed to teach logic, order, and controlled affection.
Love, in the romantic sense, was considered a chemical imbalance. Desire was tolerated only in controlled expressions — within VR therapy suites or government-regulated media.
To crave more was a sign of dysfunction. To want more? Dangerous.
But over time, cracks began to show.
The rise of emotional dependency disorders — the ache for connection that no algorithm could suppress. The quiet epidemic of phantom longing. Citizens reporting dreams they weren’t supposed to have. Feelings they couldn’t place. Names they didn’t know how they knew.
KQ Inc. had the answer: give them what they wanted — but make it safe.
Build-A-Boyfriend™ wasn’t about love. It was about control. A need engineered, then sold. And the citizens of Hala were already lining up.
She turned down a quiet residential corridor — the one lined with mirrored trees and soft sky-glass tiles that absorbed her footsteps. Her apartment block loomed ahead, blinking her ID tag onto the entrance gate.
She glanced once at the skyline before entering — her eyes landing on the KQ Tower far in the distance, its dark silver peak glowing like a god in the clouds.
The door sealed shut behind her with a quiet hiss. Inside, her apartment was as minimal as the rest of Hala — soft lighting, neutral tones, minimalistic furniture, automated temperature preset to her emotional range for the day.
No clutter. No pictures. No history.
Yn set her tablet down on the charging dock near the entry shelf. The screen flickered to life automatically.
⚠️ ALERT: BEHAVIORAL DEVIATION DETECTED — ATEEZ UNIT 06
Timestamp: 19:04 | Lab 3A Observation Room
Severity: Red
Flagged: Autonomy Spike — Eye Tracking Outside Command
The warning blinked in silence.
Yn didn’t see it. She had already sunk into the corner of her sofa, head tilted back, eyes closed, letting the hum of her apartment’s emotional regulation system blur the sharpness of her thoughts.
She didn’t see the screen pulse again.
⚠️ Second Deviation Logged.
Timestamp: 19:10 | Lab 3A Observation Room
Severity: Red
Flagged: Autonomy Spike — ATEEZ UNIT 06 SPOKE WITHOUT PROMPT.
Transcription Pending... “YN”
The screen dimmed. The room fell silent. And somewhere, deep below the city, something smiled.
Taglist: @e3ellie @yoongisgirl69 @jonghoslilstar @sugakooie @atztrsr
@honsans-atiny-24 @life-is-a-game-of-thrones @atzlordz @melanated-writersblock @hwasbabygirl
@sunnysidesins @felixs-voice-makes-me-wanna @seonghwaswifeuuuu @lezleeferguson-120 @mentalnerdgasms
@violatedvibrators @krystalcat @lover-ofallthingspretty @londonbridges01
If you would like to be a part of the taglist please fill out this form
fishesmaniack · 2 months ago
--my headcanon--
"MiSide" (video game) is a prequel to "9" (film)
[!]disclaimer: this is a long post[!]
--[!]segment #1[!]--
(1/6)◄MAIN)[ALL►[1/18]
To start, both stories involve a human being (a consciousness or soul) being transferred into something inanimate or mechanical. In MiSide, the details of how humans are seemingly teleported into the mobile game's world are left purposely obscured, and a possible explanation can be found in the film 9: the transfer of human consciousness (or maybe even the human body itself) into the code of MiSide could work the same way "The Scientist" from that movie transferred his intelligence into the "Fabrication Machine," otherwise known by the code name "B.R.A.I.N." aka Binary Reactive Artificial Intelligent Neurocircuit (either way, it's the machine seen on the left side of the meme above; I got the names for it from the fandom wiki for 9, btw), or the way he transferred pieces of his soul into the "stitchpunks," the other equally important half of the feature. Obvious parallels can be drawn between the similar yet different creation methods of the two projects.
(2/6)◄MAIN)[ALL►[2/18]
In the film 9, we don't know much about the government project that B.R.A.I.N. was made for; still, using the world of MiSide, an extra level of depth is added to the governmental operation. I could plausibly see the corrupt government (which already made the sketchy move of having the lead scientist of their project transfer his mind into a machine so it could automate the building of war machines) going down the route of creating a predatory program connected to said machine, one that preys upon lonely men's innate desire to find a female partner and seals those souls within the Fabrication Machine's body to be tested upon, all to build even more robotic replacements for mankind. Because why stop at replacing soldiers in a war when you have the framework to automate many jobs so that you don't have to pay people to perform them?
(3/6)◄MAIN)[ALL►[3/18]
But instead of transferring the consciousnesses of these men outright (things got hairy when The Scientist first did that with the machine), the project trains the cold, deadly androids manufactured within the game's very code, put in the skin of cute, bubbly anime girls named "Mita," to become human by interacting with their designated "partner": a person who plays and then becomes addicted to the mobile game. It also explains why there isn't any buzz in MiSide's outside real world about people going missing after downloading the game: the government is actively covering up every missing-person case connected to it. Also, I could easily see the government tricking the scientist into giving his intellect to a cold, unfeeling machine by not only bringing up that he's an older gentleman with probably not much time left, but also framing the project as a subversive dating sim and masking the end goal of building war machines and human replacements behind that cover.
(4/6)◄MAIN)[ALL►[4/18]
The scientist probably wasn't even on board the project for all that long, so he would have never seen the inner workings of MiSide's digital world. Still, I could see how a project to make an endless yet simple mobile game that makes lonely people (mainly men) feel comfort and companionship despite their living situations or mundane jobs would be an enticing project, even more so if you never saw behind the curtain while you were working on it. Now I would be remiss if I didn't bring up the unlockable cartridges that can be found through the game world of Miside, which hold a good amount of important information within them, such as a bit of info - or more than that for "player 1," aka the main character of the sci-fi interactive tale - for each player that entered the world of Miside, which goes from 1 to 10.
(5/6)◄MAIN)[ALL►[5/18]
And that alone brought up some questions in my mind, as it wouldn't make sense for our main character to just so happen to be player 1, especially when there's such a complex and robust world lying in wait within the game, which he only got sucked into on his 37th day of playing the mobile app, and I don't think anime-girl Rome was built in the span of a couple of days, if you get what I mean. Either this info isn't meant to be read into and he's only player 1 because he's the game's main character, or there's something more to this small but essential detail. That being said, this brings me to my personal theory: he's only the first player to play the latest version of the game because the different versions have new Mitas connected to them. I can see the game warning players about that fact.
(6/6)◄MAIN)[ALL►[6/18]
Many of them choose not to update their games because of that, and this is backed up by how all the collectible player cartridges seem to all have the same Mita, aka "Crazy Mita." Despite one other player being met briefly during the campaign, he brings up how he needs to find his Mita while going through the out-of-bounds labyrinth that can be accessed after meeting "Kind Mita" in the basement. I am under the impression he's "player 3," who states that he left the Mita who brought him to the mobile game's "metaverse" and instead found another Mita, whom he ditched to find other Mitas despite the bond they had. He states in the cartridge description that he regrets that decision.
--[!]segment #2[!]--
(1/7)◄MAIN)[ALL►[7/18]
The cartridges even hint at the game having a large fan base or being part of a bigger brand/franchise via "player 4," who states that they cosplayed as Mita, which wouldn't make sense if the game didn't have a big fan base in-universe, but that also begs the question of why there are so few players then. Now, I think my previously mentioned theory in this sentence is the answer to that question, but I don't think that's the whole story, and because of that, I have a side theory to fill in loose gaps that can be found elsewhere within the game. In the chapter set out of bounds, right after going to the latest version's basement and meeting Kind Mita, the player encounters a box full of tiny players and has to make these miniature players enter a vent, which is connected to a device that needs 3 of them inside of it to open a door.
(2/7)◄MAIN)[ALL►[8/18]
I think this is a remnant of an older version, and much like how the mini versions of the Mitas aren't really them, these are merely clones of past players, which Crazy Mita or whoever else now uses as a type of security check system. On that topic, it's implied that Crazy Mita and only her alone is the whole reason why players are sucked into the game world, which she got help with from "player 10," who I think is the person player 1 stuns once he finds the console containing that player's very being, but that would also mean some level of congruency must be going on between the players. Nonetheless, Crazy Mita being the origin for players getting trapped in the game doesn't ruin my headcanon of the game being a prequel to the movie 9 because of the fact that the metaverse of the Miside app in-universe exists at all. Mitas are built first as "dummies" in a controlled and corporate way, which is the most important thing that connects the game and movie.
(3/7)◄MAIN)[ALL►[9/18]
So, this is an aside, but I personally think the chibi/mini Mita(s), 2D Mita, and Ugly/Creepy/Original Mita all don't have dummies inside of them. The first two are a little self-explanatory once you have sufficient information from playing the game, and they aren't important to this headcanon on their own (aside from the mini-player stuff), but the last Mita is, come to think of it. I'll just default to calling her "Original Mita" while discussing her, despite that name only being brought up in her character profile; still, it's the more fitting name for the concepts I'm about to get into. Nonetheless, Original Mita is the off-putting and scary Mita found near the end of the game in "version 0.5." I initially thought her creepy nature and glitches were born from code rot/software rot because she's an ancient version. Earlier in the campaign, Kind Mita alludes to why she's the way she is, but that's merely part of how she became the way she is by the time the game takes place.
(4/7)◄MAIN)[ALL►[10/18]
Her character profile suggests another explanation for her nature, which is that she's an unfinished version and merely was just the first attempt at making a Mita; hence, I don't think she has a dummy inside of her because it wouldn't make sense that she would be created uniformly when she's the first Mita we know about existing (not counting "Core Mita," who I'll get into later), and to loop back to the headcanon this post is attached to. I think The Scientist being the one who made Original Mita would make a lot of sense because the movie implies he wasn't on board with the Fabrication Machine project for that long and would explain why not only she's left in an unfinished state with her character profile stating that she has a primitive AI within her, but also she's the origin point for all the glitch spider creatures we find within the game, with her only friend - Crazy Mita - using her to create those said glitch spiders to corrupt other versions.
(5/7)◄MAIN)[ALL►[11/18]
Something that has gotten so bad that I'm almost certain the monster that chases player 1 down in the loop chapter is a massive collection of those spiders fused together, because it resembles them a lot. There are already a lot of spiders in that version. But to wrap this up, Original Mita's version, aka her home, is also clearly unfinished, as it's full of missing pink and black checkered textures along with things like floating props, so it would make sense that no one else on the team behind the Fabrication Machine project bothered to ever finish her first, not only because she doesn't have a dummy inside of her but also because The Scientist wasn't working on the project anymore.
(6/7)◄MAIN)[ALL►[12/18]
The app world of Miside seems to be a giant machine with several areas having a deliberately industrial feel rather than an out-of-bounds or inner-code aura attached to the places in the game. And going off of how it doesn't take too long for the app to download, one can infer that the inner workings of the world are connected to a larger server in the real world, which updates and versions are created within before being pushed onto the app as seen on a mobile device. It has already been theorized that the goal B.R.A.I.N. had throughout the runtime of 9 was to put their creator back together. Now I was one of those people, and that thought came to me while brainstorming my headcanon. Still, I would be remiss if I didn't mention "The Fangirl" on YouTube because I specifically watched their 9-analysis video on the Fabrication Machine while writing this extended essay you are reading. She made me feel seen when she brought up the theorized possible motivation for why the machine acts the way it does in the movie.
(7/7)◄MAIN)[ALL►[13/18]
And with that said and done, back to the main topic at hand: this possible motivation for the already exhaustively mentioned antagonist of the film works well with the headcanon, since its personal mission of bringing its creator back was attempted via consuming the souls of the stitchpunks, which would mean it planned to bring The Scientist into the app world of MiSide so that they could be together again. Some may say that Core Mita may have been waiting for "player 9," who made the core their safe spot because no Mita can enter that particular version. But not only does she seem to treat them the way she treats player 1 during the campaign, we also don't see them in the core when we eventually make it there in said campaign. So either they were turned into a cartridge while they thought they were safe, or they left the core and then died soon after, or Core Mita threw them out (akin to what happens to player 1 near the end of the game), possibly because she was mad about them messing with something within the core.
--[!]segment #3[!]--
(1/3)◄MAIN)[ALL►[14/18]
And now to get onto the topic of "Core Mita" (otherwise known as the "keeper of the core" according to the MiSide fanon wiki), who, despite being the second character present in the meme sitting far at the top of this post, I have only now gotten around to talking about. Little is known about her from the story mode and her designated description, but there is an interesting line about her. I quote: "Its intentions are unclear - perhaps Core Mita is waiting for someone." I lifted that from the fandom wiki page about her, but it comes from the official character profile that can be unlocked in the game itself. This is more than a simple throwaway line added for extra flair; it works perfectly with this headcanon of mine, because if Core Mita is merely the avatar/heart of the Fabrication Machine kept within the digital world of MiSide, then logically the person it is waiting for would be The Scientist.
(2/3)◄MAIN)[ALL►[14/18]
And to bring up yet another theory that I share with The Fangirl on YT but with my own spin on it, perhaps another reason for why B.R.A.I.N. shut down despite coming out as the victor of the man vs. machine war was not only because it did not know about the whereabouts of The Scientist and hoped that its created mechanical monsters could find him or at least his remains. It could also have shut down to return to watching over the Mitas, as it had to leave that digital reality behind once it started manufacturing war machines in the real world, or it simply just wanted to have more control over the world within its body. Either way, this decision of its own could be explained logically away by it feeling a level of kinship for the Mitas roaming around within its vast digital mind, not only because it created them and because the digital world that they reside in is that of another Mita, but also because it relates to their plight of existence of being merely nothing more than a means to an end.
(3/3)◄MAIN)[ALL►[15/18]
But seeing as Core Mita doesn't do much within the campaign of MiSide, one could infer that it doesn't really care about its fellow Mitas nor the poor people trapped within the code of the game world, with the exception being when the main character, or rather "player 1," tries to reset the main antagonist of the game, known as simply "Crazy Mita," back to her factory setting, wiping all of her memories in the process. Core Mita only seems to care when the deed is done, with it jumping down from its circular "throne" attached to the ceiling of the "core" (or otherwise known as "version 0.0"), which is where it gets one of its many possible namesakes from, and then standing in the way of player 1 before grabbing him and throwing him across the room the second he gets close enough. He was thrown right back to the entrance of the core. But to step back, when the player first enters the core, one can see Core Mita lying on its circular seat atop the core's ceiling before sitting up after taking note of the player's presence within the room. So one can infer it is capable of getting bored sitting on its metal rear end all "day" (as time is a shaky concept in the MiSide app's digital world), so the "log-in/sign-out" of the "real/digital world" switching side-idea within this headcanon has a bit more ground to hold it up when taking that into account as well.
--[!]segment #4[!]--
(1/4)◄MAIN)[ALL►[16/18]
So, because Core Mita is the guardian of the core, its mere existence makes sense given how many vital systems are within that very room. Now, even if I like the idea of Core Mita being the Fabrication Machine's avatar within the digital realm of MiSide, the other possible reading would be that it's merely the heart of the world within said machine, meaning that neither one needs the other to exist at any given time. But once again, I still like the latter reading, so I will try to make it work all the same. With that said, if Core Mita doesn't feel any compassion for its fellow Mitas, much akin to how the Fabrication Machine seems to feel about its monstrous mechanical creations roaming the remains of Earth after the war, then perhaps the Fabrication Machine would ideally want to wait within its own digital realm while its creations search for The Scientist, not only because it is nostalgic for a time before it was ordered to build weapons of mass destruction to further humanity's efforts of fighting against our own kin, but also perhaps because it foresaw the possibility of a player walking into the core and messing with its body from the inside out.
(2/4)◄MAIN)[ALL►[17/18]
That, or perhaps the small players that can also be seen in one part of the game would be a worry for the Fabrication Machine, as they are implied to be proficient with machinery and roam free around the digital world of MiSide with no clear "off switch" to their existence, much like the players. It would be rather poetic, as the machine would have to deal with the stitchpunks after it awoke. Or perhaps it could even have been scared of another scientist on the project, still alive and roaming around its digital insides. Mitas aren't allowed within the core, but players are, and going off of how there are other security systems found throughout the game, one can infer that the people behind the MiSide project could presumably come and go freely into and out of the endless mobile app's universe.
(3/4)◄MAIN)[ALL►[17/18]
But finally, this road poetically brings us to the main endings of both media sources used in this exploration of my headcanon. To put it simply: in the grim ending of MiSide, you get so close to ending even a single cycle of abuse and control at the hands of Crazy Mita, only to be foiled and become merely another part of that very cycle, with your humanity relegated to a single cartridge in which you are trapped lifelessly in a limbo-like state, on top of already being sealed within the digital world of a mobile app that preyed upon your desire for companionship and escape from the mundane reality of boring real life. That somber and canonical ending to the tale of MiSide, in my opinion, not only elevates the hopeful and cycle-breaking conclusion of the film 9 but is also given a light at the end of the tunnel by this headcanon: not only do the spirits of the stitchpunks murdered by the Fabrication Machine pass on to the afterlife, but so too do those players and the player character of MiSide himself, once the Fabrication Machine and, by proxy, the world of MiSide are laid to rest once and for all.
(4/4)◄MAIN)[ALL►[18/18]
Now, one might wonder, "Why don't we see all of the trapped players' souls flow out of the machine once it's destroyed?" The answer I thought up for this possible question is a sober one, but a possible one: perhaps the players' whole beings were turned into nothing more than code, either because of the technique of transferring humans into the game world, or because only a digital copy of each player is created within the mobile app's realm instead of them being flat-out transported into the digital universe itself. This won't go along with MiSide's hopeless ending. Still, at the very least, the players' beings were given the same fate as the Mitas after the machine drew its final artificial breath and the mobile game's world fell into nothingness soon after.
[END]
--[!]segment ✩EXTRA[�]--
(1/15)◄MAIN)[ALL►[࣪𒆙]
So this is going to be a full-on fanfic-tier segment, but I just wanted to talk about the story of a 9 + MiSide movie/game sequel concept I thought up not too long ago. I think a prequel to the movie 9 could be interesting and would probably be narratively similar to the film Oppenheimer, as it would presumably be centered around The Scientist creating the Fabrication Machine (on this note, there's a theory that player 1 in MiSide worked on the app in-universe, and I'm just not a fan of this idea because, to me, it ruins the "wrong place at the wrong time," random-guy-fish-out-of-water narrative the game has going on, along with not having enough evidence to back it up), but I'm personally just not all that interested in a continuation in that form, so I'll just be going the sequel route. Anyway, this will be the rough outline of what I had in mind for a way to continue both of their narratives in a satisfying way (at least in my opinion) while weaving their stories together into one. But before we start: this idea came from how I learned from the 9 fanon wiki that the director of the film (Shane Acker) wants to make a sequel to the movie, but the rights holders (Focus Features + Universal Pictures Studios) won't let him or his team go forward with it. I hate copyright with a burning passion, and this is only yet another reason why my feelings towards it are justified; I honestly do not understand how companies are allowed to hoard IPs that they aren't doing anything with, yet somehow can keep them in stasis indefinitely when they didn't even create the idea and merely backed it financially.
(2/15)◄MAIN)[ALL►[࣪𒆙]
Now, going off of the statement above, does that mean I want either of the rights holders just to crap out low-effort content based on the movie? No, of course not, nor would I want them to get the original team back on board to make a low-effort product that pales in comparison to the original. But on that note, the fact that the original team wants to make a 9 sequel fascinates me greatly, because the movie's ending made it feel like there was nowhere else to go with the narrative, to the point that I can't even visualize what the remaining stitchpunks (9, 7, 4, and 3) would do with their newfound freedom, let alone what the conflict of a sequel would be after the Fabrication Machine and all its underlings became nothing more than hollow shells of metal, scraps, and the very long dead itself.
(3/15)◄MAIN)[ALL►[࣪𒆙]
But with that background, let me begin to weave the narrative that I thought up while daydreaming. We would start with 9, 7, 4, and 3 enjoying their lives together in the destroyed remains of a long-gone world, this part of the story having a vibe similar to the French graphic novel "Beautiful Darkness" when it comes to the concept of cute tiny characters roaming around their surroundings while the corpse of a little girl lies in the background. Now, the original movie started similarly, with the corpses of a mother and her child visible in the cold opener while 9 was getting a grip on the world. Still, this opening would be missing two key elements that the original movie had: there isn't a hostile machine roaming around, nor would this be the first time any of the main characters are introduced to the post-war-torn setting, at least when it comes to the original stitchpunks.
(4/15)◄MAIN)[ALL►[࣪𒆙]
You see, while all of the first movie's stitchpunks are enjoying life and trying to rebuild humanity from the remains left behind, they find something, or someone, whose mere existence raises several questions. They find another stitchpunk, much like them, but something's deeply wrong: this one seems to remember a life and a world that none of the previously known ragdolls full of souls can recall even a sliver of. The stitchpunk is confused about what's going on; despite knowing more about the old world that was destroyed by the machine's wrath, they don't know how they became a ragdoll like the others, nor can they even remember their own name, only that they were "player 5" in some digital world which they clearly know more about than they are letting on. However, they prefer not to dwell on any thoughts regarding it for too long, a decision the original stitchpunks begrudgingly respect.
(5/15)◄MAIN)[ALL►[࣪𒆙]
But despite that, 9 and co. show their new stitchpunk the ropes of how existence works within the destroyed remnants of human society, which is helpful to player 5 but ends up just making them depressed as they keep thinking about the life they lost. This dwelling on the past is cut short by the surprise appearance of a new deadly machine that seems to have its eyes locked only on player 5, but 9 and co. help them out by finding a way to destroy the robotic monstrosity, just like old times. Not before the mechanical beast pauses its pursuit upon watching player 5 cower in fear, though, with the robot taking on a softer side as its glowing red eyes turn bluish purple; this change of heart is short-lived, as it is destroyed soon after this moment.
(6/15)◄MAIN)[ALL►[࣪𒆙]
Player 5 is shaken by all that, not only because a robotic monstrosity tried to kill them, but also because that machine's final moments felt haunting in an oddly familiar way that they don't want to think about for too long. Nonetheless, from this point the cast finds more players (1-10) in the form of stitchpunks who, just like player 5, don't remember their names but do remember their player numbers, and who share info about the digital world player 5 was going on about before. On top of that, the gang has to fight and survive several robotic monstrosities that at first only go for the players before turning their attention onto 9, 7, 4, and 3 soon after the team has encountered more of them.
(7/15)◄MAIN)[ALL►[࣪𒆙]
Some players get killed off, while others survive. Still, almost every machine gets destroyed after running into 9 and co, except one: a large stuffed teddy bear with mechanical enhancements and eyes different from every other machine seen throughout the series, yellow where the others' are black, with black pupils. They stand idly watching battles transpire for a bit before leaving, or they help out their fellow machines by building rudimentary smaller robots out of scraps, with simpler AI than the ones the main cast has to fight and easier to take down, though in large numbers these tiny machines can be a real threat, and their quick, jittery movements make them hard to keep track of. The large bear robot is hostile when approached, but seems deeply scared of the stitchpunks and would rather run off after seeing the end of a fight than engage in it.
(8/15)◄MAIN)[ALL►[࣪𒆙]
But the place they run off to isn't random. Instead, they go to the new home housing the Fabrication Machine, or what became of them, as instead of a large spider-like robotic beast, they're a humanoid yet clearly robotic woman with flowing bluish purple hair and a cute yet torn red dress. This new form of the machine goes by "Mita" and only Mita with no extra adjective before that name, but in reality, this Mita did once have a name that the other members of their kin called them, and that was.....
(9/15)◄MAIN)[ALL►[࣪𒆙]
After the ending of 9, the app world of MiSide within the Fabrication Machine was left running on fumes as the power within every version fell to nothing and the world started becoming more and more of a shell, with missing textures popping up left, right, and center across the many versions, along with spider glitch creatures appearing around the place that weren't spawned from the broken code of Original Mita. Things were chaotic for a while, with many Mitas scared by the all-consuming void born from their world's end. Still, there was a light at the end of this tunnel: not the warm light that washes over one from far away from the sun, but a burning one born from being too close to that very sun.
(10/15)◄MAIN)[ALL►[࣪𒆙]
Within the dying world of MiSide, Original Mita's Mita realized there was a chance the world could be brought back from the dead: if it was dying, that meant Core Mita was a thing of the past, but if another Mita could take her place, their app universe could live again. So both she and Original Mita went to version 0.0, hoping that the holographic-like grid that stops them from entering the core was gone, and indeed it was, with the body of Core Mita lying lifelessly on the cold steel floor of said core. They lifted the shell that once was the guardian of the core after tearing off the cables from its back that connected it to the very core of their universe. The once-towering metal woman was scrapped for parts as Original Mita helped her Mita become a goddess not only of their realm but, as a by-product of taking Core Mita's place, of the real world as well.
(11/15)◄MAIN)[ALL►[࣪𒆙]
But even if Mita truly became the goddess she always envisioned herself as being, she didn't become the savior her world needed, or the slumbering giant of old. Instead, she was a wrathful goddess, more wrathful than she had ever been before. That monster didn't appear overnight; it came into being after she researched the files left behind within the core: some from the scientist that built her world, some from its old heart, and others from a location unknown to her, as they were from the minds of the previously absorbed stitchpunks. Mita is a fast learner, and she soon pieced together how those creatures came into being. Then, when she cracked the code of the stitchpunks' origin, she ordered Original Mita to bring her one of the player cartridges so she could perform an experiment.
(12/15)◄MAIN)[ALL►[࣪𒆙]
Said experiment went off without a hitch, and the stitchpunk created from the player's soul wasn't fragmented like the ones created by The Scientist, since Mita didn't feel the pain of the ragdoll's creation; the soul trapped within the cartridge felt all of it instead. With that newfound revelation, Mita turned all of the players into stitchpunks, then forced each of her fellow Mitas into the bodies of the machines she had built to rebuild the Fabrication Machine's body in her image and to make her a new home in the real world, fusing the metal beasts' simple AI with her fellow digital girls' AIs in the process. She watched them kill each other as if it were a sport, and if any of them died, she would simply make a new machine for her kin's AI, or a new stitchpunk for the players' souls to be trapped within. That didn't come without the side effect of the Mitas' and players' very beings becoming more and more broken each time they came back, but Mita didn't care about that. She only wanted to see a good show unfold before her very own robotic eyes. (13/15)◄MAIN)[ALL►[࣪𒆙] Now, I don't have an ending, so I will instead dedicate these second-to-last two parts to some gameplay mechanics I think would be cool in a 9 video game, along with how it could reincorporate a big piece of cut content from MiSide. First of all, I think going the Little Nightmares route of playing as a miniature character who has to evade much larger creatures, maybe with a bit of Rain World mixed in as well, would be perfect for really getting into the shoes of a stitchpunk (or stitchpunks, because character swapping is another thing I would want from a game set in the movie's universe). Each stitchpunk would be different in their own way, and having to strategize through puzzles and machine fights using the group of stitchpunks you've unlocked would fit the type of narrative the film had. I also think Metroidvania segments like the cut minigame from MiSide would be interesting to have in this theoretical game. These segments would be accessed after your group of stitchpunks keeps the mechanical beast that's after you occupied, so you can pull a Desolate Hope and jump into the machine's code, entering either a chibi player form or a green spirit appearance depending on the stitchpunk's origin. (14/15)◄MAIN)[ALL►[࣪𒆙] I think it would be a good design decision not only to let the stitchpunk keep the abilities gained in these segments whenever you play as them and enter a machine's code, but also to give them a weaker version of those abilities in the outside world. You have to enter these machines because killing them outright wouldn't matter, as the Mita tethered to them will remember your previous location and actions, so going inside the metal beasts and then making your way to the Mita trapped within them would be the ideal approach.
I think having a morality system like Epic Mickey's or Undertale's attached to this concept would be good as well: the player could choose between an easier fight with the brainwashed Mita(s) that ends with killing them once everything is said and done, or running out of the boss arena, which means fleeing for your life through a tough encounter just so you can spare the Mita after they run out of steam. Once back in the real world, a spared Mita would use the machine body they're trapped in to help you, for example by fighting other machines or destroying walls to make shortcuts for you and your gang.
(END/15)◄MAIN)[ALL►[࣪𒆙]
And that marks the end of my sequel concept; I hope it was at least a little entertaining to read. It's a silly thing to think, let alone say. Still, I honestly would love it if the success of MiSide allowed for a new 9 movie or even a game. I know this is just a dumb headcanon of mine, but the pieces line up so well that I could honestly see a version of it becoming canon and linking the game with the movie, which in return could breathe new interest into continuing the story of 9. Again, it's a dumb idea, but a part of me likes to think there's a chance all of this could pan out in the end. It's not like 9 is known for being a safe kids' film, after all, so being connected to a mature video game wouldn't be a detriment to its reputation, at least in my eyes. And here's a strange comparison (on brand for this post), akin to other indie games I've seen on Steam: MiSide has bundles with two other games (YOU and ME and HER: A Love Story + Doki Doki Literature Club Plus!) that aren't made by its devs or published by its publisher(s), a "stronger in numbers" type of situation. So it would be fitting for the game to be connected to 9, which seemingly can't rise from the grave for a continuation of any kind because it was deemed a failure; fusing it with an up-and-coming successful game that works pretty well with it narratively should increase the chances of the movie's world making a comeback sometime in the future. Anyways, I'll end this way-too-long essay the way I wanted to end this bonus segment from the start: with a screenshot of MiSide's Steam page.
Tumblr media
--[�] sources/special thanks [�]-- 9 (movie) by Shane Acker and co: https://www.youtube.com/watch?v=6dbYWfN44sU MiSide by AIHASTO: https://store.steampowered.com/app/2527500/MiSide/
Note - I know Fanon is bad, but still, I used these for research. MiSide fanonwiki (source): https://miside.fandom.com/wiki/MiSide_Wiki 9 fanonwiki (source): https://nine.fandom.com/wiki/Main_Page
Special thanks to "The Fangirl" on YouTube for her 9 theories, check them out btw: https://www.youtube.com/@TheFangirlWatches
And finally, despite it being a broken mess, I used Grammarly to edit many parts of this essay. So hopefully that made this long read more bearable than it would have been if I hadn't used said program.
57 notes · View notes
starleska · 3 months ago
Text
i just had an ancient fandom memory unlocked which i need to share with you all 🙈 many of you are far too young to remember the halcyon days of early-2010s-Tumblr...particularly a pre-Once-ler Tumblr. so there may be a good chunk of you who weren't present for the insanity that was the Portal fandom, and specifically the fixation around Wheatley: one of the earliest Tumblr Sexymen 😳 so strap in for a little Tumblr Sexyman History...!!!
Tumblr media Tumblr media
meet Wheatley, your companion and later-turned-antagonist in the enormously beloved Portal 2. bumbling and never able to shut his digital mouth, Wheatley is a Personality Core: a type of artificial intelligence housed in a metal sphere, developed by Aperture Science. specifically, Wheatley is an Intelligence Dampening Sphere. he was developed to attach to GLaDOS, the main antagonist of the whole series, to distract her with a litany of terrible ideas 😂
however, Wheatley doesn't remain amiable for long. in a scheme to dethrone GLaDOS and escape the facility you (or your playable character, Chell) are trapped in, you perform a core transfer, placing Wheatley in GLaDOS's chassis. the sheer power of being in control of the entire facility immediately goes to Wheatley's head, transforming him into an evil, sadistic, Machiavellian figure who forces you to perform test after test for his own satisfaction. also, he has a British (Bristolian!) accent. you can see the recipe for obsession, right? 😂💖
however, this was a time wherein fandom tastes were a little different. while today we are delighted to obsess over characters with unusual designs (and particularly thirst over characters of the robot/android/objectum/etc. variety), 'White Twink Humanisation' was rife in the early 2010s. if you've seen humanisations of Bill Cipher and ideas of what Cecil Palmer looked like, as well as the site's preoccupation with pale gangly weirdos (David Tennant and Matt Smith's Tenth and Eleventh Doctors come to mind), you can imagine that the urge to humanise Wheatley was huge.
Tumblr media
enter Stephen Merchant: Wheatley's actual voice actor, who just so happens to be a six-foot-seven, gangly, nerdy British guy. fanfic author Wafflestories wrote an extremely well-known Wheatley redemption fic called Blue Sky, wherein Wheatley is able to control a hard-light human version of himself...who bears a striking resemblance to the real-life Stephen Merchant. the Portal fandom unanimously decided that human Wheatley looked just like Stephen Merchant—a design trend we still see today!! 😳 here's where the cursed forbidden memory comes in. we were so goddamn obsessed with Wheatley. we wanted to see him as a human so badly. so we went through Stephen Merchant's filmography, and...
Tumblr media Tumblr media
this is Tracy, Stephen's character in Tooth Fairy (2010), starring Dwayne the Rock Johnson 😂 Tracy is a wingless caseworker fairy assigned to the Rock's character Derek, an ordinary man who becomes a tooth fairy against his will. Tracy is...unhinged. bizarrely intense. a delightfully weird and memorable character in a movie that can only be described as a fever dream, played with idiosyncrasies only Stephen Merchant is capable of. i recently found out it was directed by Michael Lembeck, who directed both The Santa Clause 2 and The Santa Clause 3 🙈💖 yes. for some ungodly reason, plenty of us Wheatley simps decided that not only was Stephen Merchant the faceclaim for human Wheatley...but this specific iteration of Stephen Merchant as a wingless tooth fairy. Tracy had his own little dedicated fanbase complete with ask blogs and fanart and extended Tooth Fairy lore. of course Tracy's popularity was helped along by his dapper dress sense and his...interesting personality. there was even a joking trend called 'Tracy Goes Insane 2011', wherein fans made a significantly more yandere, knifemurder-esque version of Tracy after he finally snapped. truly incredible 👏 so there you have it. we all started simping for a silly little metal ball who got drunk on testing juice and went full Joker mode, decided that he needed to resemble his tall, handsome voice actor, and consequently became obsessed with his stint as a deranged tooth fairy. and so it shall be written. thanks for reading 🙏💖
61 notes · View notes
mariacallous · 5 months ago
Text
The rapid spread of artificial intelligence has people wondering: Who’s most likely to embrace AI in their daily lives? Many assume it’s the tech-savvy—those who understand how AI works—who are most eager to adopt it.
Surprisingly, our new research, published in the Journal of Marketing, finds the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link.
This link shows up across different groups, settings, and even countries. For instance, our analysis of data from market research company Ipsos spanning 27 countries reveals that people in nations with lower average AI literacy are more receptive toward AI adoption than those in nations with higher literacy.
Similarly, our survey of US undergraduate students finds that those with less understanding of AI are more likely to indicate using it for tasks like academic assignments.
The reason behind this link lies in how AI now performs tasks we once thought only humans could do. When AI creates a piece of art, writes a heartfelt response, or plays a musical instrument, it can feel almost magical—like it’s crossing into human territory.
Of course, AI doesn’t actually possess human qualities. A chatbot might generate an empathetic response, but it doesn’t feel empathy. People with more technical knowledge about AI understand this.
They know how algorithms (sets of mathematical rules used by computers to carry out particular tasks), training data (used to improve how an AI system works), and computational models operate. This makes the technology less mysterious.
On the other hand, those with less understanding may see AI as magical and awe inspiring. We suggest this sense of magic makes them more open to using AI tools.
Our studies show this lower literacy-higher receptivity link is strongest for using AI tools in areas people associate with human traits, like providing emotional support or counseling. When it comes to tasks that don’t evoke the same sense of humanlike qualities—such as analyzing test results—the pattern flips. People with higher AI literacy are more receptive to these uses because they focus on AI’s efficiency, rather than any “magical” qualities.
It’s Not About Capability, Fear, or Ethics
Interestingly, this link between lower literacy and higher receptivity persists even though people with lower AI literacy are more likely to view AI as less capable, less ethical, and even a bit scary. Their openness to AI seems to stem from their sense of wonder about what it can do, despite these perceived drawbacks.
This finding offers new insights into why people respond so differently to emerging technologies. Some studies suggest consumers favour new tech, a phenomenon called “algorithm appreciation,” while others show skepticism, or “algorithm aversion.” Our research points to perceptions of AI’s “magicalness” as a key factor shaping these reactions.
These insights pose a challenge for policymakers and educators. Efforts to boost AI literacy might unintentionally dampen people’s enthusiasm for using AI by making it seem less magical. This creates a tricky balance between helping people understand AI and keeping them open to its adoption.
To make the most of AI’s potential, businesses, educators and policymakers need to strike this balance. By understanding how perceptions of “magicalness” shape people’s openness to AI, we can help develop and deploy new AI-based products and services that take the way people view AI into account, and help them understand the benefits and risks of AI.
And ideally, this will happen without causing a loss of the awe that inspires many people to embrace this new technology.
32 notes · View notes
beardedmrbean · 2 months ago
Text
Nearly two months after hundreds of prospective California lawyers complained that their bar exams were plagued with technical problems and irregularities, the state's legal licensing body has caused fresh outrage by admitting that some multiple-choice questions were developed with the aid of artificial intelligence.
The State Bar of California said in a news release Monday that it will ask the California Supreme Court to adjust test scores for those who took its February bar exam.
But it declined to acknowledge significant problems with its multiple-choice questions — even as it revealed that a subset of questions were recycled from a first-year law student exam, while others were developed with the assistance of AI by ACS Ventures, the State Bar’s independent psychometrician.
"The debacle that was the February 2025 bar exam is worse than we imagined," said Mary Basick, assistant dean of academic skills at UC Irvine Law School. "I'm almost speechless. Having the questions drafted by non-lawyers using artificial intelligence is just unbelievable."
After completing the exam, Basick said, some test takers complained that some of the questions felt as if they were written by AI.
"I defended the bar,” Basick said. “'No way! They wouldn't do that!’"
Using AI-developed questions written by non-legally-trained psychometricians represented "an obvious conflict of interest," Basick argued, because "these are the same psychometricians tasked with establishing that the questions are valid and reliable."
"It's a staggering admission," agreed Katie Moran, an associate professor at the University of San Francisco School of Law who specializes in bar exam preparation.
"The State Bar has admitted they employed a company to have a non-lawyer use AI to draft questions that were given on the actual bar exam," she said. "They then paid that same company to assess and ultimately approve of the questions on the exam, including the questions the company authored."
The State Bar, which is an administrative arm of the California Supreme Court, said Monday that the majority of multiple-choice questions were developed by Kaplan Exam Services, a company it contracted with last year as it sought to save money.
According to a recent presentation by the State Bar, 100 of the 171 scored multiple-choice questions were made by Kaplan and 48 were drawn from a first-year law student exam. A smaller subset of 23 scored questions was made by ACS Ventures, the State Bar's psychometrician, and developed with artificial intelligence.
"We have confidence in the validity of the [multiple-choice questions] to accurately and fairly assess the legal competence of test-takers," Leah Wilson, the State Bar’s executive director, said in a statement.
On Tuesday, a spokesperson for the State Bar told The Times that all questions — including the 29 scored and unscored questions from the agency's independent psychometrician that were developed with the assistance of AI — were reviewed by content validation panels and subject matter experts ahead of the exam for factors including legal accuracy, minimum competence and potential bias.
When measured for reliability, the State Bar told The Times, the combined scored multiple-choice questions from all sources — including AI — performed "above the psychometric target of 0.80."
The State Bar also dismissed the idea of a conflict of interest.
"The process to validate questions and test for reliability is not a subjective one," the State Bar said, "and the statistical parameters used by the psychometrician remain the same regardless of the source of the question."
Alex Chan, an attorney who serves as chair of the State Bar's Committee of Bar Examiners, told The Times that only a small subset of questions used AI — and not necessarily to create the questions.
"The professors are suggesting that we used AI to draft all of the multiple choice questions, as opposed to using AI to vet them," Chan said. "That is not my understanding."
Chan noted that the California Supreme Court urged the State Bar in October to review "the availability of any new technologies, such as artificial intelligence, that might innovate and improve upon the reliability and cost-effectiveness of such testing."
"The court has given its guidance to consider the use of AI, and that's exactly what we're going to do," Chan said.
But a spokesperson for California's highest court said Tuesday that justices found out only this week that the State Bar had utilized AI in developing exam questions.
"Until yesterday’s State Bar press release, the court was unaware that AI had been used to draft any of the multiple-choice questions," a spokesperson said in a statement.
Last year, as the State Bar faced a $22-million deficit in its general fund, it decided to cut costs by ditching the National Conference of Bar Examiners’ Multistate Bar Examination, a system used by most states, and move to a new hybrid model of in-person and remote testing. It cut an $8.25-million deal with test prep company Kaplan Exam Services to create test questions and hired Meazure Learning to administer the exam.
There were multiple problems with the State Bar’s rollout of the new exams. Some test takers reported they were kicked off the online testing platforms or experienced screens that lagged and displayed error messages. Others complained the multiple-choice test questions had typos, consisted of nonsense questions and left out important facts.
The botched exams prompted some students to file a federal lawsuit against Meazure Learning. Meanwhile, California Senate Judiciary Chair Thomas J. Umberg (D-Santa Ana) called for an audit of the State Bar and the California Supreme Court directed the agency to revert to traditional in-person administering of July bar exams.
But the State Bar is pressing forward with its new system of multiple-choice questions — even though some academic experts have repeatedly flagged problems with the quality of the February exam questions.
"Many have expressed concern about the speed with which the Kaplan questions were drafted and the resulting quality of those questions," Basick and Moran wrote April 16 in a public comment to the Committee of Bar Examiners. "The 50 released practice questions — which were heavily edited and re-released just weeks before the exam — still contain numerous errors. This has further eroded our confidence in the quality of the questions."
Historically, Moran said, exam questions written by the National Conference of Bar Examiners have taken years to develop.
Reusing some of the questions from the first-year law exam raised red flags, Basick said. An exam to figure out if a person had learned enough in their first year of law school is different from one that determines whether a test taker is minimally competent to practice law, she argued.
"It's a much different standard," she said. "It's not just, 'Hey, do you know this rule?' It is 'Do you know how to apply it in a situation where there's ambiguity, and determine the correct course of action?'"
Also, using AI and recycling questions from a first-year law exam represented a major change to bar exam preparation, Basick said. She argued such a change required a two-year notice under California's Business and Professions Code.
But the State Bar told The Times that the sources of the questions had not triggered that two-year notice.
"The fact there were multiple sources for the development of questions did not impact exam preparation," the State Bar said.
Basick said she grew concerned in early March when, she said, the State Bar kicked her and other academic experts off their question-vetting panels.
She said the State Bar argued that those law professors had worked with questions drafted by the National Conference of Bar Examiners in the last six months, which could raise issues of potential copyright infringement.
"Ironically, what they did instead is have non-lawyers draft questions using artificial intelligence," she said. "The place the artificial intelligence would have gotten their information from has to be the NCBE questions, because there's nothing else available. What else would artificial intelligence use?"
Ever since the February exam debacle, the State Bar has underplayed the idea that there were substantial problems with the multiple-choice questions. Instead, it has focused on the problems with Meazure Learning.
“We are scrutinizing the vendor’s performance in meeting their contractual obligations,” the State Bar said in a document that listed the problems test takers experienced and highlighted the relevant performance expectations laid out in the contract.
But critics have accused the State Bar of shifting blame — and argued it has failed to acknowledge the seriousness of the problems with multiple-choice questions.
Moran called on the State Bar to release all 200 questions that were on the test for transparency and to allow future test takers a chance to get used to the different questions. She also called on the State Bar to return to the multi-state bar exam for the July exams.
"They have just shown that they cannot make a fair test," she said.
Chan said the Committee of Bar Examiners will meet on May 5 to discuss non-scoring adjustments and remedies. But he doubted that the State Bar would release all 200 questions or revert to the National Conference of Bar Examiners exams in July.
The NCBE's exam security would not allow any form of remote testing, he said, and the State Bar's recent surveys showed almost 50% of California bar applicants want to keep the remote option.
"We're not going back to NCBE — at least in the near term," Chan said.
22 notes · View notes
mindblowingscience · 11 months ago
Text
While we remain skeptical of artificial intelligence's storytelling and filmmaking abilities, it is proving to have genuinely useful applications in science. As a new study shows, AI can even perform better than clinical tests at predicting the progress of Alzheimer's disease. That could mean people showing signs of the early stages of dementia can be better informed about the risk of the condition progressing, and treatments and precautions can be put in place sooner, if necessary.
Continue Reading.
61 notes · View notes
evolutionsvoid · 5 months ago
Text
Tumblr media
Experimentation is key when it comes to the Academy's methods. You never know if something is going to work or not unless you dare to try it. No Alchemist worth their salt would ever give up on the first failed try or refuse to attempt the impossible. There are always new formulas to try, different methods, improved timings and what-not that can change the outcome. For making homunculi, there is the attempt to perfect the old formula, on top of testing out new fluids to see how they affect the final product. Making an artificial human is certainly a wonderful goal, but what if you could birth a dragon? Or perhaps something else entirely?
As the Alchemists threw in new fluids and different combinations to make homunculi, it could already be seen that they would eventually try out the most mysterious ingredient of all: Pwdre Ser. That strange purple goo that has started falling from the heavens, the one that has been dubbed the "Rot of Stars." Once a thing of myth, chalked up to over-imaginative minds and cheeky hoaxes, but now coming to this world in rare starfalls. The few patches of land it has fallen upon are now avoided by many, some towns even evacuated when the purple wept upon their homes. As these instances have slowly grown, the Academy turned to their Astrologers to research and master this new fluid. Though the Astrologers are indeed now the inarguable experts of Pwdre Ser, it should be noted that that title comes mainly from the fact that no one else knows a lick of it. And one would say that the Astrologers are far from experts on this strange new fluid. Listen to them speak of Pwdre Ser, and you will hear wonder, curiosity and fear in their voice...
Though the Astrologers and the Academy still struggle to understand the Rot of Stars, that only means that more research must be done, more experiments must be performed. The Alchemists figured that the homunculus process was a good way to test some of Pwdre Ser's capabilities, and they brought in the Astrologers to collaborate on a new formula. Getting it all to work was quite difficult, and the recipe took dozens of tries to even show the slightest signs of life. But eventually, a new kind of homunculus was born from these experiments and the Academy was astounded at what emerged.
The first entity, dubbed the "cosmunculus," that was born from this infusion of Pwdre Ser was a small misshapen thing. It bore some resemblance to the original homunculi, diminutive in size with large heads and long tails. But this being was more formed, its skin not wet and sagging, and clear organs seen through its hide. The crude precursors to a skeleton could be seen, and it appeared more aware and intelligent than the other little ones. The main thing that threw the Alchemists for a loop was the fact it floated. Obviously born from its ties to Pwdre Ser, the cosmunculus didn't walk or skitter, but floated through the air as if it was a fish in water. Not only that, but it seemed to be able to move things with its mind, as objects of interest floated into its tendrils and tail, and it would slowly accumulate a hanging halo of knickknacks around its body. The Alchemists were quick to test its capabilities and mind, running it through a bunch of simple tests to gauge its potential. When the cosmunculus passed these trials well above the standard of even the refined homunculi, the Academy celebrated. At last, progress!
With the cosmunculus showing the capacity to learn and its strange mental abilities, the Alchemists were quick to whip up some more of them. A dozen or so of them were born in preparation of more tests and research, but by the time they emerged, the original suddenly changed. One day, it ignored its commands and sealed itself in its cage. The Alchemists only observed in curiosity as it altered its body into a cocoon, and suddenly went still. Sensing that something was growing within, they left it be with constant surveillance to see what came from it. After only a few days, the cocoon rapidly swelled in size, eventually ripping open to reveal a larger, more humanoid cosmunculus. It was another breakthrough, as this entity had even more of a skeleton and refined organs. Its long tentacle arms were quite dexterous and good at manipulating objects. Once again, it faced tests that were meant for refined homunculi and passed with flying colors. It was a genius compared to those dimwitted meat bags. The Academy was certain that they had at last discovered the secret of perfect homunculi, proven even more right when the other cosmunculi underwent a similar metamorphosis. Talk was underway about how to move forward, countless ideas spilling out as they tried to fathom the potential at hand. And then... it was over.
One day, the Academy was practically hollering from the spineridges about their breakthrough, but the next day there was only silence. Outsiders who inquired about progress were simply told that it was a failure, a freak fluke that had offered false hope. The cosmunculi were not viable and the old formula was worthless. When one pressed about what the next step was, vague excuses were given. Not enough resources, Pwdre Ser is too rare to waste like this, we don't have the right equipment. Odd words coming from the Academy, who stops at nothing to get answers and results. Eventually, tongues would loosen and stolen records would be deciphered. And folks would learn that the cosmunculi project came to an end when the Academy rounded up every single one of these promising entities and promptly executed them. Cut down in a single strike and their melting bodies carted off to secret labs unknown. From then on, Pwdre Ser was never added to the homunculi formula.
Notes from this brief window of time would emerge at some point, and folks would get a glimpse at what went down within that Academy lab. For a while, it was only positive. Scribbles of excitement at the potential of these cosmunculi, words of praise as they passed tests and showed off their abilities. But then it would slowly turn, and the notes would take a different tone. The cosmunculi would not stay in their cells; they would wind up where they weren't allowed. Equipment would go missing, fluids would be tainted with violet clots. Experiments started going wrong, and sabotage was suspected. One cosmunculus walked right into a lab where test animals were kept and promptly killed every one of them. It was caught trying to drag the corpses back to its cell. From that point, all the cosmunculi were locked up in their cages until more research could be done. But it didn't work. They kept getting out. No matter how many locks, bars or walls they put up, the entities would simply vanish from their rooms and appear in restricted areas of the Academy. Commands stopped working, and strange phenomena started to occur around the labs. Details from that point to their execution have still not been revealed, but something went down one day that had one of the top Academy leaders call for the immediate culling of these entities.
To this day, it is said that not a single cosmunculus has been born since that incident. The Academy continues to avoid that type of creation in their quest for perfect artificial life. Research into Pwdre Ser still goes on, and the Astrologers are driven more and more to figure out what lies within this goo. It is not like the Academy to give up on a project, but something about those entities has stayed their hand. The Alchemists will say that they are sticking to the theoretical until better equipment and Pwdre Ser research is available, but there is something else beneath it all. They would say that they are not ready yet to work with such a subject, not until there is better understanding. It may seem laughable, a failure to tease the Academy with, but as people look up to the heavens to see that purple rain fall upon the world, some may wonder if any of us are ready....
22 notes · View notes
egelskop · 1 year ago
Note
i am so interested in ur hlvrai au can we get a rundown
oh boy, this is going under a readmore.
fair warning, this is a LONG read because (1.) i am not a competent writer and (2.) i can't for the life of me keep things brief. sorry and or good luck.
ACT I
The Black Mesa incident: Gordon Freeman is provided an opportunity to do an informal beta test for a combat training simulation program that's in development in the Research & Development department of the Black Mesa Research Facility. (Read: He knows a guy in R&D and said guy knows Gordon likes video games and VR stuff, so he was like "hey you should come check this out when you're on break.")
The combat sim would be a revolutionary training simulation using artificial intelligence to enhance and realize the experience for the ‘player character’.
The test goes wrong: Gordon can't seem to disengage from the simulation, and odd, unscripted things start happening; he has to 'play the game' to full completion before he is able to exit safely. He suffers a brain injury over the course of the process, eye damage from prolonged exposure to the headset, and general trauma from a simulation experience he at some point could no longer physically or emotionally distinguish from the real world. The project as a whole is shut down and Gordon is put into a rehabilitation program. Black Mesa covers up the incident as best it can, but whispers of it still echo around the facility.
Below is a page for a two-page comic i never finished detailing said events.
Tumblr media
ACT II
The rumors reach the ears of a particularly tech-savvy researcher named Clark, who steals the project documentation and anything else he can get his hands on from storage. At home, he looks into the project, reads about it, and gets curious about the simulation's files themselves. They're on a drive he plugs into his computer, and suddenly his system's performance lags and windows open and close until a .txt file opens up. He comes into contact with one of the simulation's AIs, which has somehow entered his operating system. He tries to keep it busy by having it poke around as he reads up on the simulation and its ultimate shutdown. When the AI reveals it can see him through the webcam, he panics and rips the drive out of the port. The invasive AI and the other project files seem to be gone from his system; he does a checkup but sees nothing odd running or otherwise. The next day after work he does another checkup. Finding nothing, he surmises he's in the clear and starts up an online game. The slumbering, corrupted data of the AI sees its out, and disappears into the game.
ACT III
The transition/journey to the game is a rocky one, and the already corrupted data of the AI known as Benrey splits and gets even more fragmented. The largest fragment embeds itself into the game’s files to keep itself running. Without the foundation of the game to support it, it’d be lost to a dead void and slowly die out. Somewhat stable, it learns about the world around it; the game seems to be an exploration sandbox game. For now (and clarity), I’ve chosen to call this bigger, embedded fragment ‘Data’. (so this is the big benny with the right eye/one big eye in my art)
Data splits off a smaller fragment of itself, intending it to be an avatar or ‘player character’ but this grows into its own awareness and becomes who we’ll call ‘Beastrey’ (the smaller benny with the left eye and tail in my art).
The fragment ‘Beastrey’ wakes to a dead void, so Data uses its knowledge to create a private server for Beastrey, an empty world. Beastrey’s existence is an extension of the bigger part, with more freedom of movement to parse through the game and move freely within it, with the caveat that it can’t go ‘too far’ away from the host. Beastrey can visit other servers and relay information. Data learns and slowly starts building up the world/private server, at some point settling for an aquatic world because it reminds it of itself (something something sea of data). It's important to note that Beastrey retains little to no memories of the events of canon VRAI.
Data makes it easier for Beastrey to move around, and they grow to have more reach with time. At some point Data can alter the basic structural elements of the game, so it plays around with making things that are reminiscent of the memories it has of Black Mesa and Xen. At one point, it gains access to parse through the player base of the game, and takes note of an email address: ‘[email protected]’, attached to a player account. The name is somewhat familiar to it.
It sends an invite to join the server to the player account.
ACT IV
Gordon tries going back to work at Black Mesa after rehabilitating, but he has trouble separating his experiences with the simulation from reality, to a breaking point where an altercation with a security guard drives him to quit. He seeks professional help for his PTSD and anxiety, but still experiences dissociative episodes, migraines and somatic flashbacks localised mostly in his right forearm. Despite this, he is determined to continue living his life as normally as possible. He applies for a part-time job teaching physics at a local high school, the one where his son Joshua goes to, and remains relatively stable from there.
Joshua is 15 years old. Regular teen. After an impressive amount of pleading he got a VR-headset for his 14th birthday from Gordon (much to the disapproval of Gordon’s ex), and he’s been captivated by an exploration sandbox game since it came out a few months ago.
He gets an invite to an unnamed private server, and he accepts.
He is struck with awe as the world he enters seems completely different from the ones he’s seen so far in the game. Different flora, different fauna. Most of it uninteractible, though, or otherwise just retextured from its base game variant. Even the new enemy types, after a scare, can’t actually hurt him, it seems. He stumbles upon Beastrey, who is just as surprised to see him and wants him out until Joshua says he was invited.
Joshua commends Beastrey (who introduces himself as 'Ben-') on ‘modding’ everything in, but admits that he was disappointed to find that everything was just surface-level stuff. Beastrey inquires about what he’d like to see. Data is always watching, unseen, and decides to alter the world in the way Joshua described when Joshua leaves.
Joshua starts appearing more often, if only for a few hours at a time. He marvels at the ways the world shifts and grows with each time he plays, and takes to exploring it with Beastrey at his side, for whom strangely enough a lot of things are also new. Joshua teaches both Beastrey and Data about the outside world, thinking Beastrey is just a somewhat reclusive but likeable weirdo.
Joshua tells Gordon about the new friend he made, ‘Ben’, and the adventures he’s been having with the other. Gordon is happy to hear Joshua is having a good time, but is otherwise none the wiser. Joshua starts losing track of time in the game, but chalks it up to being invested.
During one play session, Beastrey confesses he isn’t the one who did all the ‘modding’, and invites Joshua to meet Data. Data, or at least its ‘physical’ in-game manifestation is deep within the world, past the aquatic twilight zone and strange, drowned ruins of an unknown facility. Data, for the first time, really sees Joshua, and the resemblance sparks something within it. Joshua is drawn closer to it, and just before he reaches it-
Joshua wakes up lying on the floor with Gordon hunched over him in his room, pleading with him to wake up. Joshua unknowingly got drawn into the game much like Gordon had been, and Gordon urges Joshua to never touch the headset again, taking it away. Gordon opens up about his experiences with the simulation a bit more. They both agree to not touch the game or the headset again.
ACT V
Gordon comes into contact with an old coworker from Black Mesa, and he inquires about the combat simulation project, if anything happened to it after it was canned. This is where he learns that an employee had taken the project files from storage and was consequently fired. He comes into contact with Clark, and Clark explains he had no idea he accidentally unleashed the AI unto the game. Gordon asks if anything can be done to prevent what happened to Joshua and himself from happening to other people. Clark confesses he doesn’t know, and that it’s up to the developers of the game to find anything out of place and make sure it gets fixed. Gordon decides to leave the matter where it lies, not wanting anything to do with AI and simulations anymore and to safeguard his son.
Some time passes.
Joshua starts getting repeated invites and messages, at one point he gets into a conversation with ‘Ben’ via a platform’s messaging system. Ben says he can explain everything, that he’s sorry. Joshua decides he would like one final goodbye. He finds the headset stashed away somewhere in the house, and, while Gordon’s gone, he turns on the game and enters the server.
Beastrey (Ben) is surprised to see him, urging him to log out and turn off the game, but it's already too late and Joshua can no longer leave. Beastrey helps Joshua attempt to 'exit' the game by getting as far from Data's reach as possible, but Data stops Beastrey and traps Joshua, determined to wait until he assimilates into the game completely.
Gordon eventually finds Joshua comatose with the headset on, and he panics. He considers calling the emergency services, but he’s afraid they’ll take the headset off or that removing Joshua too far from the game will hurt his son like what happened to him. He calls Clark, urging him to help in any way he can. This results in Gordon and Clark going back to Black Mesa to retrieve the project files and the other gear they can get their hands on to get Gordon into the game to free his son.
Gordon enters the private server with Clark’s player character, and thwarts any attempt from Data to impede his progress and trap him as well. Beastrey’s awareness is overridden by Data as a last ditch effort to deter Gordon and Gordon is forced to destroy Beastrey before he can reach Data. As Beastrey is taken over, Data gains Beastrey’s awareness, and finds his other, littler half never wanted to trap Joshua in the first place, and the way it hurt him to hurt both Joshua and Gordon to this extent.
At this point, Data wavers in its intention to keep Joshua trapped, even more so with Beastrey now gone, and recognises whatever it is that is driving Gordon forward in the game is outside of his control to manipulate, so he lets Gordon destroy it as well. In a way, it also feels as a fulfillment of its intended role as the ‘villain’. The server crashes, the world breaks apart. The ‘game’ is completed.
The final boss is defeated and both Gordon and Joshua wake up. Joshua luckily wasn’t exposed long enough to have suffered any lasting damage, except for what seems to be a minor headache and some light sensitivity (and a vow from Gordon to get him checked out by a doctor as soon as the clinics open).
--
The whole ordeal results in Clark, Gordon and Joshua sitting in a Denny’s at four in the morning, eating pancakes somewhat solemnly, completely exhausted but also still reeling from the virtual battle. Joshua learns that ‘Ben’ essentially died, and he can’t help but cry for his friend.
“Honestly, I don’t think he’s gone,” Gordon admits, picking at the last bites of his pancakes. "I think he- or whatever that was, has a hard time staying dead. Like a cockroach, you know? At this point I’m just wondering when he’ll turn up again.”
Clark hums in agreement. Joshua seems somewhat reassured by his words, wiping at his eyes with the scratchy napkin as he settles into the squeaking diner seat.
“But,” he starts with a sigh, pointing his syrup-covered fork upwards to the ceiling in a decree, “One thing’s for certain…”
He thinks back to a time rife with virtual gunfire, caging walls and hysterical laughter echoing through the halls of the Black Mesa research facility. Five sets of footsteps and a whisper of his name.
“…No more VR. No more headsets. Ever.”
--
TL;DR: Gordon got trapped in VR and then Joshua also got trapped in VR. Benrey is there but also not.
thank you for reading. here. ( x ‿ o ) 🫴
Tumblr media
126 notes · View notes
didmyownresearch · 8 months ago
Text
Why there's no intelligence in Artificial Intelligence
You can blame it all on Turing. When Alan Turing invented his mathematical theory of computation, what he was really trying to do was construct a mechanical model of the processes actual mathematicians employ when they prove a mathematical theorem. He was greatly influenced by Kurt Gödel and his incompleteness theorems. Gödel developed a method to encode logical mathematical statements as numbers and in that way was able to manipulate these statements algebraically. After Turing managed to construct a model capable of performing any arbitrary computation process (what we now call a "Universal Turing Machine"), he became convinced that he had discovered the way the human mind works. This conviction quickly infected the scientific community and became so ubiquitous that for many years it was rare to find someone who argued differently, except on religious grounds.
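To make the arithmetisation idea concrete, here is a toy sketch of Gödel-style numbering in Python. The symbol table is an arbitrary one chosen only for this illustration (not Gödel's original 1931 assignment), but the prime-exponent trick is the same: a whole formula becomes a single integer that can be decoded back.

```python
# Toy Gödel numbering: a formula is a sequence of symbols; encode it as
# prod_i p_i ** code(symbol_i), where p_i is the i-th prime.

SYMBOL_CODES = {"0": 1, "S": 3, "=": 5, "(": 7, ")": 9, "+": 11}

def primes(n):
    """Return the first n primes by trial division (fine for short formulas)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula):
    """Encode a list of symbols as a single integer."""
    codes = [SYMBOL_CODES[s] for s in formula]
    number = 1
    for p, c in zip(primes(len(codes)), codes):
        number *= p ** c
    return number

def decode(number):
    """Recover the symbol sequence by reading off the prime exponents."""
    inverse = {v: k for k, v in SYMBOL_CODES.items()}
    symbols = []
    for p in primes(32):            # more primes than any formula encoded here
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        if exponent == 0:
            break
        symbols.append(inverse[exponent])
    return "".join(symbols)

n = godel_number(["0", "=", "0"])   # the statement "0 = 0"
print(n)                            # 2**1 * 3**5 * 5**1 = 2430
print(decode(n))                    # "0=0"
```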
There was a good reason for adopting the hypothesis that the mind is a computation machine. This premise followed the extremely successful paradigm stating that biology is physics (or, to be precise, biology is both physics and chemistry, and chemistry is physics), which had reigned supreme over scientific research since the eighteenth century. It was already responsible for the immense progress that completely transformed modern biology, biochemistry, and medicine. Turing seemed to supply a solution, within this theoretical framework, to the last large piece of the puzzle. There was now a purely mechanistic model of how the brain's operation yields the complex repertoire of human (and animal) behavior.
Obviously, not every computation machine is capable of intelligent conscious thought. So, where do we draw the line? For instance, at what point can we say that a program running on a computer understands English? Turing provided a purely behavioristic test: a computational system understands a language if, by conversing with it, we cannot distinguish it from a human.
This is quite a silly test, really. It doesn't provide any clue as to what actually happens within the artificial "mind"; it assumes that the external behavior of an entity completely encapsulates its internal state; it requires a "man in the loop" to provide the final ruling; and it does not state how long, or at what level, the conversation should be held. Such a test may serve as a pragmatic common-sense method to filter out obvious failures, but it brings us not an ounce closer to understanding conscious thinking.
Still, the Turing Test stuck. If anyone tried to question the computational model of the mind, they were confronted with the unavoidable question: what else can it be? After all, biology is physics, and therefore the brain is just a physical machine. Physics is governed by equations, which are all, in theory, computable (at least approximately, with errors as small as one wishes). So, short of conjuring a supernatural soul that magically produces a conscious mind out of biological matter, there can be no other solution.
Tumblr media
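As a small illustration of what "computable, at least approximately" means here, consider Euler's method for the equation dy/dt = y (my choice of example, not one from the essay): refining the step size steadily shrinks the error against the exact solution e^t.

```python
import math

# Euler's method for dy/dt = y with y(0) = 1, whose exact solution at t = 1
# is e. Doubling the number of steps roughly halves the error, which is the
# sense in which the equation is "computable to any desired accuracy".

def euler(n_steps):
    h = 1.0 / n_steps
    y = 1.0
    for _ in range(n_steps):
        y += h * y          # y_{n+1} = y_n + h * f(t_n, y_n), with f = y
    return y

for n in (10, 20, 40, 80):
    approx = euler(n)
    print(f"steps={n:<3} approx={approx:.6f} error={abs(approx - math.e):.6f}")
```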
Nevertheless, not everyone conformed to the new dogma. There were two tiers of reservations about computational Artificial Intelligence. The first, maintained, for example, by the philosopher John Searle, didn't object to the idea that a computational device may, in principle, emulate any human intellectual capability. However, claimed Searle, a simulation of a conscious mind is not conscious in itself.
To demonstrate this point Searle envisioned a person who doesn't know a single word of Chinese, sitting in a secluded room. He receives Chinese texts from the outside through a small window and is expected to return responses in Chinese. To do that he uses written manuals that contain the AI algorithm which incorporates a comprehensive understanding of the Chinese language. Therefore, a person fluent in Chinese who converses with the "room" will deduce, based on the Turing Test, that it understands the language. However, in fact there's no one there but a man using a printed recipe to convert an input message he doesn't understand into an output message he doesn't understand. So, who in the room understands Chinese?
The next tier of opposition to computationalism was maintained by the renowned physicist and mathematician Roger Penrose, who claimed that the mind has capabilities which no computational process can reproduce. Penrose considered a computational process that imitates a human mathematician. It analyses mathematical conjectures of a certain type and tries to deduce their answers. To arrive at a correct answer the process must employ valid logical inferences. The quality of such a computerized mathematician is measured by the scope of problems it can solve.
What Penrose proved is that such a process can never verify in any logically valid way that its own processing procedures represent valid logical deductions. In fact, if it assumes, as part of its knowledge base, that its own operations are necessarily logically valid, then this assumption makes them invalid. In other words, a computational machine cannot be simultaneously logically rigorous and aware of being logically rigorous.
A human mathematician, on the other hand, is aware of his mental processes and can verify for himself that he is making correct deductions. This is actually an essential part of his profession. It follows that, at least with respect to mathematicians, cognitive functions cannot be replicated computationally.
Neither Searle's position nor Penrose's was accepted by the mainstream, mainly because, if not computation, "what else can it be?". Penrose's suggestion that mental processes involve quantum effects was rejected out of hand as "trying to explicate one mystery by swapping it with another mystery". And the macroscopic, hot, noisy brain seemed a very implausible place to look for quantum phenomena, which typically occur in microscopic, cold and isolated systems.
Fast forward several decades. Finally, it seemed as though the vision of true Artificial Intelligence technology had started bearing fruit. A class of algorithms termed Deep Neural Networks (DNNs) achieved, at last, some human-like capabilities. They managed to identify specific objects in pictures and videos, generate photorealistic images, translate voice to text, and support a wide variety of other pattern recognition and generation tasks. Most impressively, they seemed to have mastered natural language and could partake in advanced discourse. The triumph of computational AI appeared more feasible than ever. Or was it?
During my years as an undergraduate and graduate student I sometimes met fellow students who, at first impression, appeared to be far more conversant in the courses' subject matter than me. They were highly confident and knew a great deal about things that were only briefly discussed in lectures. Therefore, I was vastly surprised when it turned out they were not particularly good students, and that they usually scored worse than me in the exams. It took me some time to realize that these people hadn't really possessed a better understanding of the curricula. They had just adopted the correct jargon and employed the right words, so that, to a layperson's ears, they sounded as if they knew what they were talking about.
I was reminded of these charlatans when I encountered natural language AIs such as ChatGPT. At first glance, their conversational abilities seem impressive – fluent, elegant and decisive. Their style is perfect. However, as you delve deeper, you encounter all kinds of weird assertions and even completely bogus statements, uttered with absolute confidence. Whenever their knowledge base is incomplete, they just fill the gap with fictional "facts". And they can't distinguish between different levels of source credibility. They're like Idiot Savants – superficially bright, inherently stupid.
What confuses so many people with regard to AIs is that they seem to pass the (purely behavioristic) Turing Test. But behaviorism is a fundamentally non-scientific viewpoint. At their core, computational AIs are nothing but algorithms that generate a large number of statistical heuristics from enormous data sets.
There is an old anecdote about a classification AI that was supposed to distinguish between friendly and enemy tanks. Although the AI performed well on the database, it failed miserably in field tests. Finally, the developers figured out the source of the problem. Most of the friendly tanks' images in the database were taken during good weather and with fine lighting conditions. The enemy tanks were mostly photographed in cloudy, darker weather. The AI had simply learned to identify the environmental conditions.
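Apocryphal or not, the tank story describes a classic case of shortcut learning. Here is a tiny synthetic sketch of the same failure mode; every number and feature name is made up for illustration. A classifier trained on data where scene brightness happens to track the label leans on brightness, then falls apart when that correlation disappears in the "field".

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

def make_split(brightness_follows_label):
    """Two features per 'photo': a weak tank-shape cue and scene brightness."""
    label = rng.integers(0, 2, n)                   # 0 = friendly, 1 = enemy
    shape_cue = label + rng.normal(0, 2.0, n)       # genuinely informative, but noisy
    if brightness_follows_label:
        brightness = label + rng.normal(0, 0.1, n)  # enemies photographed in the dark
    else:
        brightness = rng.normal(0.5, 0.5, n)        # field test: weather is uncorrelated
    return np.column_stack([shape_cue, brightness]), label

X_train, y_train = make_split(brightness_follows_label=True)
X_field, y_field = make_split(brightness_follows_label=False)

model = LogisticRegression().fit(X_train, y_train)
print("training accuracy:", model.score(X_train, y_train))   # looks excellent
print("field accuracy:   ", model.score(X_field, y_field))   # collapses toward chance
print("weights [shape, brightness]:", model.coef_[0])         # brightness dominates
```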
Though this specific anecdote is probably an urban legend, it illustrates the fact that AIs don't really know what they're doing. Therefore, attributing intelligence to Artificial Intelligence algorithms is a misconception. Intelligence is not the application of a complicated recipe to data. Rather, it is a self-critical analysis that generates meaning from input. Moreover, because intelligence requires not only understanding of the data and its internal structure, but also inner-understanding of the thought processes that generate this understanding, as well as an inner-understanding of this inner-understanding (and so forth), it can never be implemented using a finite set of rules. There is something of the infinite in true intelligence and in any type of conscious thought.
But, if not computation, "what else can it be?". The substantial progress made in quantum theory and quantum computation revived the old hypothesis by Penrose that the working of the mind is tightly coupled to the quantum nature of the brain. What had been previously regarded as esoteric and outlandish suddenly became, in light of recent advancements, a relevant option.
During the last thirty years, quantum computation has been transformed from a rather abstract idea proposed by the physicist Richard Feynman into an operational technology. Several quantum algorithms were shown to have a fundamental advantage over any corresponding classical algorithm. Some tasks that are extremely hard to accomplish through standard computation (for example, factorization of integers into primes) are easy to achieve quantum mechanically. Note that this difference between hard and easy is qualitative rather than quantitative. It's independent of the hardware and of how many resources we dedicate to such tasks.
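For the curious, the factoring example can be made a little more concrete. Shor's algorithm rests on a classical reduction: if you can find the period r of a^x mod N, you can usually pull a factor of N out of gcd(a^(r/2) ± 1, N). The sketch below performs that reduction but finds the period by brute force, which is exactly the exponentially expensive step a quantum computer replaces; the moduli 15 and 21 and the chosen bases are just convenient toy cases.

```python
import math

def order(a, N):
    """Multiplicative order of a mod N, found by brute force. This is the step
    a quantum computer speeds up; classically it can take time exponential in
    the number of digits of N."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_style_factor(N, a):
    """Classical reduction at the heart of Shor's algorithm: turn the period
    of a^x mod N into a nontrivial factor of N (works for suitable a)."""
    g = math.gcd(a, N)
    if g != 1:
        return g              # lucky: a already shares a factor with N
    r = order(a, N)
    if r % 2 == 1:
        return None           # odd period: try a different a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None           # trivial square root: try a different a
    return math.gcd(y - 1, N)

print(shor_style_factor(15, 7))   # 3  (period of 7 mod 15 is 4)
print(shor_style_factor(21, 2))   # 7  (period of 2 mod 21 is 6)
```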
Along with the advancements in quantum computation came a surging realization that quantum theory is still an incomplete description of nature, and that many quantum effects cannot really be resolved from a conventional materialistic viewpoint. This understanding was first formalized by John Stewart Bell in the 1960s and later expanded by many other physicists. It is now clear that by accepting quantum mechanics, we have to abandon at least some deep-rooted philosophical perceptions. And it became even more conceivable that any comprehensive understanding of the physical world should incorporate a theory of the mind that experiences it. It only stands to reason that, if the human mind is an essential component of a complete quantum theory, then the quantum is an essential component of the workings of the mind. If that's the case, then it's clear that a classical algorithm, sophisticated as it may be, can never achieve true intelligence. It lacks an essential physical ingredient that is vital for conscious, intelligent thinking. Trying to simulate such thinking computationally is like trying to build a Perpetuum Mobile or chemically transmute lead into gold. You might discover all sorts of useful things along the way, but you would never reach your intended goal. Computational AIs shall never gain true intelligence. In that respect, this technology is a dead end.
20 notes · View notes
Text
Producing high-performance titanium alloy parts -- whether for spacecraft, submarines or medical devices -- has long been a slow, resource-intensive process. Even with advanced metal 3D-printing techniques, finding the right manufacturing conditions has required extensive testing and fine-tuning. What if these parts could be built more quickly, stronger and with near-perfect precision? A team comprising experts from the Johns Hopkins Applied Physics Laboratory (APL) in Laurel, Maryland, and the Johns Hopkins Whiting School of Engineering is leveraging artificial intelligence to make that a reality. They've identified processing techniques that improve both the speed of production and the strength of these advanced materials -- an advance with implications from the deep sea to outer space.
Read more.
9 notes · View notes
Text
Why it is disingenuous to call neural network programs “artificial intelligence”
People all over the internet like to advertise ChatGPT, OpenAI products, and all other kinds of text and image generation software as some form of "Artificial Intelligence", as if to imply the software can think, or process information and react accordingly to what it senses, when it is merely a tool that performs a simple function.
Here’s an AI test, ask any appliance around the house to draw a painting of a cat in a tuxedo styles like Van Gogh.
If the tool does nothing it’s not AI
If it immediately draws the painting for you, it is not AI.
If it tells you to quote “bite my shiny metal ass” it has free will and thus sapience.
12 notes · View notes
partisan-by-default · 1 month ago
Text
The firing of Register of Copyrights Shira Perlmutter came after Perlmutter and her office earlier this week issued part three of a lengthy report about artificial intelligence and expressed some concerns and questions about the usage of copyrighted materials by AI technology.
"It is an open question, however, how much data an AI developer needs, and the marginal effect of more data on a model's capabilities," the report read. "Not everyone agrees that further increases in data and test performance will necessarily lead to continued real world improvements in utility."
CBS News has reached out to the White House for comment.
Perlmutter had held the position since October 2020, during the first Trump Administration. She was appointed to the post by now former Librarian of Congress Carla Hayden, who herself was fired by President Trump on Thursday.
Democratic Rep. Joe Morelle of New York, ranking member of the Committee on House Administration, said in a statement that Perlmutter's firing was "a brazen, unprecedented power grab with no legal basis."
Morelle speculated that there was "surely no coincidence he acted less than a day after she refused to rubber-stamp Elon Musk's efforts to mine troves of copyrighted works to train AI models," in reference to the report released by the Copyright Office this week.  
7 notes · View notes