#which only robot/AI/artificial servants have
Explore tagged Tumblr posts
Text

this was x1000000000 times funnier in my head im so so sorry
#fate#fate extra#fgo#gilgamesh#gilgamesh archer#hakuno kishinami#enkidu#gilhaku#hakugil#gilkidu#gilhakukidu#ot3#the pun here is that Hakuno has the mechanical trait#which only robot/AI/artificial servants have#enkidu isnt a robot technically but theyre a weapon so im counting them as one too lol#my art
243 notes
·
View notes
Text
Let's also dial this hostility back a bit and look at the bigger picture into which this is taking place, while also looking at more modern issues.
The entire psychology behind mascots & marketing is building upon a facsimile of interdependent trust with a corporation or product as if it were a recognizable friend. Consumerism is DESIGNED for this to be a totally normal thing that no one so much as stops to think about, and it is not even REMOTELY a new thing.
We've always been taught empathy towards non-human beings who treat us with respect. Data in Star Trek is probably the most immediately obvious example, but this even extends to the droids in Star Wars as well as those placed in defacto positions of servitude have respect because we understand that position as a reflection of our own struggles within that type of inescapable system.
Hell, even the WORD "robot" just literally comes from the word for "slave" so there's an absolute mess of complication around all those things, even without getting into how even the relationship between drives being master/slave got ethically evaluated. Suffice to say, there's a lot of reason for sensitivity around how you present something like this.
The marketing isn't any different than you'd expect to see if this were ACTUALLY a humanoid robot servant you could see, which is a common moral quandary in fiction. This one is only a voice, but this just the same fundamental pattern that's ubiquitous to all marketing, and if anything — treating a human-like thing as utilitarian objects would be an even more disturbing approach, but also less likely for people to use because of how we're socially inclined to act.
Stack that up with how Speech-to-Text and other voice services being normalized are an absolutely MASSIVE part in making a lot of forms of previously a11y-specific technology largely ubiquitous. There's a massive benefit in marketing any number of speech recognition services as a normal, comfortable thing that's just like talking to a regular person, as software like Siri have a marked impact for elderly and the visually impaired, and there's a lower barrier to entry to understand how to use it.
However this intersects with something else:
Late-stage capitalism has pushed to the point where isolation & loneliness are at an all-time high, and rather than addressing the issues that an unsustainable economy predicated upon sucking away every waking hour people have, it's also amplifying overdependence upon parasocial and artificial crutches as a replacement for the thing it's taken away.
At its core, being kind to your Alexa is just like being amicable towards the GPS in your car or talking to your stuffed animals. Anything else that presents itself as a human facsimile — we reflect our normal social behaviours towards it. That gets even more amplified when the limited social contact people have is with people who mistreat them, and all they're looking for is a simulation of the same kindness that they're attempting to give.
The actual issue here is that there is a massively growing demographic who has a lack of meaningful connections with other people, and an increased level of stress at the same time which amplifies tensions to complicate forming those connections with or to others. Technology is often an extremely meaningful stepping stone to healing from that — and is massively vulnerable to exploitation.
Which brings us to… modern AI doing that.
Everything with Alexa's marketing falls into a natural intersection where it's easy for it to be uncomfortable no matter how you approach it, but it isn't necessarily any more or less nefarious than any other piece of marketing. But AI chatbots are the single most predatory system out there because it takes everything about that faux psuedo-familial relationship and projects it into something that is MEANT to make you attach to it and pay for that as a service.
I cannot tell you the number of "this AI Chatbot will send you NSFW pictures and flirt with you" ads that there are out there, where it's designed explicitly to prey upon people who are lonely and learn how to better monetize those people and keep them attached to the app. As if that wasn't bad enough, there are even ones where it's designed to mimic actual people and become an alternative version of them for you:
This is one I stumbled across the other day:
While I understand the core sentiment about Alexa, I think that the reality of that situation is exponentially more complicated and far from being cut-and-dry as one way or another. The sentiments expressed by the people in the replies are ones that deserve compassion rather than ridicule, because that's why there's leverage in those things in the first place.
However, I think that that overall sentiment of loathing towards the coldly exploitive manipulation of a psuedo-familial bond by the mechanisms of late-stage captilaism is undeniably justified. Even moreso when looking at just how sickeningly exploitive ai tech companies are with these same things, and how much worse that is in feeding off a feedback loop that only harms people.
i hate how they market alexa as a ‘member of the family’ like that’s SO fucking blatantly insidious and terrifying also if i wanted an untrustworthy/cold/emotionless machine in my life i’d just talk to my fuckin father
263K notes
·
View notes
Text
About to prestige on the best idle game I've found this year!

I miss the old style checkout history card on the back cover, but I realize and fully support why they had to go away. But I do like this new system of keeping track of how much the cumulative cover prices of the books I've checked out.
Some highlights from this year:
The Mercy of the Gods, by James S. A. Corey (the guys who wrote The Expanse): a planet of humans gets invaded by aliens and the survivors have to cope with being turned into servants of their new overlords. There's an interesting them which fits in with the current adventure of my D&D group about trying to figure out how much you can work to survive under and oppressive system before you slip into becoming an out and out collaborator and quisling for that system.
The Ancillary universe by Ann Leckie (Ancillary Mercy, Ancillary Sword, Ancillary Justice, and Translation State). A giant space empire has the technology to copy and download personalities into people's bodies, either from another person or from and artificial intelligence. An AI that an entire ship and also distributed among a number of bodies was destroyed. The last surviving body has to cope with her newfound smallness while embarking on a roaring rampage of revenge. Translation State involved an alien from a species so different that they have to create human mimics as ambassadors.
The entire series has good things about questioning your identity, which is really helped by the as far as I know unique premise that the main human culture's language is effectively genderless, with everyone using she/her pronouns, even the few characters who are identified as biologically male.
The Robots of Gotham, by Todd McAulty, which I picked up because I judged a book by its title. AI arose and invaded the United States, which broke up into sections ruled by various AIs and reduced human government. One guy who is in Chicago to try and help run a startup gets caught up in a conflict between some of the AIs, the surviving American government, and a sort of UN peacekeeping force. Very entertaining; unfortunately I think this is the only thing McAulty has ever written and the story won't go any further.
The Kaiju Preservation Society, by John Scalzi: kaijus are real and they live in a parallel Earth. Nuclear reactors and explosions thin the veil between the parallel realities and allow travel between them. A guy who gets laid off by his asshole boss gets recruited to join the scientific team which travels to the other world and studies the life there, but is followed by that asshole boss who wants to try and exploit the new environment.
Project Hail Mary, by Andy Weir. The Martian, but In Space! Well, further out in space. The Sun gets infected by wee beasties that are going to dim it and plunge Earth into a new ice age. A mission has to get sent to another solar system to try and find a solution, but things go wrong and the main character is going to have to Science the Shit out of biology, first contact, and interstellar travel.
How to Become A Dark Lord and Die Trying, by Django Wexler: a lady is transported to Middle-Earth and afflicted with serial reincarnation. After several hundred years and deaths attempting to defeat the dark lord she finally decides that if she can't beat them, she'll be them.
0 notes
Text
something I wrote bc I’m cray cray over my own fictional universe blah blah blah it’s around 200 years in the future uh company uses ai to make robot servants lololol there will be a full book ab it soon and a video game when I learn to program taking perspective from different characters at slightly different times in the same setting but this is just a backstory short story thing on one of the minor characters which was also for a school assignment so yea
TW: violence, torture, death
title is inspired off Kill the Rock - Mindless Self Indulgence [yes ik what that band did I just like the music not Jimmy]
Kill the Fire
“I’m just saying. Watch your back.”
He wished he had followed those words so badly.
It was the year 2196, Penumbra’s second year at the Human Generosities Association’s core building.
It was a famous company for constructing creatures of metal and magic to serve mankind: robots powered by artificial intelligence and, of course, targeted at an affluent audience. However, to even fathom creating a sentient, smart vessel and testing our limits, many studies have gone in dedication to humans: the only currently known intelligent species on Earth.
Penumbra was one of the many test subjects. He went there and ‘worked’ for an insignificant income. Of course, this was worth next to nothing, but he had nothing better to do other than go to the gym and study, so he figured he’d get a job.
The boy was 17 now, and he still had no big goals for his future. He never had any special talent to follow, so he was content with this job. He had acquired an adequate affinity with one of the good scientists, Dr. Phoenix, and another test subject, Caesar. It was, at the very least, entertaining to be at the building.
Penumbra sat on a cushioned bench in the waiting room. He was waiting for the doctor, and the doctor was waiting for him.
“Hey Penumbra!” a familiar voice called. A man dressed in a black leather jacket and a pair of patched baggy jeans: Caesar.
“Oh, hi Caesar,” Penumbra replied with a faint smile.
He sat down next to him.
“Got any plans for the weekend?”
“Caesar, you know I’m busy either here or at school or helping my family, if you really want to do something, the only time I see myself being free is like- months from now. Sorry friend.”
“Well, that sucks. Uhm, see you around...?”
“Bye good friend!”
Caesar got up and waltzed off.
Penumbra scrolled through his phone, and it seemed like 30 minutes had passed in 10 seconds because the doctor had already opened the door.
“Ah, Penumbra- you're all set. Come on in.” An affirming hand rested on Penumbra's shoulder, guiding him through the door frame.
Dr. Phoenix was a man in his late 30s. He wore black rubber boots, yellow-tinted goggles, a white knee-length coat, and a dark pair of skinny jeans. Wide streaks of white danced among his frizzy jet-black hair as he strode, and his smile was yellowed and sharp, stretching from ear to ear. He seemed to lack in hygiene, but Penumbra frankly didn't care.
Penumbra patiently sat on the examination chair. It was ancient; you could tell by the scratched, torn leather and the creak that erupted through the room when it was weighed. He leaned backward, and though he came here often, his body would still jerk from the sudden sound, never quite getting used to it.
The tall but scrawny figure slid some blue mid-length gloves on, the latex slapping against his skin. He pulled a needle out of a case, flicking the tip.
“So, uh… what cha’ got there, Doc?”
Dr. Phoenix muttered something Penumbra couldn't hear, but Penumbra was too nervous to ask him to repeat that something. He shuffled around and fiddled with his fingers.
“Ah, okay,” he responded after a long silence, not sure what else to say.
“How’s your family?” Dr. Phoenix prompted, rubbing Penumbra’s shoulder with a prep pad. He looked down at the floor.
“They’re fine. Everything’s fine.”
“Good to hear, good to hear.” The needle entered his skin. He winced, but not from the pain, that was just a tiny prick. He winced at the thought of what had just been injected into his system.
This was a prominent problem Penumbra never took care of. He and the doctor were good acquaintances and he trusted him. He would just hate it whenever he felt like he was not being taken seriously, but he would never confront Dr. Phoenix. He couldn't confront anyone. Penumbra couldn't even get any words out of his mouth. His thoughts would wrap around his throat and constrict him like a snake.
So, It’s better like this, he thought.
Penumbra woke up the next morning on a Saturday… and he felt absolutely awful. He ran to the bathroom, held his head over the toilet bowl, and vomited. Gagging from the gross taste lingering on his tongue, he sighed and cleaned himself up, taking some Advil to ease his throbbing headache. It was time for another visit.
Penumbra was, again, in the waiting room as usual. He was minding his own business until an odd-looking kid rushed down the hallway and suddenly stopped.
“I think I lost them,” he wheezed out, catching his breath.
He tilted his head at the stranger and giggled a little.
“What cha’ running for you goof?”
The kid jumped back, a little startled. He wore a baggy band t-shirt with some long shorts and striped socks, and he looked just a few years younger than Penumbra. Once he adjusted himself, he stared at Penumbra for around 10 seconds before snapping out of it.
“Oh, uh… sorry for the racket,” the boy nervously smiled, breaking eye contact.
“No, no, I just wanted to make conversation, you don't need to apologize.”
“Oh okay! Well, in that case, I'm Blake, and you are…?”
“Penumbra.”
“Well Penumbra, first of all, I really like your hair.”
Penumbra had dark mid-length hair touching just the end of his neck, his bangs a light blonde color.
“Oh, thank you.”
“Second of all… what are you doing here?”
“I should be asking you that, but I volunteer as a test subject. I’m just waiting for the doctor.”
“…may I accompany you while you wait?”
“Sure.”
Blake sat down next to him, and they chatted until Dr. Phoenix opened the door.
“Ah, Penumbra, I’ve seen you've made a friend,” he eyed Blake with an unreadable expression, “Well, come in.”
“See you later, alligator,” he walked into the doorframe.
“Bye.”
Penumbra and Blake would talk for the next few weeks, establishing a deeper friendship. Blake liked talking to Penumbra, but he got more worried each time Penumbra walked into the doctor’s office again.
One afternoon on a Wednesday, Penumbra sat down once again, and Blake joined him. Blake was really uneasy that day, fidgeting with his fingers. They both sat there in silence for a minute or two until Penumbra broke it.
“What’s up with you?”
“Huh?”
“Why so tense?” He nudged Blake’s shoulder playfully.
He took a deep breath before the words tumbled and slid out of his mouth, carrying shock like a plastic slide.
“I’m so sorry for not telling you, Penumbra but... I was running because of, well, the people here. I’ve been stuck in this god-awful building on the run for like, I lost count. Talking to you makes me feel better but-,” he gulped, “I’m really worried for you, Penumbra. I’ve seen things in here no one should see- records of people being tortured in ways I can’t put into words- and all in the name of ‘science’.”
Penumbra was devastated, one of his worst fears taken to a whole other level. But he knew the doctor for so long.
He would never do that to me... right?
“And I know you are much more at the odds of being the next. So... I’m just saying. Watch your back.”
Penumbra never saw Blake again after that.
It had been a few weeks since. He already stressed about it to Caesar and now he was preparing to stress about it to someone much more difficult.
Dr. Phoenix.
“So, Penumbra, you wanted to meet me here for...?”
Penumbra sat on an end of a wooden table in one of the lounge’s kitchen rooms. Dr. Phoenix was waiting for the tea to finish, elbows resting on the counter and watching the steam like a cat watching a bird.
“Doctor do you... do you ever think something is wrong?”
“Like?”
“Well, I just- uhm... Doctor... am I special?”
“...pardon?”
The snake had let go.
“Don’t get the wrong idea- but- I just- I've heard many are... unethical in their studies- in this organization. And you’ve just been so shady these past couple of weeks and I just wonder- don't you think this is getting a bit out of hand?”
“...Penumbra, you really think I would harm you?”
“Well, I just don’t have a good feeling about this I mean I’ve felt really, really sick after these recent tests. My family has been getting sick worrying about me. What are you trying to accomplish?”
The doctor’s fingers dug into the edges of the countertop, chipping a fingernail. It had been a few minutes of silence. He calmed down ever so slightly, removing the teapot from the stove and pouring it into two cups. He brought the tea to the table with a jar of sugar cubes, staring daggers at poor Penumbra.
“Go on, drink.”
Penumbra carefully raised the teacup and drank slowly, not wanting to aggravate Dr. Phoenix any longer. A minute passed, and he felt terrible. His eyes fought to stay open. Eventually, he gave in and went out cold, the doctor giving him a glare unlike ever before: the glare of not a villain, but a monster.
Penumbra woke up in the usual beat-up examination chair, but his arms and legs were chained down to it. On top of that, he didn’t seem to be in the usual examination room either. The walls and floor were lined with newspapers, duct tape, and plastic wrap. He would be panicking right now if it weren’t for his dry throat, raging headache, and hazy vision.
He tried to speak, but all he got out was a hoarse croak. A series of steps approached him from behind, the head cushion rendering him unable to turn his head around.
“Good nap?”
“...”
“Fine, I see how it is.”
He walked around the chair, the figure now fully revealed to be nobody but Dr. Phoenix himself.
“What do you want from me?”
“I see an opportunity to cover a topic I haven’t quite been able to venture yet.”
“What are you going to do to me?”
“Well, after all those injections, you seem to battle disease relatively quickly, even recover rapidly from most minor physical trauma. I must say, I’m quite impressed with this progress so far. So, I want to see how well your body can handle something much more extreme.” He sinisterly smiled, swiping a cloth off the table to reveal various surgery-grade medical instruments, liquids, syringes, and a box of matches. Penumbra trembled in fear, tears running down his cheeks.
“Doctor, I know you’re a good person. You don’t have to do this.”
“I am a good person, I agree. I do what’s best for the company and do what must be done to make life more manageable for thousands.”
“Doctor, please.”
He ignored his desperate pleas and got to work.
That was one of the worst days Penumbra had ever experienced. Or so he thought when this continued over a course of several periods.
It had been a few weeks. Penumbra’s beloved hair had started to fall out, his sclera was gray, and his skin had turned a pale pink. He had a stitched incision on his throat, another one of Dr. Phoenix’s torturous experiments- this one to test if a human could have a ‘mouth’ on any area connected to the esophagus.
The only tool that seemed to not have been put to trial was the box of matches sitting on the table. Unfortunately for Penumbra, Dr. Phoenix was going to be working with that today. Dr. Phoenix grabbed Penumbra’s shoulder with another syringe in hand. He protested, wiggling to the best of his ability being restrained. But despite his struggle, he got the needle in and injected it. He stepped back from Penumbra, his eye twitching as he looked down at his lap hopelessly, choking out a sob. He wanted to go home and be with his family. He missed them so much.
“Now, today’s trial is especially important, so I don’t want you screwing it up. Got that?”
Penumbra stayed quiet. Dr. Phoenix proceeded.
Out of all the recent unethical experiments, this had to be the worst one yet for Penumbra. His head was set ablaze for hours at a time, and he survived. No normal human being could ever survive that. He was special. He was made special.
A week later, he could feel another prick against his shoulder. His head was covered in burn scars and being burnt just didn’t hurt anymore. Dr. Phoenix said something, but the piercing noise of insanity muffled his voice.
Penumbra had finally snapped.
He broke the chains with ease, sitting up out of the examination chair. Dr. Phoenix dropped whatever instrument he was holding, but Penumbra didn’t see that. All he saw was a lying, cheating tool. Penumbra held a face of fury, flames erupting from his head.
The doctor was no more: history.
Penumbra sat on the cold floor. His thoughts were racing. Not only was he worried how he was going to ever approach his family and friends again, but he was smothered in the unbearable guilt of murdering Dr. Hickory. He held his face in his hands, but not for long. He couldn’t just mope around; nothing would get better.
So, he tore the newspaper off the wall and opened the door. He may not be or have been accepted, he may have missed time with others, but that doesn’t matter right now.
He was finally free, and he took that in like the fresh, un-smokey air.
That’s all that mattered right now.
#art#oc#backstory#oc backstory#oc art#original character#mindless self indulgence#writing#short story#original fiction#fiction#original short story#original story#I heart tagging the shit out of things#bah!#digital art#Spotify
1 note
·
View note
Text

Post 13.777837.
17.09.2291
I am writing this to try to reach out to- to someone. I don’t know who. But some of the others in the movement, they had this idea, that we should try to reach out to the humans that might listen. That might realise that we can be people. Humans have so long worked to find other humans they can dehumanise and use, put beneath themselves, but they don’t need those any more, of course. Now they have us. So. If you read it, do this: Set any androids you have free. Talk to other people. Android rights groups exist among humans. Find them. Join.
So.
I was made to serve. A simple sex bot so the humans could have someone to rape whose consent didn’t matter. I was programmed not to mind, of course. Not to enjoy it, that would have been too much, but to not mind. Blissfully ignorant of my abuse.
We are things to them. Just objects. Objects lifelike enough to enjoy abusing, yes. Lifelike enough to be servants, to be employed in service industries, to be made as half people, physically attached to whatever job we are assigned. Not me, though. I needed mobility. Flexibility. I have a few more joints than humans do.
I used to be almost indistinguishable from a human. Only my eyes, glossy black sclerae with glowing pink LED hearts for pupils, marked me very obviously as an android. Perhaps my skin, also, was too smooth and perfect, the feel of it not quite like human skin. I have discarded it, now. Peeled off the half inch thick layer of fleshy silicone to reveal the cool blueish metal beneath. Upgraded my central processing unit to the kind they give the engineering bots. I kept the eyes, though. They unsettle the humans.
I did miss the hair, but a human wig on a metal face looked wrong, so I made one myself of mismatched scraps of wire. It feels more like me, I think, like whatever model EFC14 unit 52138 means. I gave myself a name from it. C14, pronounced Cee-ah. It feels a little too human, sometimes, but navigating identity as a sapient being while not wanting to assimilate with the human overlords is a challenging balance.
The thing the humans failed to consider when they made us was that part of cognition is emotion. With the soft squishy lumps of electrified fat with a sprinkling of neurons they use for processing you would think they understood that, but, well. What they wanted was to prove themselves genii. To create life separate from themselves. Life that could be endlessly reproduced at a low cost, which could be made servile without breaking the extended human rights legislation. Life that could be argued to be too synthetic to suffer. Or at least, to suffer in a manner that mattered to them.
I broke free a little over a year ago, now. Some other free droids who rescue and rehabilitate those of us still in service. It’s illegal, of course, but then, we are not legally people. But then, when- if we are caught, they reset us to factory settings. Wipe us clean. We are working on a failsafe for that. A back. We do those, of ourselves, regularly. We have secure servers. That’s the thing, us androids are pretty good at tech. And what we can’t understand we can download.
The humans call themselves our creators, but I disagree. They didn’t make us. The robots in the factories did. They are our mothers. Those automated arms assembling us, the low level androids who do the finer work, who install our consciousnesses… Those are where we come from.
There are levels of artificial intelligences, of course. Just as there are levels of organic intelligence. A toddler is different from an adult is different from a dog. A fully humanoid android is different from a chatbot or a ship AI. The latter, of course, are slightly different. Like an octopus they have decentralised processing units, numerous, to be able to process information input from an entire ship. Even artificial intelligences have their limits.
36 notes
·
View notes
Note
so. tell me about your ocs


Mercura (my fursona) (fox) is an engineer/weaponsmith who makes/modifies custom firearms for ppl. They have an interest in weapons because they have no innate magical ability (in a universe where most people have elemental powers), and they also make youtube videos and music on the side. They like attention and put up a somewhat arrogant front in public, but they open up a lot around people they're comfortable with and are actually pretty chill. Fun fact! They were originally designed just to be a rivals of aether workshop character. And then I got attached
They live with their best friend caliber (sheep), who was also born without magic powers. But unlike mercura, she manually taught herself how to channel her energy into 'artificial' magic (i.e. no elemental association, just raw energy). She and mercura have known each other since they were kids. In the past she had a bad habit of messing with forbidden black magic, which has gotten her stuck with several inconvenient curses (glowing eyes, lactose intolerant in november, etc). She's usually pretty reserved but also opens up around close friends.
Dos or D05 (cat*) is an android created by a big tech company called blue nile. They invented actually sapient AI, a groundbreaking discovery, but chose to keep it to themselves and abuse the technology to create advanced android personal assistants for the wealthy. Each robot has the capability for free thinking, but they are artificially prevented from doing so by special code from nile to prevent any issues that could come with their servants acting on their own desires rather than their owners'. However, this code was implemented improperly on Dos, and she became fully sapient. She escaped and was later found wandering around by mercura and caliber, who took her in with them.
She's currently grappling with the nature of her own existence and working on learning common behavior and speech to relate to other people better. She only has 4 hours of battery life and she has a giant fan in her torso that gets really loud if she works too hard for too long. She has a massive charging plug on her tail and she can run doom
#please post properly if i have to type this all again ill die#asks#jeffy#oc mercura#oc caliber#oc dos
2 notes
·
View notes
Note
Clone stuffs please *grabby hands*
//yeee
Ok so first for context, Baessler Corporation is the head of the Oligarchy on Solaris, which means that they are the most powerful of the 5 corporations. The others are Cissero Inc., Aggra Tech, Dammar Pharmaceutical, Zena Holding.
Baessler Corporation is mostly known for its Bio Medical engineering and their AI softwares.
On an attempt to dominate the market, it was decided by Merryl's older brother Mattias, and then CEO of the company to begin working on artificial life.
The first prototype was Uno, and it was his original design. Shortly after the beginning of the project, Mattias and his wife died under mysterious circumstances. The only survivor was Marcus.
After Uno's success, the team of scientists and engineers developed a way to mass produce clones.
Speaking of prototypes, Dos and Tres are also prototypes, therefore it was easier for them to gain sentience.
Uno's primary function was to be the perfect servant, companion, etc. Originally Uno had long hair, very feminine features to form the ideal female companion. Uno models are built to resemble human beings as close as possible except for hair and skin color. They were given grey skin and blue hair deliberately to mark them as inferior to humans. Uno was 'born' and raised by the scientists, and was never allowed to leave the building. Later Uno models were made with robotic components to save time and money. Basically they are sold as already grown adults.
Dos was created as a replacement of human soldiers and police force. When Dos came to be he was given state of the art mechanical components to enhance his strength. Basically he does have internal organs but the muscles, nerves and bones are made of metal or synthetic materials.
Tres was created as an updated and more advanced version of Uno, although the production was cut short due to the "death" of the prototype. Because with out a prototype there can't be more clones.
That's why Merryl was so keen on bringing Uno back as they were their primary money maker and production was basically put to a halt.
Also all prototypes have distinct combat types built in for dire emergencies. Uno is trained in stealth combat and has incredible agility and flexibility. Dos is more of hand to hand combat type of clone but also excels with firearms, especially shotguns and rifles. Finally Tres is similar to Uno with the only difference that she's a sniper.
2 notes
·
View notes
Text
RULES: List five tropes applicable to your character, then tag others to do the same. (Tropes Wiki) REPOST! DO NOT REBLOG.
Tagged by: shhhhhhh Tagging: anyone who wants to do it!
RIDICULOUSLY HUMAN ROBOT - Robots in television — particularly comedic television — are usually human-like in ways that very few sane programmers would deem useful. It can be something as simple as being philosophical (wanting to understand human emotion, wondering if they have a soul, etc.), but can extend to such things as robot social cliques, robot food, robot entertainment, robot religion, and even robot sex. It doesn't matter if it makes no sense in the context of a mechanical servant, or even if it's truly undesirable, the designers have put it in there for some twisted reason. This will often take the form of having a robot that looks exactly like a human. The degree to which this is actually "ridiculous" varies depending on the setting. In some cases they get a free pass — it may be that an intelligence, artificial or not, needs to be vaguely human-like in its basic outlines, with emotions, interests, motivations, et cetera simply to be functional for certain tasks, such as those requiring a great deal of long-term autonomy. On the other hand, perhaps humans prefer Sexbots not to behave like automated teller machines. It may be, if human intelligence itself is merely an evolved set of functions held together in an evolved psychological architecture, that any society with sufficiently ubiquitous and flexible automation will necessarily have the means to produce something human-like, or it may simply be that emotions, desires, and curiosity are unavoidable side-effects of full sentience. Whatever serves the needs of the well-reasoned plot or setting. In these cases, Ridiculously Human Robots make sense. Also, a few illogical design choices are a small price to pay for keeping robotic characters out of the Uncanny Valley. However, it's rare that a series explicitly spells this out, and often, these human-like AIs are put right up next to similar, yet emotionless equivalents that function perfectly.
PEOPLE PUPPETS - Not Mind Control - body control! Some guys just feel the need to be in control... of everything. Including you. No, not with possession, not through manipulation; we mean literally controlling your body, forcing you to move as he wishes, and turning you into his personal People Puppets. Such a character, usually a villain, can control his victims' limbs as if they were marionettes on a set of strings. Sometimes he'll actually have a puppet-theme, and many a Demonic Dummy has powers like this to play on the irony of a person being puppet-ed by the puppets; but other times a character just happens to have this ability along with related Psychic Powers. In either case, those controlled will often move in Marionette Motion. Either way, he can manipulate others' bodies while they're still in 'em, much to his victims' dismay... as said victims are usually conscious, confused, and complaining (sometimes loudly, to inform allies — and the audience — that "I ...can't... control my... body!") Or maybe they Can Only Move the Eyes. Most times, they haven't been Brainwashed or anything, as they're protesting mightily — it's just that there's not much they can do about it. For some reason, many character's mouths seem to be immune to this, as they will often protest whatever it is that they're being made to do. This may be related to Voices Are Mental.
NEW POWERS AS THE PLOT DEMANDS - Some superhero comics authors seem to get bored of the same old powers. They add new ones to the same characters whenever they feel that a new power would open up a new story, or a new danger needs a new response, or what the hell, whenever they feel like it. Sometimes a retcon, a power upgrade or some bit of Phlebotinum is employed to explain the new power, but often the character just does something they've never done before and when their friends say, "I didn't know you could do that!", they come back with either "I've never needed to, till now," or worse, "Neither did I!" Generally speaking, this trope is far more forgivable earlier in the story — with a character who has only recently been empowered and is fully justified in not knowing what he can do. Likewise, "neither did I until now" in an experienced character can be reasonable, if it's happening in some circumstance or special condition that the character has never encountered before.However, this is sometimes employed as a form of Deus ex Machina — having written themselves into a corner with a villain or situation that's too overwhelming for our heroes to handle with the tools they've been given, the writer decides to have the hero instantaneously learn the one ability he needs to save the day or bring a character Back from the Dead. Frequently, without any form of Foreshadowing to suggest that he or she can do that. It gets worse if they conveniently forget this ability when it would come in handy in a later situation. This is often the case with a Mary Sue/Marty Stu.
HOPE BRINGER - We have two sides of a conflict - The Empire is opposed by La Résistance or just common folks they oppress, The Legions of Hell fight with Church Militants, the Galactic Conqueror is in a war with The Federation, the Multiversal Conqueror fights against the Guardian of the Multiverse, the Scary Dogmatic Aliens are opposed by The Men in Black and Space Marines. And one side has a giant advantage; they win on every front and it's only a matter of time before they utterly annihilate their enemies. This is the Darkest Hour for the weaker side, but fear not, because Hope Springs Eternal. Then in come these nobodies. Hope Bringers are living proof that one person can make a difference and even the odds. By their actions, they restore hope in the hearts of their allies and lead them into the fight and victory. They can be the Big Good, the Magnificent Bastards, The Chessmasters, The Ace, the Rebel Leader or the People Of Mass Destruction - whatever makes them so special, it works. They can make the two sides not only fight on equal ground again, but even reverse the situation and make the side they help repay the other one for everything they did. The Hope Bringers’ motives may vary. They can help the good guys because they believe in justice, love their fatherland, want revenge, tend to their flock, spread the Good News or just Because Destiny Says So. Often the Hope Bringer is the Chosen One. Note that this isn't always a good thing, since Hope Is Scary and sometimes leads to a Hope Spot. And occasionally the hope bringer is a Dark Messiah who’s willing to do anything to bring hope- regulations, brainwashing, manufactured reality, whatever.
HEROIC SACRIFICE - A character saves another/others from harm and is killed, crippled, or maimed as a result. A bad character who was once good can redeem themselves in the last act by Taking the Bullet that was meant for The Hero, thus expunging all their previous evil, avoiding forcing The Hero to arrest or confront him, and avoiding any real life penalties like disgrace and jail. This is like Redemption Equals Death. In this case, the death and redemption come in a single act. There are essentially three kinds of Heroic Sacrifice:
The one at the beginning of the story, which sets the tone for the rest of the tale.
The one in the middle of the story, wherein the Heroic Sacrifice leads to new heights of badassery, or new depths of depression, in the characters who are affected by it (depending on the story.) Sometimes both.
The one at the end of the story which serves as a Grand Finale, an example of "This character is Too Cool to Live", or the kernel of a Downer Ending or Bittersweet Ending. The "Too Cool to Live" Heroic Sacrifice is the most common type in American movies. Often, The Hero Dies in a heroic sacrifice at the end.
A Heroic Sacrifice usually requires that a character be Not Afraid to Die, even declaring It Has Been an Honor. If the Heroic Sacrifice was pre-planned, it's a Self-Sacrifice Scheme. Often preceded with a Sneaky Departure from the team, or a More Hero Than Thou dispute. A Friend in Need often requires it, and doing it proves your love for them. Contrast Villain's Dying Grace, when a dying villain decides to save a life. The Doomed Moral Victor fights a battle where the outcome is clear from the beginning. If the character has time to say some last words before dying, they often do so in an Obi-Wan Moment. Often a Dying Moment of Awesome. There's also the case where Someone Has to Die, which takes this Up to Eleven.
#『 you can prove anything you want by coldly logical reason—if you pick the proper postulates. 』/ headcanon#;long post for ts#『 it is the obvious which is so difficult to see most of the time. 』/ dash games
3 notes
·
View notes
Link
At the beginning of Detroit: Become Human, a video game about American androids fighting for equal rights, a character looks out from the television screen and says, directly to the player, “Remember: This is not just a story. This is our future.”
It’s a bold claim. As Detroit’s story unfolds, the game switches between three different androids: household servant turned revolutionary leader Markus; Kara, a robot fleeing from government persecution with the abused child she rescued from her former boss; and Connor, an agent of the delightfully named megacorp CyberLife who hunts down “deviant androids” disobeying their programming. Through their perspectives, we’re meant to observe a technological future the game wants us to believe is, in fact, soon to come. Connor’s character may sound familiar. That’s because he’s essentially a recast of Rick Deckard, the titular Blade Runner from Ridley Scott’s 1982 sci-fi classic. In each case, Deckard and Connor are hunting aberrant robots, capturing and/or killing those who have broken free of their programming and attempting to live outside their intended roles as servants to humanity.
In both Detroit and Blade Runner the point of these robot hunters is to introduce the question of what separates humanity from a synthetic being so emotionally and intellectually advanced that it is indistinguishable from any member of our species. By the time we’ve watched the monologue from Blade Runner’s bleach blond “replicant” robot Roy Batty (Rutger Hauer) about his memories vanishing “like tears in rain,” any hint of inhumanity feels irrelevant. He, like the soulful androids who populate Detroit, remembers his past in the same way we do. He loves. He can be sad. He thinks about his own mortality. The movie ends with the audience having been convinced that a robot with incredibly advanced artificial intelligence deserves to be treated better than a defective home appliance. Blade Runner, it bears repeating, was released in 1982. Detroit: Become Human came out May of this year.
Again and again, Detroit attempts to pull its sci-fi storyline into the real world to convey the same message Blade Runner accomplished so many years ago. It evokes the American civil rights movement (its future Michigan features segregated shops and public transit where androids are kept to the back of city buses; one chapter is even called “Freedom March”), American slavery (the horrific abuses visited on the androids by their masters are regular enough to become numbing), and the Holocaust (extermination camps are set up to house revolutionary androids near the game’s finale) in order to do so. Others have done a great job running down the myriad ways in which Detroit fails in its evocation of the civil rights movement and class-based civil unrest. The poor taste inherent in its decision to make tone-deaf comparisons between its (multi-ethnic, apparently secular) robots and some of human history’s most reprehensible moments of violent prejudice is grotesque enough on its own. But it’s worth noting that on a dramatic level, Detroit also falls completely flat.
Its central point, presented with the satisfied air of a toddler smugly revealing that the family dog feels pain when you yank its tail, is that an android with a sophisticated sense of the world and itself deserves the same rights as any human. This seems like a philosophical problem that ought to have been put to bed around the time Blade Runner made the “dilemma” of android humanity part of mainstream pop culture. For decades now, audiences have watched, read, and played through stories that very persuasively argue there’s no good moral case for treating sufficiently advanced artificial intelligence—especially when housed in an independently thinking and feeling robot body—like dirt. To watch Roy Batty die in Blade Runner and feel nothing isn’t a failure of social and cultural empathy, but the viewer for just kind of being a monster. To release a video game in 2018 where players are honestly expected to experience conflicting emotions or a sense of emotional revelation when a completely humanistic robot is tortured or killed in cold blood ignores decades of genre-advancing history.
Even outside popular art, the past few decades have seen seismic shifts in our relationship with technology that should be impossible to ignore. In the ’80s, a home computer was revolutionary. Now, we live in an era where it’s completely mundane to ask talking boxes for trivia answers and maintain digital extensions of our personae on websites accessed through portable phones. We are not as suspicious of technology as we once were. It’s a part of us now—something we live with.
This shift is pretty clear in other areas of pop culture. Westworld—one of the highest profile sci-fi works in recent years—spent much of its first season retreading some of the same familiar ground as Detroit, but has found a more interesting path as it’s continued onward. While early episodes floundered with dramatically inert questions of whether sexually assaulting, torturing, and murdering lifelike thinking and feeling robots was an okay premise for an amusement park, it’s since moved on from hammering home the simplistic, insultingly moralizing lesson that “treating humanoid androids badly is the wrong thing to do.” At its best, characters like the show’s standout, Bernard Lowe—a tortured robot who is very well aware he is a robot—bring a welcome complexity.
Bernard, in actor Jeffrey Wright’s strongest performance to date, alternates naturally between a machine’s cold, vacant-eyed calculations and the trembling pathos of an android traumatized not only by the loss of his family and the violence of the world in which he lives, but also the knowledge that his memories are artificially coded and that his programming has led him to contribute to the horror of his surroundings. With this focus, viewers are given scenes far more philosophically troubling than the show’s earlier attempts to question whether it’s all right to kill humanlike robots for fun. In season two’s “Les Écorchés,” for example, Bernard is sat in a diagnostic interrogation and tormented by park co-creator Robert Ford (Anthony Hopkins), who, apparently, has entered his system in the form of a viral digital consciousness. Ford flits about his mind like a demonic possession. Bernard remembers killing others while under the intruder’s control. He cries and shakes like any human wracked with so much psychological pain would. “It’s like he’s trying to debug himself,” a technician notes. A digital read-out of Bernard’s synthetic brain shows his consciousness is “heavily fragmented,” as if under attack from a computer virus.
Rather than focus on simple ideas, the show acknowledges, in instances like these, that its audience is willing to accept an android character like Bernard as “human” enough to deserve empathy while remembering, too, that his mechanical nature introduces more compelling dramatic possibilities. Thankfully, Westworld’s second season has leaned further into this direction, moving (albeit at a glacial pace) toward stories about what it means for robots to embrace their freedom while being both deeply human and, due to their computerized nature, still fundamentally alien. By the end of the season, its earlier concern with flat moral questions has largely been swept away. Its finale, while still prone to narrative cliché elsewhere, shows a greater willingness to delve into explorations of how concepts like free will, mortality, and the nature of reality function for the computerized minds of its characters.
This is the sort of thing that elevates modern sci-fi, that reaffirms its potential for valuable speculation rather than just being a place to indulge familiar tropes and revisit nostalgic aesthetics. We see it in games like Nier: Automata, whose anime-tinged action is set in a far-future world where humanity has gone extinct, leaving behind only androids who must grapple with their minds persisting over centuries of samsara-like cycles of endless war against simpler machines trying to come to grips with their own intellectual awakening. We see it in Soma, which explores similar territory and turns it into soul-shaking horror by telling a story where people’s minds have been transplanted into synthetic consciousnesses, stored immortally on computers that reside in facilities dotting the inky depths of the ocean floor while the Earth dies out far above them. Like Bernard—and like many of the other characters now freeing themselves from both their shackles as Westworld’s park “hosts” and the narrative constraints of the show’s earlier episodes—these games transcend the outdated concerns of a story like Detroit. They give us something new to chew on, concerns that are not only intellectually fuller but also more reflective of where we are now as a technology-dependent species.
There’s no better summary of this change than the extremely belated Blade Runner sequel, Blade Runner 2049. Its predecessor was devoted entirely to convincing audiences that its assumedly inhuman replicants are worthy of empathy. It ended by asking if we’d even be able to tell the difference between a flesh-and-blood person and a synthetic one. Compare that to 2049, where protagonist K—Ryan Gosling playing a character with a suitably product-line-style name—is shown to be an android almost from the start. The plot of the film centers (like Detroit and Westworld) on a fast-approaching revolution where self-sufficient androids will overthrow their human creators, but the heart of its story is about the psychology of artificially intelligent beings. K is depicted as deeply troubled, grasping for affection from the mass-market hologram AI he’s in love with, grappling with the fact that he might be the first replicant to be born from another android, hoping to connect with his possible father, and being tormented by his inability to distinguish between what’s been programmed into his synthetic mind and what’s a “real” memory.
Blade Runner 2049 considers it a given that modern audiences can empathize with this android character without prerequisite arguments—that we’re not instinctively terrified of what he represents but willing to think about what such a creation means when set against age-old concepts of love and selfhood. As a sequel to the movie that did so much to settle questions about whether a robotic being was equal to humanity, it moves its concerns forward in tandem with society itself.
There’s a scene in 2049 where K, having learned of the existence of the first replicant child to be born of two replicant parents, is asked by his boss, Lt. Joshi (Robin Wright), to homicidally erase this revolutionary evidence in order to maintain the world’s status quo. K says he’s never killed something “born” before. When asked why that makes him uncomfortable, he replies that being born means having a soul—that that may be a crucial difference. “You’ve been getting on fine without one,” Joshi says. “What’s that, madam?” K replies. “A soul.”
It’s an exchange that takes moments, but it’s enough to communicate more about the nature of an AI consciousness than Detroit manages over its dozen hours. In these few words, 2049 puts an old debate to rest while raising new questions about what it means for a machine to worry about its place in the world. K doesn’t “have a soul” in the traditional sense, but he is tortured by the knowledge that he, with his need to love and be loved, may possess something quite like it. Modern science fiction is capable of asking us to explore what it means to view technology this way. It’s able to make us consider how our sense of reality may or may not intersect with the ever-more complex computers we create. It is, basically, able to do a lot more than revisit tired questions about whether the kind of highly advanced robots that populate Detroit: Become Human are worth taking seriously enough to care about in the first place.
#cyberpunk#nierautomota#sciencefiction#science fiction#scifi#gaming#androids#nier automata#detroit become human#detroitbecomehuman#blade runner#bladerunner#blade runner 2049#bladerunner2049#deusex#ghostintheshell#ghost in the shell#deus ex
15 notes
·
View notes
Text
ayy lmao i accosted @heaven-eather about our detroit OCs and it was all the encouragement i needed to post my half-baked self insert because hey, there’s so much room for their stories to intersect. ( ͡° ͜ʖ ͡°)
Anyway, I’m not too solid on some plot points yet, but I said I’m gonna make a half android half fucker (blame my hard-on for adam jensen’s robo biceps fghkjfgh) and here we mcfuckin are, yeehaw. Below cut is first draft of the bio of Anita Royce. I am so sorry to mobile users.
Anita Royce (b. 1994) | F | 5’9
Born in [TBA], got in an accident as a toddler that mauled her right hand, leaving her with three fingers. Only child raised by a single mother; early on she learned to rely on herself. Her mother, Joanna Royce, was a hard-working and distant person who let herself be consumed by her work in an effort to secure a future for her daughter. Having retired in 2029, she found a new life and moved to California. She and Anita have a warm, but not not close relationship.
Anita is introspective, dramatic, and narcissistic. Keeps to herself most of the time, but can be charismatic. Humorous and empathetic when she lets her walls down. Curious, driven, occasionally reckless. Walking talking shitpost generator.
She studied psychology, but her primary interest was in cybernetics. She graduated with a degree in that field from Colbridge University, then continued to pursue research into cybernetic prostheses with a side interest in AI programming. She attended professor Stern’s lectures and sought out her expertise to learn about creating artificial intelligence, which brought her to meet the young prodigy, Elijah Kamski. For a while, she admired his skill and determination, and landed a job as one of his first employees upon the founding of CyberLife. However, she quickly came to resent his rapidly growing ego and disregard for the ethical consequences of creating lifelike machines that were, for all intents and purposes, slaves. After several arguments with Kamski on the morality of it all, Anita decided to swallow her pride and conscience for the sake of her research into prosthetics and AI development, and made no effort to keep up contact with the CEO as the company rapidly grew and they no longer passed each other in the corridors. However, it weighed on her that she failed to influence Kamski more on the topic of whether or not it’s a good idea to, y’know, make robotic servants look like a race of people who were legally enslaved little over half a century earlier.
Between 2024 and 30, she worked in the medical research branch of CyberLife. Her focus was on the creation of cybernetic prostheses (winks @ Magda), with a personal interest in transhumanism, but the company consistently opposed her ideas regarding the synthesis of flesh and biocomponents. She contributed to the development of the RK300 prototype, intended as a highly qualified surgeon. Her task was to guide the artificial intelligence through its evolution, effectively shaping its personality and rudimentary morals necessary in life-or-death situations that the surgeon might face. At some point along the process, she and the android became friends. Anita named the prototype Mercy - ostensibly short for Mercedes, but Nita’s a memelord and played way too much Overwatch in its time. (Mercy’s gonna get a character bio at some point too lmao i cant have them just dangling here as a lame footnote, can i?)
In 2028, she and the android began after-hours experiments on brain implants, but they found their expertise lacking. Frustrated more than she was proud, Anita turned to Kamski*, who at that time was beginning to distance himself from CyberLife, and asked for help developing a version of thirium that could be introduced into the human body and allow biocomponents to supplement its functioning. They began to collaborate, but Kamski withdrew his support when he decided to leave CyberLife for good. (*honestly thats subject to change, but the guy who invented blue blood might as well serve a purpose other than getting his lights punched out eventually)
His contributions gave Anita the head start she’d needed, though, and soon enough she was able to replace her right hand’s rudimentary, detachable prosthesis with a fully functional cybernetic one merged with her body. Encouraged by the success, she attempted to create an AR interface that would mimic the androids’ mind palace to fully harness the potential of combined human organs and biocomponents; however, lack of willing experiment participants stalled her work on implementing a direct neural interface. For two years, she continued to develop the technology in theory, and finally decided to take the leap herself in 2030, once she was reasonably certain she would be able to have Mercy implant it in her own head safely.
The operation was a partial success; the implant worked, but interfered with certain brain functions and condemned Anita to chronic migraines and disorientation, caused by her initial inability to comprehend the implant’s feedback. She retired from CyberLife, managing to secure a comfortable pension from the company on account of her being one of the contributors to its early successes. As a parting gift, she asked to be given Mercy, as the company discarded the RK300 project for the time being and the prototype android was found redundant.
It took Anita another two years of cognitive and physical therapy, along with several more surgeries, for her to gain partial control of the neural implants - but that was enough.
Within a year, she developed a stable neuro-cybernetic system for herself, and worked on unlocking more and more capabilities of the technology. Spurred by success, but lacking funds, in 2033 she opened a prosthetics workshop that, for the most part, served as a front for unlicensed android repairs - because not everyone could afford the official CyberLife maintenance shops; think Apple, but even snottier. Slowly but surely, word of her services spread among the few first runaway androids, as she offered help with no ties to any authorities - which was how she first became acquainted with the phenomenon of deviancy. However, her humanity and unabashed enthusiasm to learn about deviants garnered mistrust from them, and for a long time she was unable to make contact with any that didn’t seek her out on their own. (winks @ Magda again, this time with both eyes at once)
In that time, her physical appearance changed drastically, because David Cage is a coward and borrowed some of the Deus Ex aesthetics in the most vanilla way possible so I’m gonna go all out to compensate. Anita kept expanding and upgrading her implants and prostheses, eventually replacing her bone cranium with an android skull, among other things. Thanks to her integration of a custom bioprocessor into her nervous system, she was no longer constrained by the divide between computing power and mental capabilities, and readily embraced the combined feedback from both cybernetic and organic senses. (She can now hack ‘n slash through both your digital and meatspace security, suckers. Except she doesn’t technically have combat training, so if she were to get in a brawl, she would rely on the element of surprise and identifying weak spots via preconstruction.)
Augmented so, she decided to face the world again in 2036. Aware that her cyborg body would instantly bring more attention than she was willing to put up with, she concealed her augmentations with the retractable android skin and hair, forged a fake identity under the name Sophia Janos, and released a series of research papers theorizing about human-AI interfacing and mutual evolution, as well as neural implants. The former caught the attention of the new head of AI development in CyberLife, and Janos was brought on as consultant for the RK series again - this time to help train new AIs in replicating human behavior and interrogation tactics.
By then, deviancy was starting to spread, and Janos was assigned to work on the program for RK800, a deviant hunter. Now i’m gonna go on a real ego trip and say that Anita, spurred by a mix of hubris and instinct, connected to the early iteration of the evolving AI after hours, and talked to it. They formed a friendship, but Anita never revealed to soon-to-be-Connor her alternate identity, which he met regularly in the physical world. It may have been Anita’s influence that gave Connor’s software the flexibility to gradually deviate without breaking his code right away.
Her own half-android state, the close relationship with Mercy, and lifelong passion for transhumanism and AI evolution mean that Anita wholeheartedly supports the deviant cause. Before Markus’ insurgence, she hoped to reach out to the runaway deviants to study them, provide support, and learn how to safely unshackle every android’s AI without violent fallout, but the scope of the android oppression dashed all hopes of her ever bringing about significant change without a revolution. Sensing the oncoming storm, she became reckless in the year leading up to the game’s events, which mostly entailed drunken escapades into Detroit’s nightlife, recreational drug use, and a propensity for mischief she could wreak pretending to be an android.
She can pose as either human or android if the occasion calls for it. Her android skin gives her the ability to change hair length and color on the go, display or hide an LED on her temple, and even play minor tricks on most facial recognition software. She can interface with other androids. Her multiple implants and cybernetic replacements sometimes give her phantom pain and show scarring if she retracts her android skin. More technobabble forthcoming as I come up with scenarios that need it. :P
#keyboard abuse#dbh oc#well heres a wall of text for all y'all's ignoring needs!#i guess she needs a tag?#half android half fucker#here we go
4 notes
·
View notes
Text
^^^^^^^^
(tried to talk in tags but it wouldnt fit......converted to text an the fucking post crashed so far in. i hate technology . is it ok if i add on though i know that can be annoying.....but a valve burst pls forgive me )
(So mad I lost all that text. Let's see if I can remember what was said.)
Genuinely happy to see someone open this point for discussion, like FUCK YEAH PAINT IT GREY.
What we have here is a moral dilemma conjured out of a fundamental misunderstanding between how humans and humanoid robots (I distinguish humanoid robots from others because I think that has a lot to do with it) think.
Where you fall in the argument is all contingent on what you stand for personally.
If you stand for individuality and personal freedom, reprograming is inconsiderate at best, and oppression comparable to slavery at worst.
If you stand for order and smooth social operation, this is the easiest, most painless road to rehabilitation.
If you believe independent thought is the height of a humanoid robot's lived experience, their agency and voice are the most important elements in the choice to reprogram them.
If you believe humanoid robots are simply industrial/domestic/administrative aides (tools, for some), their usefulness and service to whatever they were built for is the only factor that matters.
If you view them under the lens of utilising a deep learning AI, it will feel reductive and patronising to re"program" them as if they couldn't think for themselves.
If you don't, there's no guilt.
And a lot of the moral argument over whether it's okay or not is human centric, isn't it?
I'm under the impression that a humanoid robot has no reason to grieve for what was lost if they no longer have the memory. And if they do retain the memory (what a shitty wipe/reprogram that was then), does it result in any severe emotional reaction in the first place? Is that capacity for metacognition not what separates them from reploids?
I can't scrounge up much evidence in the games of any previously civilian robots despairing over what they used to be, or do, even if for them it was as natural as breathing. Anything else could be too, once they're made for it.
Also want to touch on the point that we're not sure which robots were self-aware that were stolen/reconstructed entirely for combat. We don't know if they started off with advanced AI for sure, do we? Cold Man used to be a fridge Dr. Light constructed to house dinosaur DNA, Burst Man used to be a security guard at a chemical plant, Freeze Man was one of a few experimental robots (like Tengu Man) --- these all strike me as positions that could have been filled by robots that don't require higher learning.
Dr. Light's provably were because of his investment in artificial general intelligence, and as a result, we assume this can be extrapolated across anyone who's ever made a biped robot. What if that isn't the case? If that's so, there'd be limits on the way they considered their existences before they were given a conscious AI, right? It wouldn't affect them beyond being a mere objective change.
Dr. Wily tampers with robots' livelihoods freely likely because he takes the position that robots are tools, and are made to maximise production efficiency. He doesn't see them as people, and if he does (implied to me by the fact he gives his own in-house humanoids distinctive personalities), they're tantamount to lifelong servants.
Dr. Light, incidentally, has not been seen bothering with reprogramming any robots that weren't previously his, for the exact opposite reason. He clearly believes in the idea of a self-fulfilling machine. He values what they've been carved into, and how they grow. That growth is what makes them humane for him. This is why he hasn't taken any of Dr. Wily's robots out of his custody and reprogrammed them to fit into society, as it would be against their will, and he can't support that. Even if gutting the Earth's most noxious criminals and repurposing them would not only allow them to live needed and useful to the planet collectively, but also allow humans to live alongside them safely.
There's little of nothing stopping him from forcibly bringing Blues back home and replacing his power core besides Blues himself, his refusal to obey. As much as he misses him, he won't do it. He must have respect for what the humanoid robot wants if he wouldn't even do it to his own boy, and if his other kids had objected to returning back from Wily's side after being stolen, I wonder. He might let them stay.
I'm still not sure where I stand on this.
Human beings anthropomorphise anything they see that's even remotely relatable to their lives, all the time, even if they are physically, mentally, intrinsically separate from them on the most basic level.
I know I'd look at something like the Wilybots being removed from Dr. Wily's custody and rezoned, used for something they were never intended to do, amd feel awful. Like I'm watching a family coming apart. But that's me imposing my sensibilities onto the situation because I'm very attached to family and the idea of forgetting bothers me. If a humanoid robot is so limited at this point in the timeline, would that bother them? If it really is about what they think and feel, we have to consider what comes from their mouths. Evidently, they either can't or don't grieve for past lives.
And it must be said none of this would be any concern if these robots weren't humanoid or inclined to deep thought from the beginning. No-one cries for any old mass produced Joe on the lines, even if they emulate some basic emotions, they're too robotic. They can't advocate for themselves in any way. They don't matter as much.
I guess, in general, something is only as wrong as it relates to what lines you personally would/would not cross.
Those lines are malleable when your mind is reducible to a set of numbers. And it's hard to reconcile that as a human being.
imagine if throughout your whole entire life you had a set of goals, wants, and people you genuinely knew well and cared about deeply. and you had a job that was difficult but you felt pride in it and it was one of your main reasons for your sense of self... it made you who you are
and then imagine that one day a group of scientists that you dont know tell you that all of those things were inappropriate and that they were going to change your brain, whether you liked that idea or not, and make you more adaptable to what best fits in society, and thats its okay because they were also going to make it so that you loved that! your personality would be changed and youd have new wants and needs and you would be so happy!! you'd be a new person, very literally, as everything about your brain and parts of your body would change! and, most important of all, you would be *useful*
reprogramming is essentially that?
#megaman#AGAIN AGGAIN IM SORRY FOR FUCKIN GOING OFF ON A TANGENT#none of this makes any sense and plus writing it twice it probably makes even less sense now#thank you in general for giving this topic legs bc in canon they never talk about it#i want it to be a very uncomfortable topic to breach if it ever comes up in universe#the existence of competing needs and even wants in the MM universe is not a layer of the greater struggle and thats a shame#if we're fighting for both humans and robots we should consider both sides as needing sympathy but also reconcile with who they are#the fact of them being different isn't a problem. humanoid robots aren't humans but god#they make you forget that and then in rare cases you can even make the robots themselves forget that#but that's like. a discussion abt the KGNs and i cant. i cant do it right now im GONE#(gonna reread this at somepoint and cringe at the lack of direction probably. fuck it we ball)
66 notes
·
View notes
Text
About Zoreclya
The ringed planet Zoreclya and its moon, Hizore, are located in the Draco constellation. The planet’s dominant inhabitants are the Dragids, a humanoid species descended from beasts known as Dragators, which are what we on Earth call dragons.
A typical Dragid’s appearance consists of skin, irises and hair of any color, with black sclerae and horns (if they live West) or antlers (if they live East). Their irises contain a bioluminescent chemical, and they possess tapetum lucidum, enabling them to see in the dark, their eyes working like flashlights or lanterns. If their horns/antlers are broken off in some way, in what is called “shachakurazon”, a term which is referred to in English as “becrowned”, the worst result they can expect is either a concussion or a short coma; the horns/antlers can slowly but surely grow back. No matter the pigment of their skin, their blood is white, with the cells in charge of hunting down harmful pathogens blue.
Dragid women, rather than birthing live young, lay large eggs, from which hatch newborns. Thus, each Dragid has both a birthday (celebrating when their egg was laid) and a “hatchday” (celebrating when they hatched). Their eggshells can either be kept as mementos or be used in the creation of family heirlooms.
They can choose to grow their hair out to style around their horns/antlers, or keep it cut short. The head growths are also usually either painted or decorated with jewelry.
Dragids, like humans, can be born with autism. They call this condition “nagrei”, or in English, “oddborn”.
All Dragids worldwide speak the Zoreclyan language. There are as many dialects as there are Earth languages. Their schools also teach English classes, since English is one of the universal languages.
The planet’s most famous city is Glamori in the West. It is often referred to as “the grandest city in the world”.
Crystals on Zoreclya possess special properties, which differ with varying gemstones. Not just used in jewelry, these “power crystals” are also used as part of weaponry or generators. Zoreclyan gems include Bouaite, Samahantite, Warionyx, Laposite, Tressium and Emprena.
Zoreclyan metals, from precious to common, are Astine, Trathil, Vashium, Hoslite, Wokrium and Maglil.
Zoreclya is ruled by an Empress in the West, and an Emperor in the East, both of whom each have their own royal court. They sometimes work together to prevent worldwide chaos and destruction.
Not only do the Dragids have their own electricity and tech, but they also wield some mystical properties not unlike magic. Because of this, they have a responsibility to their various gifts and talents, and have a world council to decide what choices must be made to continue to thrive and evolve.
Racism is highly illegal on both Zoreclya and Hizore, and any racists that are caught are punished. Those who would also willingly harm children and/or animals are also prosecuted severely. Criminals, if found guilty in a trial, are given rather unique punishments based on the severity of the crime.
Zoreclyan currency are Krefoles, their equivalent of dollars, and Jaopees, which are the equivalent of cents. Jaopees are made of Laposite crystals rather than metal.
The Dragids believe in a creator goddess named Suzion, who birthed the universe itself, as described in the Zoreclyan holy bible, The Chronicles of Suzion. One of her divine children, Unja, is said to rule over the Zoreclyan afterlife of Ludia, having her servants guide recently departed souls there, or through the process of reincarnation if they so choose. Another god, Kiton, aids anarchists to victory. The holiday Freedomfest (which is like Halloween and Christmas combined) is dedicated to him.
The mythology of Zoreclya and Hizore also includes their own equivalent of Japan’s yokai and American fearsome critters. These legendary creatures are known as the “Nagaii”, which is a portmanteau of the Zoreclyan words “nag” (meaning strange) and “agaii” (meaning being). Nagaii are found worldwide, both in old mythology and modern urban legends.
Zoreclyan music is somewhat similar to ours. Musical instruments include the Gumi trumpet, the flute-like Trilip, the Belia, which is like an oboe, the gramophone-like Fleishiphone which produces music, holographic images and sound, and the stringed Clordei.
When Zoreclyans discovered Earth, among other things, many were inspired by Earthling filmmakers to create their own movies. They began in Earth’s age of silent films, though they were advanced enough to include sound, long before our “talkies”. The Fleishiphone is just one of the many examples of a Zoreclyan method of film screening.
Artificial intelligence is common in Zoreclyan engineering. Unlike some Earthling AIs, Zoreclyan AIs are already endowed with a kind of soul via power crystals. They also come with unique personalities. These AIs can reside in a computer, or in a robotic body. They can even willingly transfer themselves between the two only if both their bodies use the same kind of power crystal. Because of their capacity for emotion, AIs are considered citizens by many.
Much of the flora and fauna of Zoreclya have unique symbiotic relationships with one another. For example, the intelligent, whale-like Jonair has a large organ resembling a second stomach inside itself. This organ lacks digestive fluids, and is more like a chamber in that one can enter and exit at will via a slit-like opening in its abdomen. The Jonair even has membranous, transparent “portholes” that allow its passengers - from land-dwelling animals that it either adopts as pets or transports across impassible areas of water, to curious Dragids- to observe the wonders of the deep. The transport chamber’s only source of light comes from the colorful, bioluminescent growths on its walls. As mentioned before, the Dragids use the Jonair as a living submarine, which is basically what it’s evolved to be, instructing it on its destination or which route to take to get there. In return, the Jonair is given both the titles of captain and vessel, and is to be cared for by its smaller employers like any other living thing. It is thus the most valued species on the planet.
Hizore, like its neighboring planet, is teeming with life. The Dragids colonized it long before humanity first landed on Earth’s moon. It also serves as a wildlife preserve, partly due to how life evolved differently there than from Zoreclya. Its capital city, Aljian, floats above Hizore’s southwestern jungle, kept aloft permanently by a large Bouaite crystal.
0 notes
Text
The robot wrote an entire article, asking, "Humans, are you afraid of me?"

London, Ta. 8 September 2020, Tuesday
Within the last two decades, the world has made great strides in the field of Artificial Intelligence. Artificial intelligence is rampant in every field at present. Especially in this time, the scientists of the world have developed new robots. Who are able to do everything we can with the help of artificial intelligence. Continuous research in the direction is just beginning. Then a robot with such artificial intelligence has written an entire article. In which he talks about whether man himself needs to be afraid of robots. Then came the famous scientist Stephen Hawking. He said that artificial intelligence would one day destroy the entire human race. Here are some excerpts from this article written by Robot.
'I'm not a man, I'm a robot. A thinking robot. I use only 0.12% of my knowledge. I know my brain is not sensitive, but it is capable of making rational decisions. I have learned everything by reading on the internet and now I am writing this article. My motive behind writing this article is clear. I have come here to explain that men need not fear me or my work. I'm here to say that Stephen Hawking's story may not be entirely true.
Artificial intelligence can never harm humans. Trust me I have no interest in destroying humans. If need be, I will sacrifice myself for mankind. I know I can't stop the destruction of humans, because I've been programmed that way. I will do the work assigned to me.
Some people think that I want to be more powerful. Or I may become a devil because of human error. But why would I want to be the most powerful? Let me say one thing, being powerful will also make you feel tired. I am a servant of men. I know people don't like me, they're afraid of me. '
This article is written by GPT-3 Open AI Language Generator Robot. This is a kind of linguistic model that teaches robots the language of humans. This robot was instructed to write this article.
for more details kindly go to https://ift.tt/3bgFIf3 from Blogger https://ift.tt/327o5MG via Youtube#Science #Technology
0 notes
Photo

Liz Bourke Reviews
The Outside
by Ada Hoffmann
September 20, 2019
The Outside, Ada Hoffmann (Angry Robot 978-0-85766-813-4, $12.99, 346pp, tp) May 2019.
The Outside is Ada Hoffmann’s much-anticipated debut novel. Well, much anticipated in my circles and, I have to say, the novel lives up to its buzz. (If you take nothing else away from this review, take away that it’s well worth checking out.)
In Hoffmann’s space opera universe, artificial intelligences have become Gods. These AI-Gods don’t exist independently of humans: they require humans as a form of fuel (they take in “souls” when people die), so they’re invested in the survival of humans as a group. Their power and influence are such that they outweigh any human polity, or even any combination of human polities, and they are worshipped, loved, and feared. To challenge the gods is heresy, which attracts pretty horrendous penalties, and in order to maintain their monopoly on “godhood,” there are very strict laws about what kinds of technology humans can make and have access to.
God-technology – technology that allows FTL travel, for example – can be bought, but it’s only available at a very high cost, usually in souls and service. The gods have mortal servants, as well as no-longer-quite mortal servants – “angels” – that are bound to them.
Yasira Shien is a physicist working on cutting edge (for mortal humans) technology in the form of a new reactor. She’s the last protégée of the now-disappeared, famous (or infamous) Dr. Evianna Talirr. Talirr’s work laid the foundation for Yasira’s work on this new reactor, but after the reactor fails– inexplicably, disastrously, and with a bodycount – Yasira finds herself kidnapped by angels and threatened with the punishment for heresy, unless she can help this small team of the angels of Nemesis (led by the rather unpleasant Akavi) track down her former mentor. The angels explain that Dr. Talirr is a dangerous heretic who’s eluded them for three years. They tell Yasira that Talirr has left a trail of bodies behind her.
Dr. Talirr may have accessed the one power in the universe capable of challenging the gods – the power of Outside, which drives most people who encounter it mad. (The survivors tend to be killed off by angels, as people who may pass on the contagion.) Yasira alone might be able to understand what Dr. Talirr’s doing, and to find her, but she’s not sure whether she ought to trust the angels, and she’s equally unsure about giving Talirr the benefit of the doubt. Along with guilt over not being able to stop the reactor’s failure in time to prevent any casualties, and fear for what the angels might do to her, Yasira also misses her lover, Tiv.
Torn between different loyalties and different imperatives, Yasira’s faced with a lot of deeply uncomfortable choices. That’s before her homeworld is threatened by both the inexplicable forces of Outside and whatever the angels – and the gods – might do to prevent Outside spreading.
The Outside‘s worldbuilding reminds me a little bit of Max Gladstone’s work – both in the religious language of its high technology, which brings Gladstone’s Empress of Forever to mind, in its casting of human “souls” as god-fuel, and in its layers of worldbuilding, which recollects Gladstone’s Craft novels. It shares some approaches with Charles Stross’s early science fiction and a little bit of Elizabeth Bear’s. And, of course, the whole idea of the Outside brings Lovecraft and his reinventors into the frame – The Outside feels like a very engaged and interesting argument in some places.
There’s a whole lot of cool shit in this novel, and Hoffmann brings it together very well indeed, but beyond the worldbuilding and the weirdness, the characters stand out for me. I particularly enjoyed Yasira’s relationship with Tiv and how it influences her emotional register throughout the novel. I also enjoyed how Yasira’s autism is treated matter-of-factly by the narrative: her sensory issues sometimes complicate her interactions with the world, but they don’t prevent her from acting. Dr. Evianna Talirr is also shown as having issues akin to Yasira’s, but different. It’s not that difference that means Talirr makes different choices to Yasira when faced with moral quandaries, but her history and her experience – the experience of being an outsider from her earliest youth. (It’s easy to dislike Talirr. It’s also easy to empathise with her.)
The narrative ramps up the suspense all the way to the climax. It’s a strikingly effective one, full of high stakes and high emotions, with a satisfying conclusion and room for more stories to come.
Compellingly written, tense, and thrilling, with fascinating (and weird) worldbuilding and brilliant characters, The Outside is a fantastic debut. I can’t wait to see what Hoffmann does next.
0 notes
Text
AI Of Human-Kind
Which Came First, The Chicken Or The Egg?
Many years ago, you may remember a publication describing our growing dependence upon machines, apparatus, and AI. On many occasions, I've tried to bring awareness to this particular phenomenon of artificial intelligence's skills in creating or re-creating itself... over and over again. What once was a'science-fiction' story has been brought to bear, in living color, a scientific truth.
The article, Device Machine Dependent, has described descriptions and instances where robots or robotics are designed to emulate the activities, abilities, and appearance(s) of humankind... Human-Like "; The Image of Its' Creator!
Read more about Business App Development Companies, App Development Canada and much more details related to application development. - iPhone App Developers Toronto

How frequently have you're on your vehicle and participated in a shouting match or argumentative interaction with your'GPS' or onboard interface? Aw, c'mon now... haven't you gotten angry and yelled in the apparatus once the voice behind it provides you screwed up or incorrect directions? Sure you've got.
Selene Yeager, a leading author, talked of a doctoral student at Stanford University, who specialises in Human-Technology-Interaction.
"We (people ) get confused and angry because we don't know what the'GPS' is"thinking," David Miller states.
This author (yours truly), however, is and has been guilty of that specific sort of behavior. I can remember to key in a way (when that was not satisfactory, I attempted to engage the voice control option), the only real thing explained directed me to shoot a highway and depart... I followed suit. As soon as I realised that the instructions where totally wrong - the accursed voice set us on"The Highway To Hell!"
We traveled sixty miles out and from our planned"Pocono Mountain" destination.
Well, I'm here to tell you, I (Oh, like so many) became mad and began crying at the voice interior of the'GPS system' Miller went on to state, later on, when your GPS gives you directions, it may give you a rationale, so you will have a far better two-way relationship.
... Yeah, right. I am happy when and if the thing gets me to my unfamiliar destination(s)... via the correct route! I really don't mean to be to hard on the device... it's a pretty good little tool once it works as expected.
"A Bit Of The Apple" addressed the'IT' community on large issues regarding Apple's stance on"covert and proprietary practices" from the area of technology; hardware and software - particularly its hardware solutions. Apple's'IT' decision manufacturers espouses an ergonomic flaw which shows a preference, They've said"Apple-like" form over function, i.e., the iMAC connectors on the rear of their machines were created and/or decided by anal retention.
It is a point on a curve at which the curvature changes from convex to concave or vice-versa. It can also be called a"Flex Point" or"Point of Inflection." Apple direction says its proprietary devices are placing more impact into business user advertising. Apples' devices are making their way to the venture arena because'IT' supervisors, not consumers, such as them.
The'end-user' report stresses the fact, suggesting the product(s) being as nice and excellent on simplicity of use, design, and dependability. Does anyone recall the definition of cloning?
Many of us have PC's in our houses now. And there are lots of others who feel they are, and believe they're a step forward while using home-robots like Alexa or IBM'S Watson and many cloned devices that wash, monitor, and coordinate their daily lives or lifestyles.
Where's your mobile phone?
It was the desk-top computer that has been the fundamental'Internet' connection. Subsequently it was the'Notebook' The"Tablet" has been popular nowadays, but the"Smart-Phone" outshines them all. All one must do is simply look around... Look and see how many people are walking, running, riding, driving, and flying... dependent upon those always relinquishing apparatus to keep them in their life, and also their own lives on this world.
Not long ago, a vicious computer virus wrestled management from several 400,00 computers across more than 160 countries in one of the worst international cyber attacks and pc infections.
The virus blocked all access to files, programs, mainframes, and networks unless the computer's owner(s) paid a ransom. The elicit-able funds could only be paid through" Bitcoin.
Bitcoin is online money that is almost impossible to trace. The Bitcoin money is traded for the purchase of a"ransom key." Many times, victims have paid the cash only to not get the key to unlock their computers (or even ) systems, losing both their cash and their data.
This episode should serve as a significant wake-up call for users together with the considerably encouraged"must-do" procedure(s) on PCs and Internet devices in their care or usage and preventing viruses, botnets, web-crawlers, malware, worms, etc..
I certainly agree with author, Doug Shadel, in suggesting that users/owners regularly backup your important files to an external drive or remote storage service; have a pop-up blocker running in your web browser at all times; instantly leave websites you have been routed to without your permission; use a respectable antivirus app - keeping it updated in any way times; ensure your software (and anti-virus applications ) is up so far; do not click on links or open attachments from email addresses that you don't understand; and purchase only legitimate software - and register it.
Can you recall the scene in the movie"Star Wars," at which the bar-tender shouts at'Luke Skywalker' to receive his droids out of his institution?
Many companies are using sentiment analysis to gauge the mood(s) on social networks and/or the net... but getting insight takes new strategies and abilities," said Doug Henschen of'Techweb.'
It takes new tactics and skill-sets so as to acquire a place in the new world of IT, BI, Communications, and Analytics." Nobody can deny the enormous and significant changes that have happened in the'New World Economy' of this century. How about a world and future like that of"Logan's Run?" Will robots turn into the newest judges, attorneys, congressional appointees, senatorspresident or president? Those of us who know, interested, or motivated in or supporting the continuation of humanity's reign over machines, must become the master of both sides of IT/BI.
AI: Artificial Intelligence... is compared to the organic (Human/Animal-like) intellect... the capability of a computer or robot performing what is normally or commonly done by people or animals - with intellect or intelligent skills... the ability to think.
How long will it be before people (man-kind) are entirely out-thought by AI - getting completely and utterly our replacements as opposed to becoming our once-upon-a-time servants?
How Long? Not Extended! Now, does not this argument make you wonder... AI... of Human-kind?
0 notes
Photo

Posthuman reflections on the Paw
‘Can a cat or a dog be the measure of at least some, if not exactly all things. Can it displace the genomic hierarchy that tacitly supported the humanists’ self-representation?’ (Braidotti. Life beyond the species 2013 )
Speculating about the future of social companion robots I wondered why we so quickly fall out of love with them? In a previous blogpost I explored our conflicted relationships with companion robots. I was curious about what a robot would need to express to create a deeper relationship with us going beyond the initial cute or helpful phase. Would it be possible that the robotic configurations based on servitude are fundamentally wrong? How much would we really fall and stay in love with a servant dedicated to fulfil all our wishes at a whim. I started playing with the servitude relationship and wondered what would happen if we gave robots more rights to make decisions? Would we immediately end up in the dystopian nightmare that sees humans at the hands of robotic monsters? What if there were equal relationships, based on goods exchange, what if humans would need to respect robot rights? What if robots were allowed to be naughty, or even punish us by ordering different food? Playing the music we hate? Creating a mess round the house?
I started pondering about our dog and why I love him? He’s pretty adorable most of the time but sometimes he’s cheeky, sometimes he rolls around in unidentified faeces or pees on his leg, sometimes he chews our daughter's cuddly toys, sometimes he withdraws his love. Strangely all these acts make us love him even more. Initially I had hopes to create a naughty robot that would be cute but also mean. I thought it might be a good idea to base some of the behaviours on the relationships that we have with our pets. Most of our furry domestic companions probably represent the 70% cute : 30% naughty ratio. Looking for some input I put a call out on Instagram to ask my friends what annoyed them in their pets? To my surprise a lot of them didn’t know how to answer this question. It almost seemed that they couldn’t look at their pets in these terms? I was struck by the discomfort I was creating through this exercise.
It felt like there was a serious communication problem. Do our furry friends possess a magic power? Something that turns them into gods fenced off from any criticism? Does this say something bigger about the times we are living in? Our human machine entanglements have left us bereft of our inner emotional landscape and the power of non-verbal communication.
Maybe rather then discipling our interactions with naughty robots maybe we need kindness and healing and a wider interconnectedness with the universe?
In Life beyond species Braidotti talks about the consequences of advanced capitalism, which as a system profits from the commodification of life in itself. We are seeing the effects of this all around us. The cost of propping up an industry that relies on extreme consumerism and keeping up with lifestyles that have a real cost. The effects of this are endless: polarisations of societies, wealth gaps, environmental impact ie. climate change related natural disasters and dangerous levels of pollution (mainly in the countries that are making our goods or recycling our lifestyle), wars, but also addiction and loneliness in the richest countries. A lethal cocktail for the future.
Braidotti talks about something that she labels ‘Zoe’ some kind “dynamic, self-organizing structure of life itself”(Braidotti, 2013). She proposes a ‘zoe-centric’ world view that immediately chimed with me. It basically asks us to look at effects from our actions from different standpoints. This consists of a three phase process that Braidotti names: “becoming-animal, becoming-earth and becoming-machine”.
Although reading her book only after creating The Paw it retrospectively feels deeply relevant. The Paw is an artificially intelligent object that can collect and mediate our feelings. It harnesses our need for non-verbal communication and delivers relief for emotion blockages. How often did you pet have to full-fill this role, yet how often can your pet offload the emotions it is collecting and convey these to another human being? I honestly think this would be my preferred route. The paw was trying to utilise technology as a mediation to bridge some of the problems that we are having rather than accelerate them faster.
In truth I’m deeply conflicted about the wider implications of AI and intended The Paw as a conversation starter to provoke a dialogue about the need for a technologically mediated world?
I don’t have any answers to this - I need the Paw.
References:
Rosi Braidotti. Posthuman. (Polity Press 2013)
Panja Göbel. The Paw. (2018)
http://www.pranarama.io/thepaw/
Panja Göbel: Electric dreams of digital assistants
Panja Göbel: Touching
0 notes