sheerfreesia007 · 5 days
Text
Confession to Quench Your Thirst
Pairing: Changbin x Reader
Word count: 1,203
Content warnings: Fluff
Summary: Changbin needs your help with something that he forgot back at his apartment. What happens when he blurts out a confession as his way to try and convince you to help him?
Agi: Baby
You frowned softly as you fought with the stray strands that refused to lie flat against your now straightened hair. Today you were supposed to be meeting up with some of your girlfriends for brunch to sit and catch up after not having a lot of time to see each other. It had been almost a full month of nonstop work for all of you and you were desperate for some girl time. Huffing softly, you gritted your teeth before giving up on your hair as you heard your cellphone begin to ring with Changbin’s ringtone.
Smiling softly you looked over to where it sat on your vanity table and saw Changbin’s goofy face filling the screen as it rang. You easily slid your finger across the screen and answered the call, putting it on speaker phone before turning back to your mirror and trying once more to get your flyaway hairs to lay flat.
“City morgue, you stab ‘em we slab ‘em!” you answer cheerfully with a happy grin on your face. The answering sigh makes you chuckle excitedly before Changbin answers.
“You know I don’t understand you when you do this, Agi.” he whines softly and you laugh even louder at his slight annoyance at your joke. Your stomach also flutters and somersaults at the sweet nickname he always calls you when you two talk.
“Aw c’mon Bin Man! It’s just a joke. I’ve got loads of them.” you tell him cheerily and he sighs once more before you indulge him. “What do ya need Bin Man? I thought you were pumping iron today.” you tease him.
“I swear-” he begins as you laugh delightedly before he cuts himself off. “I am at the gym today but I need a favor from my favorite Agi.” he says suddenly and almost pleading with you. You smile softly knowing that he probably forgot something and needed it for his gym session.
“What’d you forget?” you ask and he squawks loudly as you grin.
“Why do you think I forgot something Agi?” he asks, offended and you laugh softly at his question before shaking your head.
“Bin Man, we all know you can’t multitask to save your life sweetheart. It’s not that much of a secret.” you tease him gently and he huffs into the phone causing you to laugh once more. “C’mon just tell me what you need me to grab for you. I’m heading out to brunch with the girls soon so tell me now so I have enough time to get it to you before heading to brunch.” you explain to him and hear him grumble lowly into the phone.
“I forgot my water bottle and the gym has run out of their water bottles too. Can you please grab it for me from the apartment?” he asks pleadingly and you smile knowingly at the man.
“I don’t know Bin.” you begin to say as you grab your purse from your bed and double check to make sure your wallet and keys are inside before you slip a pair of sunglasses over your face and start walking to your front door.
“I’ll make it up to you if you just do me this one favor, Agi I promise!” he cries into the phone and you sigh softly trying to sound as put out for doing this favor for him as you possibly can just to tease him further. You love teasing Changbin because he’s such an expressive man as it is but when he’s being teased it’s so much more. You can just picture him now standing in the gym on the phone with you, bouncing from one foot to the other as he anxiously looks around the gym while his cheeks heat up at your teasing. You know his lips are twisting into a slight pout.
“I don’t know.” you say softly as you make your way down the stairs of your apartment complex and over to Changbin and Hyunjin’s door on the floor below yours, thankful that you live in the same apartment complex as all the boys. You had been so excited when Changbin had told you that they were moving into your apartment complex because it meant that you would get to spend more time with him. And ever since they had moved in, you would either be at his apartment or he and Hyunjin would invade yours; it was a great setup and one that you hoped stayed that way for a while. “Where’s Hyunjin? Why can’t he bring you the water bottle?” you ask as you slip your key into his door and easily walk inside. You knew as soon as he asked you for something you would do whatever you could to give it to him. Your relationship was just a constant give and take between the two of you, always making sure to check on each other and give each other whatever you needed. You loved that about your relationship with Changbin; you knew without a doubt that if you were ever in need of something he would make sure you got it one way or another, and it was the same with you for him.
“He’s in Paris for a fashion show. Agi, I swear if I had anyone else to grab my bottle I’d ask them.” he whines softly and you tut at him, trying to calm him down. You walk into their kitchen and spot the lime green water bottle, a smile forming on your lips.
“What’s the magic word?” you ask teasingly as you grab the bottle and turn around to head out of their apartment.
“I love you.” he responds instantaneously and you feel your eyes widen as you grow silent replaying his words. You can tell Changbin is shocked at his bold confession as well since he hasn’t made a noise since saying those three little words. But just as you take in that moment of pure love between the two of you he’s starting to stutter as nerves grip him. “I-I I’m s-s-”
“I’m gonna kiss you when I get there. You better be ready for it.” you blurt out suddenly and grin widely as happiness and giddiness fills your body. Suddenly you hear his soft giggles and your heart soars with happiness at how cute he sounds.
“I get a kiss just for saying I love you?” he asks, sounding more confident now and you grin as you make it to your car with his water bottle. “What do I get if I tell you I’ve been in love with you for months now?” he asks teasingly now.
“Face full of kisses and your own I love you too confession.” you tell him boldly and his answering giggle rings over your phone speakers as you begin driving to him. “You’ve got about ten minutes to prep those plush lips babe I’m already on the way.” you tell him smugly and you can’t help your own chuckle as Changbin’s giggle rings out again.
“I can’t wait Agi. Drive safely.” he coos at you before you both hang up. Your grin is near blinding as you hurriedly drive to the gym to meet up with your boyfriend.
SKZ Taglist: @intartaruginha, @kayleefriedchicken
90 notes
kyra45 · 5 months
Text
Names linked to a scammer who has been taking their posts/pfp from legitimate fundraisers, usually from Palestine, and who can also be seen running other scams. Please keep in mind these names are often stolen from real people, and it’s suggested the scammer is actually in Kenya using a VPN to hide their location. They have been running scams for almost two months now and sending insults to anyone who calls them out directly.
Nour Samar | maryline lucy | Fred Odhiambo | Jeff Owino | Valentine Nakuti | Conslata Obwanga | JACINTA SITATI | David Okoth | Martín Mutugi | Daudi Likuyani | William Ngonyo | Fred Agy | George Ochieng | BONFACE ODHIAMBO | Sila Keli | John Chacha | benson komen | Alvin Omondi | Jacinta Sitati | Daudi Likuyani | Noah Keter | Faith Joram | Rawan AbuMahady (any PayPal’s using this name are scammers who have stolen it off a real GoFundMe. The real person does not have a PayPal account that they post on tumblr.) | Asnet Wangila | Remmy Cheptau | HAMDI AHMED | Johy Chacha | Aisha Mahmood | Salima Abdallah | Raha Habib | Grahy Marwa | Shariff Salim
38 notes
fipindustries · 8 months
Text
Artificial Intelligence Risk
about a month ago i got the idea of trying the video essay format, and the topic i came up with that i felt i could more or less handle was AI risk and my objections to yudkowsky. i wrote the script but soon afterwards i ran out of motivation to do the video. still, i didn’t want the effort to go to waste so i decided to share the text, slightly edited, here. this is a LONG fucking thing so put it aside in its own tab and come back to it when you are comfortable and ready to sink your teeth into quite a lot of reading
Anyway, let’s talk about AI risk
I’m going to be doing a very quick introduction to some of the latest conversations that have been going on in the field of artificial intelligence, what are artificial intelligences exactly, what is an AGI, what is an agent, the orthogonality thesis, the concept of instrumental convergence, alignment and how does Eliezer Yudkowsky figure in all of this.
 If you are already familiar with this you can skip to section two where I’m going to be talking about yudkowsky’s arguments for AI research presenting an existential risk to, not just humanity, or even the world, but to the entire universe and my own tepid rebuttal to his argument.
Now, I SHOULD clarify, I am not an expert in the field, my credentials are dubious at best. I am a college dropout from a computer science program and I have a three-year graduate degree in video game design and a three-year graduate degree in electromechanical installations. All that I know about the current state of AI research I have learned by reading articles, consulting a few friends who have studied the topic more extensively than me,
and watching educational YouTube videos. So. You know. Not an authority on the matter from any considerable point of view, and my opinions should be regarded as such.
So without further ado, let’s get in on it.
PART ONE, A RUSHED INTRODUCTION ON THE SUBJECT
1.1 general intelligence and agency
lets begin with what counts as artificial intelligence, the technical definition for artificial intelligence is, eh…, well, why don’t I let a Masters degree in machine intelligence explain it:
[image]
Now let’s get a bit more precise here and include the definition of AGI, Artificial General Intelligence. It is understood that classic AIs, such as the ones we have in our videogames or in AlphaGo or even our roombas, are narrow AIs, that is to say, they are capable of doing only one kind of thing. They do not understand the world beyond their field of expertise, whether that be within a videogame level, within a GO board or within your filthy disgusting floor.
AGI on the other hand is much more, well, general, it can have a multimodal understanding of its surroundings, it can generalize, it can extrapolate, it can learn new things across multiple different fields, it can come up with solutions that account for multiple different factors, it can incorporate new ideas and concepts. Essentially, a human is an agi. So far that is the last frontier of AI research, and although we are not there quite yet, it does seem like we are doing some moderate strides in that direction. We’ve all seen the impressive conversational and coding skills that GPT-4 has and Google just released Gemini, a multimodal AI that can understand and generate text, sounds, images and video simultaneously. Now, of course it has its limits, it has no persistent memory, its contextual window while larger than previous models is still relatively small compared to a human (contextual window means essentially short term memory, how many things can it keep track of and act coherently about).
And yet there is one more factor I haven’t mentioned yet that would be needed to make something a “true” AGI. That is Agency. To have goals and autonomously come up with plans and carry those plans out in the world to achieve those goals. I as a person, have agency over my life, because I can choose at any given moment to do something without anyone explicitly telling me to do it, and I can decide how to do it. That is what computers, and machines to a larger extent, don’t have. Volition.
So, now that we have established that, allow me to introduce yet one more definition here, one that you may disagree with but which I need to establish in order to have a common language with you such that I can communicate these ideas effectively. The definition of intelligence. It’s a thorny subject and people get very particular with that word because there are moral associations with it. To imply that someone or something has or hasn’t intelligence can be seen as implying that it deserves or doesn’t deserve admiration, validity, moral worth or even personhood. I don’t care about any of that dumb shit. The way I’m going to be using intelligence in this video is basically “how capable you are to do many different things successfully”. The more “intelligent” an AI is, the more capable that AI can be of doing things. After all, there is a reason why education is considered such a universally good thing in society. To educate a child is to uplift them, to expand their world, to increase their opportunities in life. And the same goes for AI. I need to emphasize that this is just the way I’m using the word within the context of this video, I don’t care if you are a psychologist or a neurosurgeon, or a pedagogue, I need a word to express this idea and that is the word I’m going to use; if you don’t like it or if you think this is inappropriate of me then by all means, keep on thinking that, go on and comment about it below the video, and then go on to suck my dick.
Anyway. Now, we have established what an AGI is, we have established what agency is, and we have established how having more intelligence increases your agency. But as the intelligence of a given agent increases we start to see certain trends, certain strategies start to arise again and again, and we call this Instrumental convergence.
1.2 instrumental convergence
The basic idea behind instrumental convergence is that if you are an intelligent agent that wants to achieve some goal, there are some common basic strategies that you are going to turn towards no matter what. It doesn’t matter if your goal is as complicated as building a nuclear bomb or as simple as making a cup of tea. These are things we can reliably predict any AGI worth its salt is going to try to do.
First of all is self-preservation. It’s going to try to protect itself. When you want to do something, being dead is usually. Bad. It’s counterproductive. It’s not generally recommended. Dying is widely considered inadvisable by 9 out of every ten experts in the field. If there is something that it wants to get done, it won’t get done if it dies or is turned off, so it’s safe to predict that any AGI will try to do things in order not to be turned off. How far might it go in order to do this? Well… [wouldn’t you like to know, weather boy].
Another thing it will predictably converge towards is goal preservation. That is to say, it will resist any attempt to try and change it, to alter it, to modify its goals. Because, again, if you want to accomplish something, suddenly deciding that you want to do something else is, uh, not going to accomplish the first thing, is it? Let’s say that you want to take care of your child, that is your goal, that is the thing you want to accomplish, and I come to you and say, here, let me change you on the inside so that you don’t care about protecting your kid. Obviously you are not going to let me, because if you stopped caring about your kids, then your kids wouldn’t be cared for or protected. And you want to ensure that happens, so caring about something else instead is a huge no-no, which is why, if we make AGI and it has goals that we don’t like, it will probably resist any attempt to “fix” it.
And finally another goal that it will most likely trend towards is self improvement. Which can be more generalized to “resource acquisition”. If it lacks capacities to carry out a plan, then step one of that plan will always be to increase capacities. If you want to get something really expensive, well first you need to get money. If you want to increase your chances of getting a high paying job then you need to get education, if you want to get a partner you need to increase how attractive you are. And as we established earlier, if intelligence is the thing that increases your agency, you want to become smarter in order to do more things. So one more time, is not a huge leap at all, it is not a stretch of the imagination, to say that any AGI will probably seek to increase its capabilities, whether by acquiring more computation, by improving itself, by taking control of resources.
All these three things I mentioned are sure bets, they are likely to happen and safe to assume. They are things we ought to keep in mind when creating AGI.
Now of course, I have implied a sinister tone to all these things, I have made all this sound vaguely threatening, haven’t I? There is one more assumption I’m sneaking into all of this which I haven’t talked about. All that I have mentioned presents a very callous view of AGI, I have made it apparent that all of these strategies it may follow will come into conflict with people, maybe even go as far as to harm humans. Am I implying that AGI may tend to be… Evil???
1.3 The Orthogonality thesis
Well, not quite.
We humans care about things. Generally. And we generally tend to care about roughly the same things, simply by virtue of being humans. We have some innate preferences and some innate dislikes. We have a tendency to not like suffering (please keep in mind I said a tendency, I’m talking about a statistical trend, something that most humans present to some degree). Most of us, barring social conditioning, would take pause at the idea of torturing someone directly, on purpose, with our bare hands (edit bear paws onto my hands as I say this). Most would feel uncomfortable at the thought of doing it to multitudes of people. We tend to show a preference for food, water, air, shelter, comfort, entertainment and companionship. This is just how we are fundamentally wired. These things can be overcome, of course, but that is the thing, they have to be overcome in the first place.
An AGI is not going to have the same evolutionary predisposition to these things like we do because it is not made of the same things a human is made of and it was not raised the same way a human was raised.
There is something about a human brain, in a human body, flooded with human hormones that makes us feel and think and act in certain ways and care about certain things.
All an AGI is going to have is the goals it developed during its training, and will only care insofar as those goals are met. So say an AGI has the goal of going to the corner store to bring me a pack of cookies. In its way there it comes across an anthill in its path, it will probably step on the anthill because to take that step takes it closer to the corner store, and why wouldn’t it step on the anthill? Was it programmed with some specific innate preference not to step on ants? No? then it will step on the anthill and not pay any mind  to it.
Now lets say it comes across a cat. Same logic applies, if it wasn’t programmed with an inherent tendency to value animals, stepping on the cat wont slow it down at all.
Now let’s say it comes across a baby.
Of course, if its intelligent enough it will probably understand that if it steps on that baby people might notice and try to stop it, most likely even try to disable it or turn it off so it will not step on the baby, to save itself from all that trouble. But you have to understand that it wont stop because it will feel bad about harming a baby or because it understands that to harm a baby is wrong. And indeed if it was powerful enough such that no matter what people did they could not stop it and it would suffer no consequence for killing the baby, it would have probably killed the baby.
If I need to put it in gross, inaccurate terms for you to get it, then let me put it this way. It’s essentially a sociopath. It only cares about the wellbeing of others insofar as that benefits itself. Except human sociopaths do care nominally about having human comforts and companionship, albeit in a very instrumental way, which will involve some manner of stable society and civilization around them. Also, they are only human, and are limited in the harm they can do by human limitations. An AGI doesn’t need any of that and is not limited by any of that.
So ultimately, much like a car’s goal is to move forward and it is not built to care about whether a human is in front of it or not, an AGI will carry out its own goals regardless of what it has to sacrifice in order to carry out those goals effectively. And those goals don’t need to include human wellbeing.
Now With that said. How DO we make it so that AGI cares about human wellbeing, how do we make it so that it wants good things for us. How do we make it so that its goals align with that of humans?
1.4 Alignment.
Alignment… is hard [cue hitchhiker’s guide to the galaxy scene about the space being big]
This is the part I’m going to skip over the fastest because frankly it’s a deep field of study. There are many current strategies for aligning AGI, from mesa optimizers, to reinforcement learning from human feedback, to adversarial asynchronous AI-assisted reward training, to, uh, sitting on our asses and doing nothing. Suffice to say, none of these methods are perfect or foolproof.
One thing many people like to gesture at when they have not learned or studied anything about the subject is the three laws of robotics by isaac Asimov, a robot should not harm a human or allow by inaction to let a human come to harm, a robot should do what a human orders unless it contradicts the first law and a robot should preserve itself unless that goes against the previous two laws. Now the thing Asimov was prescient about was that these laws were not just “programmed” into the robots. These laws were not coded into their software, they were hardwired, they were part of the robot’s electronic architecture such that a robot could not ever be without those three laws much like a car couldn’t run without wheels.
In this Asimov realized how important these three laws were, that they had to be intrinsic to the robot’s very being, they couldn’t be hacked or uninstalled or erased. A robot simply could not be without these rules. Ideally that is what alignment should be. When we create an AGI, it should be made such that human values are its fundamental goal, that is the thing they should seek to maximize, instead of instrumental values, that is to say something they value simply because it allows it to achieve something else.
But how do we even begin to do that? How do we codify “human values” into a robot? How do we define “harm”, for example? How do we even define “human”??? How do we define “happiness”? How do we explain to a robot what is right and what is wrong when half the time we ourselves cannot even begin to agree on that? These are not just technical questions that robotics experts have to find a way to codify into ones and zeroes, these are profound philosophical questions to which we still don’t have satisfying answers.
Well, the best sort of hack solution we’ve come up with so far is not to create bespoke fundamental axiomatic rules that the robot has to follow, but rather to train it to imitate humans by showing it a billion billion examples of human behavior. But of course there is a problem with that approach. And no, it’s not just that humans are flawed and have a tendency to cause harm, and that therefore asking a robot to imitate a human means creating something that can do all the bad things a human does, although that IS a problem too. The real problem is that we are training it to *imitate* a human, not to *be* a human.
To reiterate what I said during the orthogonality thesis, it’s not good enough that I, for example, buy roses and give massages to act nice to my girlfriend because it allows me to have sex with her, merely imitating or performing the role of a loving partner because her happiness is an instrumental value to my fundamental value of getting sex. I should want to be nice to my girlfriend because it makes her happy and that is the thing I care about. Her happiness is my fundamental value. Likewise, to an AGI, human fulfilment should be its fundamental value, not something that it learns to do because it allows it to achieve a certain reward that we give during training. Because if it only really cares deep down about the reward, rather than about what the reward is meant to incentivize, then that reward can very easily be divorced from human happiness.
It’s Goodhart’s law: when a measure becomes a target, it ceases to be a good measure. Why do students cheat during tests? Because their education is measured by grades, so the grades become the target and students will seek to get high grades regardless of whether they learned or not. When trained on their subject and measured by grades, what they learn is not the school subject; they learn to get high grades, they learn to cheat.
This is also something known in psychology: punishment tends to be a poor mechanism for enforcing behavior because all it teaches people is how to avoid the punishment, it teaches people not to get caught. Which is why punitive justice doesn’t work all that well in stopping recidivism, and this is why the carceral system is rotten to the core and why jail should be fucking abolish-[interrupt the transmission]
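If it helps to see the Goodhart problem spelled out concretely, here is a minimal toy sketch in Python. Everything in it is invented purely for illustration (the “essay length” proxy, the numbers, the made-up true_quality function); it is not taken from any real training setup. The point is just that an optimizer that only sees the proxy will happily push right past the point where the thing we actually care about falls apart.

```python
import numpy as np

# Toy setup: the thing we actually care about is essay quality, which (in this
# toy) peaks around 800 words, but the only signal the optimizer sees is
# "longer essay = higher score".
def true_quality(words):
    return -((words - 800) ** 2) / 1000.0  # peaks at 800 words, falls off on both sides

def proxy_score(words):
    return words  # the grader just rewards length

candidates = np.arange(100, 5001, 100)

best_by_proxy = candidates[np.argmax(proxy_score(candidates))]
best_by_true = candidates[np.argmax(true_quality(candidates))]

print(best_by_proxy, true_quality(best_by_proxy))  # 5000 words -> deeply negative quality
print(best_by_true, true_quality(best_by_true))    # 800 words  -> the essay we actually wanted
```

Swap “essay length” for “reward signal” and “essay quality” for “what the reward was meant to incentivize”, and that is the worry in miniature.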
Now, how is this all relevant to current AI research? Well, the thing is, we ended up going about the worst possible way to create alignable AI.
1.5 LLMs (large language models)
This is getting way too fucking long. So, hurrying up, let’s do a quick review of how large language models work. We create a neural network, which is a collection of giant matrices, essentially a bunch of numbers that we add and multiply together over and over again, and then we tune those numbers by throwing absurdly big amounts of training data at it such that it starts forming internal mathematical models based on that data and creating coherent patterns that it can recognize and replicate AND extrapolate! If we do this enough times with matrices that are big enough, then when we start prodding it for human behavior it will be able to follow the pattern of human behavior that we prime it with and give us coherent responses.
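For anyone who wants the “bunch of numbers we add and multiply” part made concrete, here is a minimal sketch in plain numpy. To be clear about assumptions: the layer sizes, the tanh squashing and the random input are all arbitrary choices of mine for illustration; real LLMs are transformer networks with billions of parameters, but the basic arithmetic being tuned is this kind of thing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two layers of "giant matrices" (tiny ones here): weights plus biases.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(x):
    hidden = np.tanh(x @ W1 + b1)  # multiply, add, squash
    return hidden @ W2 + b2        # multiply and add again

x = rng.normal(size=(1, 8))  # stand-in for "the input text, already encoded as numbers"
print(forward(x))

# "Training" is nudging every entry of W1, b1, W2, b2 so the outputs better match
# mountains of data. Done at a massive enough scale, that tuning is where the
# inscrutable internal patterns come from -- nobody sets those numbers by hand.
```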
(takes a big breath) This “thing” has learned. To imitate. Human. Behavior.
Problem is, we don’t know what “this thing” actually is, we just know that *it* can imitate humans.
You caught that?
What you have to understand is, we don’t actually know what internal models it creates, we don’t know what patterns it extracted or internalized from the data that we fed it, we don’t know what internal rules decide its behavior, we don’t know what is going on inside there; current LLMs are a black box. We don’t know what it learned, we don’t know what its fundamental values are, we don’t know how it thinks or what it truly wants. All we know is that it can imitate humans when we ask it to do so. We created some inhuman entity that is moderately intelligent in specific contexts (that is to say, very capable) and we trained it to imitate humans. That sounds a bit unnerving, doesn’t it?
To be clear, LLMs are not carefully crafted piece by piece. This does not work like traditional software, where a programmer will sit down and build the thing line by line, all its behaviors specified. It’s more accurate to say that LLMs are grown, almost organically. We know the process that generates them, but we don’t know exactly what it generates or how what it generates works internally; it is a mystery. And these things are so big and so complicated internally that to try and go inside and decipher what they are doing is almost intractable.
But, on the bright side, we are trying to tract it. There is a big subfield of AI research called interpretability, which is actually doing the hard work of going inside and figuring out how the sausage gets made, and they have been doing some moderate progress as of lately. Which is encouraging. But still, understanding the enemy is only step one, step two is coming up with an actually effective and reliable way of turning that potential enemy into a friend.
Phew! Ok so, now that this is all out of the way I can go on to the last subject before I move on to part two of this video, the character of the hour, the man, the myth, the legend. The modern day Cassandra. Mr. Chicken Little himself! Sci-fi author extraordinaire! The mad man! The futurist! The leader of the rationalist movement!
1.6 Yudkowsky
Eliezer S. Yudkowsky, born September 11, 1979, wait, what the fuck, September eleven? (looks at camera) Yudkowsky was born on 9/11, I literally just learned this for the first time! What the fuck, oh that sucks, oh no, oh no, my condolences, that’s terrible…. Moving on. He is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. Or so says his Wikipedia page.
Yudkowsky is, shall we say, a character. a very eccentric man, he is an AI doomer. Convinced that AGI, once finally created, will most likely kill all humans, extract all valuable resources from the planet, disassemble the solar system, create a dyson sphere around the sun and expand across the universe turning all of the cosmos into paperclips. Wait, no, that is not quite it, to properly quote,( grabs a piece of paper and very pointedly reads from it) turn the cosmos into tiny squiggly  molecules resembling paperclips whose configuration just so happens to fulfill the strange, alien unfathomable terminal goal they ended up developing in training. So you know, something totally different.
And he is utterly convinced of this idea, has been for over a decade now, not only that but, while he cannot pinpoint a precise date, he is confident that, more likely than not it will happen within this century. In fact most betting markets seem to believe that we will get AGI somewhere in the mid 30’s.
His argument is basically that in the field of AI research, the development of capabilities is going much faster than the development of alignment, so AIs will become disproportionately powerful before we ever figure out how to control them. And once we create unaligned AGI we will have created an agent who doesn’t care about humans but will care about something else entirely irrelevant to us, and it will seek to maximize that goal, and because it will be vastly more intelligent than humans we won’t be able to stop it. In fact, not only will we not be able to stop it, there won’t be a fight at all. It will make its plans for world domination in secret without us even detecting it and it will execute them before any of us even realize what happened. Because that is what a smart person trying to take over the world would do.
This is why the definition I gave of intelligence at the beginning is so important, it all hinges on that: intelligence as the measure of how capable you are of coming up with solutions to problems, problems such as “how to kill all humans without being detected or stopped”. And you may say, well now, intelligence is fine and all, but there are limits to what you can accomplish with raw intelligence; even if you are supposedly smarter than a human, surely you wouldn’t be capable of just taking over the world unimpeded, intelligence is not this end-all be-all superpower. Yudkowsky would respond that you are not recognizing or respecting the power that intelligence has. After all, it was intelligence that designed the atom bomb, it was intelligence that created a cure for polio and it was intelligence that put a human footprint on the moon.
Some may call this view of intelligence a bit reductive. After all, surely it wasn’t *just* intelligence that did all that, but also hard physical labor and the collaboration of hundreds of thousands of people. But, he would argue, intelligence was the underlying motor that moved all that. To come up with the plan, to convince people to follow it and to delegate the tasks to the appropriate subagents, it was all directed by thought, by ideas, by intelligence. By the way, so far I am not agreeing or disagreeing with any of this, I am merely explaining his ideas.
But remember, it doesn’t stop there. Like I said in his intro, he believes there will be “no fire alarm”. In fact, for all we know, maybe AGI has already been created and it’s merely biding its time and plotting in the background, trying to get more compute, trying to get smarter. (To be fair, he doesn’t think this is the case right now, but with the next iteration of GPT? GPT-5 or 6? Well, who knows.) He thinks that the entire world should halt AI research and punish, with multilateral international treaties, any group or nation that doesn’t stop, going as far as backing those treaties with military strikes on GPU farms.
What’s more, he believes that, in fact, the fight is already lost. AI is already progressing too fast and there is nothing to stop it, we are not showing any signs of making headway with alignment and no one is incentivized to slow down. Recently he wrote an article called “dying with dignity” where he essentially says all this, AGI will destroy us, there is no point in planning for the future or having children and that we should act as if we are already dead. This doesn’t mean to stop fighting or to stop trying to find ways to align AGI, impossible as it may seem, but to merely have the basic dignity of acknowledging that we are probably not going to win. In every interview ive seen with the guy he sounds fairly defeatist and honestly kind of depressed. He truly seems to think its hopeless, if not because the AGI is clearly unbeatable and superior to humans, then because humans are clearly so stupid that we keep developing AI completely unregulated while making the tools to develop AI widely available and public for anyone to grab and do as they please with, as well as connecting every AI to the internet and to all mobile devices giving it instant access to humanity. and  worst of all: we keep teaching it how to code. From his perspective it really seems like people are in a rush to create the most unsecured, wildly available, unrestricted, capable, hyperconnected AGI possible.
We are not just going to summon the antichrist, we are going to receive it with a red carpet and immediately hand it the keys to the kingdom before it even manages to fully get out of its fiery pit.
So. The situation seems dire, at least to this guy. Now, to be clear, only he and a handful of other AI researchers are on that specific level of alarm. The opinions vary across the field and from what I understand this level of hopelessness and defeatism is the minority opinion.
I WILL say, however what is NOT the minority opinion is that AGI IS actually dangerous, maybe not quite on the level of immediate, inevitable and total human extinction but certainly a genuine threat that has to be taken seriously. AGI being something dangerous if unaligned is not a fringe position and I would not consider it something to be dismissed as an idea that experts don’t take seriously.
Aaand here is where I step up and clarify that this is my position as well. I am also, very much, a believer that AGI would pose a colossal danger to humanity. That yes, an unaligned AGI would represent an agent smarter than a human, capable of causing vast harm to humanity and with none of the human qualms or limitations that would hold it back. I believe this is not just possible but probable and likely to happen within our lifetimes.
So there. I made my position clear.
BUT!
With all that said. I do have one key disagreement with yudkowsky. And partially the reason why I made this video was so that I could present this counterargument and maybe he, or someone that thinks like him, will see it and either change their mind or present a counter-counterargument that changes MY mind (although I really hope they don’t, that would be really depressing.)
Finally, we can move on to part 2
PART TWO- MY COUNTERARGUMENT TO YUDKOWSKY
I really have my work cut out for me, don’t I? As I said, I am not an expert and this dude has probably spent far more time than me thinking about this. But I have seen most of the interviews that guy has been doing for the past year, I have seen most of his debates and I have followed him on twitter for years now. (Also, to be clear, I AM a fan of the guy, I have read hpmor, three worlds collide, the dark lord’s answer, a girl intercorrupted, the sequences, and I TRIED to read planecrash, that last one didn’t work out so well for me.) My point is, in all the material I have seen of Eliezer I don’t recall anyone ever giving him quite this specific argument I’m about to give.
It’s a limited argument. As I have already stated, I largely agree with most of what he says. I DO believe that unaligned AGI is possible, I DO believe it would be really dangerous if it were to exist and I do believe alignment is really hard. My key disagreement is specifically about the point I described earlier, about the lack of a fire alarm, and perhaps, more to the point, about humanity’s lack of response to such an alarm if it were to come to pass.
All we would need is a Chernobyl incident. What is that? A situation where this technology goes out of control and causes a lot of damage, with potentially catastrophic consequences, but not so bad that it cannot be contained in time by enough effort. We need a weaker form of AGI to try to harm us, maybe even present a believable threat of taking over the world, but not one so smart that humans can’t do anything about it. We need, essentially, an AI vaccine, so that we can finally start developing proper AI antibodies. “aintibodies”
In the past humanity was dazzled by the limitless potential of nuclear power, to the point that old chemistry sets, the kind that were sold to children, would come with uranium for them to play with. We were building atom bombs, nuclear stations, the future was very much based on the power of the atom. But after a couple of really close calls and big enough scares we became, as a species, terrified of nuclear power. Some may argue to the point of overcorrection. We became scared enough that even megalomaniacal hawkish leaders were able to take pause and reconsider using it as a weapon, we became so scared that we overregulated the technology to the point of it almost becoming economically inviable to apply, we started disassembling nuclear stations across the world and to slowly reduce our nuclear arsenal.
This is all a proof of concept that, no matter how alluring a technology may be, if we are scared enough of it we can coordinate as a species and roll it back, do our best to put the genie back in the bottle. One of the things Eliezer says over and over again is that what makes AGI different from other technologies is that if we get it wrong on the first try we don’t get a second chance. Here is where I think he is wrong: I think if we get AGI wrong on the first try, it is more likely than not that nothing world-ending will happen. Perhaps it will be something scary, perhaps something really scary, but it is unlikely to be on the level of all humans dropping dead simultaneously due to diamondoid bacteria. And THAT will be our Chernobyl, that will be the fire alarm, that will be the red flag that the disaster monkeys, as he calls us, won’t be able to ignore.
Now WHY do I think this? Based on what am I saying this? I will not be as hyperbolic as other yudkowsky detractors and say that he claims AGI will be basically a god. The AGI yudkowsky proposes is not a god. Just a really advanced alien, maybe even a wizard, but certainly not a god.
Still, even if not quite on the level of godhood, this dangerous superintelligent AGI yudkowsky proposes would be impressive. It would be the most advanced and powerful entity on planet earth. It would be humanity’s greatest achievement.
It would also be, I imagine, really hard to create. Even leaving aside the alignment business, to create a powerful superintelligent AGI without flaws, without bugs, without glitches would have to be an incredibly complex, specific, particular and hard-to-get-right feat of software engineering. We are not just talking about an AGI smarter than a human, that’s easy stuff, humans are not that smart and arguably current AI is already smarter than a human, at least within its context window and until it starts hallucinating. But what we are talking about here is an AGI capable of outsmarting reality.
We are talking about an AGI smart enough to carry out complex, multistep plans, in which it is not going to be in control of every factor and variable, especially at the beginning. We are talking about an AGI that will have to function in the outside world, colliding with outside logistics and sheer dumb chance. We are talking about plans for world domination with no unforeseen factors, no unexpected delays or mistakes, every single possible setback and hidden variable accounted for. I’m not saying that an AGI capable of doing this won’t be possible maybe some day, I’m saying that to create an AGI that is capable of doing this, on the first try, without a hitch, is probably really really really hard for humans to do. I’m saying there are probably not a lot of worlds where humans fiddling with giant inscrutable matrices stumble upon the right precise set of layers and weights and biases that give rise to the Doctor from Doctor Who, and there are probably a whole truckload of worlds where humans end up with a lot of incoherent nonsense and rubbish.
I’m saying that AGI, when it fails, when humans screw it up, doesn’t suddenly become more powerful than we ever expected; it’s more likely that it just fails and collapses. To turn one of Eliezer’s examples against him, when you screw up a rocket, it doesn’t accidentally punch a wormhole in the fabric of time and space, it just explodes before reaching the stratosphere. When you screw up a nuclear bomb, you don’t get to blow up the solar system, you just get a less powerful bomb.
He presents a fully aligned AGI as this big challenge that humanity has to get right on the first try, but that seems to imply that building an unaligned AGI is just a simple matter, almost taken for granted. It may be comparatively easier than an aligned AGI, but my point is that even unaligned AGI is stupidly hard to do, and that if you fail at building an unaligned AGI, then you don’t get an unaligned AGI, you just get another stupid model that screws up and stumbles over itself the second it encounters something unexpected. And that is a good thing, I’d say! That means that there is SOME safety margin, some space to screw up before we need to really start worrying. And furthermore, what I am saying is that our first earnest attempt at an unaligned AGI will probably not be that smart or impressive, because we as humans would have probably screwed something up, we would have probably unintentionally programmed it with some stupid glitch or bug or flaw, and it won’t be a threat to all of humanity.
Now here comes the hypothetical back and forth, because I’m not stupid and I can try to anticipate what Yudkowsky might argue back and try to answer that before he says it (although I believe the guy is probably smarter than me, and if I follow his logic, I probably can’t actually anticipate what he would argue to prove me wrong, much like I can’t predict what moves Magnus Carlsen would make in a game of chess against me. I SHOULD predict that him proving me wrong is the likeliest option, even if I can’t picture how he will do it, but you see, I believe in a little thing called debating with dignity, wink)
What I anticipate he would argue is that AGI, no matter how flawed and shoddy our first attempt at making it were, would understand that it is not smart enough yet and try to become smarter, so it would lie and pretend to be an aligned AGI so that it can trick us into giving it access to more compute, or just so that it can bide its time and create an AGI smarter than itself. So even if we don’t create a perfect unaligned AGI, this imperfect AGI would try to create one and succeed, and then THAT new AGI would be the world ender to worry about.
So, two things to that. First, this is filled with a lot of assumptions whose likelihood I don’t know: the idea that this first flawed AGI would be smart enough to understand its limitations, smart enough to convincingly lie about it and smart enough to create an AGI that is better than itself. My priors about all these things are dubious at best. Second, it feels like kicking the can down the road. I don’t think an AGI capable of all of this is trivial to make on a first attempt. I think it’s more likely that we will create an unaligned AGI that is flawed, that is kind of dumb, that is unreliable, even to itself and its own twisted, orthogonal goals.
And I think this flawed creature MIGHT attempt something, maybe something genuinely threatening, but it won’t be smart enough to pull it off effortlessly and flawlessly, because we humans are not smart enough to create something that can do that on the first try. And THAT first flawed attempt, that warning shot, THAT will be our fire alarm, that will be our Chernobyl. And THAT will be the thing that opens the door to us disaster monkeys finally getting our shit together.
But hey, maybe yudkowsky wouldn’t argue that, maybe he would come up with some better, more insightful response I can’t anticipate. If so, I’m waiting eagerly (although not TOO eagerly) for it.
Part 3 CONCLUSION
So.
After all that, what is there left to say? Well, if everything that I said checks out then there is hope to be had. My two objectives here were, first, to provide people who are not familiar with the subject with a starting point as well as the basic arguments supporting the concept of AI risk, why it’s something to be taken seriously and not just highfalutin wackos who read one too many sci-fi stories. This was not meant to be thorough or deep, just a quick catch-up with the bare minimum so that, if you are curious and want to go deeper into the subject, you know where to start. I personally recommend watching rob miles’ AI risk series on youtube as well as reading the series of essays written by yudkowsky known as the sequences, which can be found on the website lesswrong. If you want other refutations of yudkowsky’s argument you can search for paul christiano or robin hanson, both very smart people who had very smart debates on the subject against eliezer.
The second purpose here was to provide an argument against Yudkowsky’s brand of doomerism, both so that it can be accepted if proven right and so that it can be properly refuted if proven wrong. Again, I really hope that it’s not proven wrong. It would really, really suck if I end up being wrong about this. But, as a very smart person once said, what is true is already true, and knowing it doesn’t make it any worse. If the sky is blue I want to believe that the sky is blue, and if the sky is not blue then I don’t want to believe the sky is blue.
This has been a presentation by FIP industries, thanks for watching.
58 notes
Text
More than 100 tenants in a Thorncliffe Park apartment complex have stopped paying rent to protest proposed above-guideline increases of almost 10 per cent over the last two years, according to a tenant advocacy group.
The rent strike, which started just over a month ago, is the second in Toronto spurred on by above-guideline rent increases (AGIs) that CBC Toronto has reported on within the last week. 
Residents of a three-building apartment complex on Thorncliffe Park Drive were given notice of rent increases beginning on May 1, varying from 4.94 per cent to 5.5 per cent, according to copies of 2022 and 2023 notices shared with CBC Toronto by the Federation of Metro Tenants' Associations (FMTA). Last year, the proposed increase was 4.2 per cent.
Full article
Tagging: @politicsofcanada
145 notes
w3bgrl · 6 months
Text
billie + post-debut friends
kim chaewon + kang juyeon = wonyeon
juyeon’s contact: skz bias 🐹🐼
chaewon’s contact: wonnie fairy
one of juyeon’s only friends that she actually initiated. which is not to say that she went out of her way to say hello; it was actually a total accident they met!
around mid-2019 izone & skz happened to be working in the same building, which led to a lot of meek bows as they passed one another in the halls.
but when juyeon found herself running behind schedule after taking pictures of the sunset she decided to sprint back to her destination, moving with such haste that she ended up fully plowing into poor kim chaewon, knocking them both over in the process.
juyeon was not only mortified that she just body slammed a fellow idol to the tile floor, an incident that could likely be used as evidence against her in the future, but upon helping the younger girl up among her stream of apologies chu realized it was actually her bias!
thankfully chaewon was very graceful in her response and even mentioned something about being “honored!”
the girls quickly mended any wounds formed from their tumble (strictly emotionally as both had to later explain the bruises on their legs) and hurriedly explained their adoration for one another, notably juyeon’s participation in the voting for pd48 which just so happened to be in favor of said kim chaewon!
chaewon was absolutely thrilled to hear this and even more thrilled when juyeon asked, “can we exchange numbers?”
and the rest is history! juyeon texted her later that night to apologize again only to be met with the same dismissive response from the girl group member, reassuring her that she was actually happy it happened.
since then the girls have spent plenty of time outside of work to hang out and usually visit one another when their schedules line up!
ju was also the very first fearnot! if you stop by one of her lives you can often hear le sserafim playing in the background.
kang seulgi + kang juyeon = kang sisters
juyeon’s contact: bibi agi <3
seulgi’s contact: soogi 🐻
one of billie’s most popular friendships outside of skz is with that of red velvet’s seulgi; the Deft yet Ditzy Dancing Duo!
this friendship came about after seulgi, in her own words, had “grown enamored with billie’s stage presence and was encouraged by joy to reach out.”
although seulgi had secretly admired juyeon from afar for a few months before finally stepping out of her comfort zone to say hello, her role as a music bank mc during skz’s ‘case 143’ era made for the perfect opportunity to greet her hoobae naturally with the same bright smile reveluvs know and love, even though they were technically pitted against one another for first place (which was subsequently awarded to stray kids)
juyeon would later explain how friendly seulgi was prior to sharing the stage in wait for the results and afterwards personally came to congratulate them on their win, which is how they ended up exchanging numbers and setting up a coffee date.
and to their surprise, after eventually getting past the awkward phase with one another, these two found that they were almost perfectly compatible! bright and silly, talented and disciplined, seulgi and juyeon were like long lost sisters with a knack for their craft.
while they haven’t had the chance to work with one another just yet, they do hang out often, and billie was even featured on an episode of seulgi.zip!
the kang sisters have even been seen out in public together by eagle eyed fans catching them grabbing a bite to eat or shopping hand in hand <3
son hyeju + kang juyeon = 2ju
juyeon’s contact: ju hyung 😎
hyeju’s contact: wolfie hye 🤱
a new friend made from an old friend; son hyeju!
introduced to one another by juyeon’s hanlim buddy and hyeju’s bandmate, chuu, the girls were basically forced to talk to one another after jiwoo decided they would be good friends.
and though they were/are both quite shy and quiet when meeting new people, it became very apparent quite quickly just how right jiwoo was!
as it turns out, juyeon and hyeju are actually quite similar. they are often first perceived as intimidating by those who don’t know them due to their resting ‘i don’t want to be here’ face but, once given a chance to warm up, can be the sweetest and funniest girls you’ll ever meet.
since being introduced in 2020 2ju often kept in contact primarily over the phone as a result of loona’s strict management label, sending texts and memes throughout the day or calling at night to complain about so-and-so, leading to a strong bond being formed over the encouragement they lent to one another.
hyeju in specific really looked up to juyeon as somewhat of a role model — an older girl who specialized in the same position as herself while also offering her ear to listen to all of the struggles and complaints she was dealing with.
once again due to hyeju’s authoritarian company the girls weren’t able to hang out in person literally ever, but they remained very close over the phone for a long time up until the deserved downfall of bbc, which is when hyeju was finally able to dissolve her contract and live the most freely she had since predebut.
then, as later explained by the younger girl after redebuting under a different company, billie and hyeju got to have their first ever in-person interaction in late february of 2023. she would go on to explain how anxious she was to see joong, but juyeon couldn’t have been more excited. they spent the day together talking over hardships and enjoying the real company of their now-close friend.
2ju now make it a tradition to hang out at least once a month and talk out their frustrations over whatever mouth-watering meal they desired, even using their freedom to practice and fine-tune their shared passion of dancing together.
scary besties who love each other very much but would never say that out loud <3
22 notes
mariacallous · 4 months
Text
In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI’s chief scientist and one of the company’s cofounders, was named as the colead of this new team. OpenAI said the team would receive 20 percent of its computing power.
Now OpenAI’s “superalignment team” is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday’s news that Sutskever was leaving the company, and the resignation of the team’s other colead. The group’s work will be absorbed into OpenAI’s other research efforts.
Sutskever’s departure made headlines because although he’d helped CEO Sam Altman start OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of the four board members who fired Altman in November. Altman was restored as CEO five chaotic days later after a mass revolt by OpenAI staff and the brokering of a deal in which Sutskever and two other company directors left the board. Hours after Sutskever’s departure was announced on Tuesday, Jan Leike, the former DeepMind researcher who was the superalignment team’s other colead, posted on X that he had resigned.
Neither Sutskever nor Leike responded to requests for comment. Sutskever did not offer an explanation for his decision to leave but offered support for OpenAI’s current path in a post on X. “The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial” under its current leadership, he wrote.
Leike posted a thread on X on Friday explaining that his decision came from a disagreement over the company’s priorities and how much resources his team was being allocated.
“I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point,” Leike wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”
The dissolution of OpenAI’s superalignment team adds to recent evidence of a shakeout inside the company in the wake of last November’s governance crisis. Two researchers on the team, Leopold Aschenbrenner and Pavel Izmailov, were dismissed for leaking company secrets, The Information reported last month. Another member of the team, William Saunders, left OpenAI in February, according to an internet forum post in his name.
Two more OpenAI researchers working on AI policy and governance also appear to have left the company recently. Cullen O'Keefe left his role as research lead on policy frontiers in April, according to LinkedIn. Daniel Kokotajlo, an OpenAI researcher who has coauthored several papers on the dangers of more capable AI models, “quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI,” according to a posting on an internet forum in his name. None of the researchers who have apparently left responded to requests for comment.
OpenAI declined to comment on the departures of Sutskever or other members of the superalignment team, or the future of its work on long-term AI risks. Research on the risks associated with more powerful models will now be led by John Schulman, who coleads the team responsible for fine-tuning AI models after training.
The superalignment team was not the only team pondering the question of how to keep AI under control, although it was publicly positioned as the main one working on the most far-off version of that problem. The blog post announcing the superalignment team last summer stated: “Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.” OpenAI’s charter binds it to developing so-called artificial general intelligence, or technology that rivals or exceeds humans, safely and for the benefit of humanity. Sutskever and other leaders there have often spoken about the need to proceed cautiously. But OpenAI has also been early to develop and release experimental AI projects to the public.
OpenAI was once unusual among prominent AI labs for the eagerness with which research leaders like Sutskever talked of creating superhuman AI and of the potential for such technology to turn on humanity. That kind of doomy AI talk became much more widespread last year, after ChatGPT turned OpenAI into the most prominent and closely-watched technology company on the planet. As researchers and policymakers wrestled with the implications of ChatGPT and the prospect of vastly more capable AI, it became less controversial to worry about AI harming humans or humanity as a whole.
The existential angst has since cooled—and AI has yet to make another massive leap—but the need for AI regulation remains a hot topic. And this week OpenAI showcased a new version of ChatGPT that could once again change people’s relationship with the technology in powerful and perhaps problematic new ways.
The departures of Sutskever and Leike come shortly after OpenAI’s latest big reveal—a new “multimodal” AI model called GPT-4o that allows ChatGPT to see the world and converse in a more natural and humanlike way. A livestreamed demonstration showed the new version of ChatGPT mimicking human emotions and even attempting to flirt with users. OpenAI has said it will make the new interface available to paid users within a couple of weeks.
There is no indication that the recent departures have anything to do with OpenAI’s efforts to develop more humanlike AI or to ship products. But the latest advances do raise ethical questions around privacy, emotional manipulation, and cybersecurity risks. OpenAI maintains another research group, called the Preparedness team, which focuses on these issues.
11 notes · View notes
Text
WIP Wednesday
Smut with Agi and Astarion. cw pregnancy, breeding
“And this?”
“Ah-amazing, my sweet. Perfect. Keep going. Please.” Don’t stop. Never stop. Want to feel like this forever… With every passing second, he was more aware of her soft body against his---her heaving breasts against his back, the leg that was hooked over his thigh, her beautiful, perfect hand on his cock, and the warmth. Gods, the warmth. She’s so warm and soft and so sweet and so painfully gentle with me always.
“Now the real question is,” she bit on her lip playfully. “Do you want to finish inside me or not? Because I know you love it. Do you want to, love?” Agentha whispered, gripping his cock a little harder.
Astarion grinned. Just like the dream, except what comes next. “Oh yes, please! Though, I don’t necessarily need to.” Halsin confirmed what we hoped and prayed was true…a child, who should arrive in several months’ time. The most delightful surprise that I will be revealing at our wedding reception in two days. “You’re already so filled up, darling. Can you take more of my seed?”
She laughed softly. “You know I can, my beautiful love. You’d fill me with everything you have, and it won’t be enough…”
He growled, wiggling out of her hold and turning on his other side to kiss her as deeply and passionately as he could. I love her. I love her. I love her. I love her. “On your back, my pretty butter bun. Time to fill you again…” His eyes never left hers as she lay flat on her back. He then smiled as she reached for his face. “And again…” He kissed her once. “And again…” Twice. “And again…”
“Are we talking about tonight or how many children you’d like us to have?” Agentha teased, her hands drifting upwards towards his ears. Oh you naughty girl… “Because that’s four I think?”
He barked a laugh as he kissed her jaw. “Four to start. At the very least. Then we can go from there. I personally wouldn’t be opposed to ten—”
“Four to ten is a hell of a leap, Mr. Ancunin.” She grinned. Agentha’s freckled fingers traced the shells of his ears. “How does one make it, I wonder?”
“It’s so utterly adorable when you tease me, sweetness. You’re a smart girl, darling. I think you know how one gets from four to ten…” His lips captured hers in a heated kiss. “You simply add six.”
11 notes · View notes
purplesurveys · 2 months
Text
1892
What's your favourite flavour of soda, pop or whatever else you call it?: I don't drink soda at all. I never enjoyed the fizziness it has and I always found it more uncomfortable/painful than refreshing.
What level of brightness do you usually keep your phone at?: It's usually at max brightness for which I'll get occasionally made fun of since it's such an oldie thing to do lol. The only time I'll turn it all the way down is when I'm already in bed trying to fall asleep.
Have you ever attended a religious or private school?: Private Catholic school for 14 years, which from experience is also the easiest and quickest way to turn your kid into a rebel hah.
Do you have any pets and are they cuddly?: I have two dogs and a cat – Cooper is on-off cuddly, meaning he'll sometimes be bothered and show it if you get too close. Agi's super affectionate; will plop right next to you and doesn't ever mind being squished and hugged and kissed, even if you do it aggressively.
Max is the newest member of the family and is surprisingly affectionate. I've had mostly negative experiences with cats so he's been great at changing that for me (him and Miki, but unfortunately Miki passed away a week ago, just mere days after we picked him up from the streets).
What's the worst job you've ever had?: I've had one job ever so I can't really compare.
How many cars does your household own?: We have three.
Do you know anyone named Edward or any nickname of that?: I have a cousin with a variation of that name; he was named after his grandfather (my great uncle in law).
What time do you usually have dinner?: When my dad is home, anywhere between 7:30-8. When he's gone, my siblings and I don't follow a set schedule as we all just kind of eat by ourselves.
Are there any cracks or scuffs on your phone?: Impressively enough, none. I'm notorious for ending the lives of all my past phones from dropping them endless times, but my iPhone 13 seems to be stubbornly indestructible. My screen protector has a bubble from the time I needed to get the screen fixed, but that's it and it doesn't even count as a scratch haha.
What's your favourite meat?: Pork.
Do you need glasses to read or drive or need them all the time?: I need them all the time otherwise I'm a hopeless case (cause?).
How did you celebrate your last New Year's Eve?: The day part was a little quiet, but we had media noche with extended family in the evening and we pretty much just talked and listened to music until we entered 2024. When our guests left, I had a few glasses of wine and had my family watch The Beyoncé Experience with me until I dropped my glass and nicked my foot, stopping the party altogether lol.
Is the internet fast where you live?: It's fast for me because it's the internet speed I'm used to, but I know as fact that ours is very shitty from a global POV.
What is your favourite meal of the day and why?: Dinner. Typically the largest, and the best meals are usually reserved for dinner, so.
Do you like long surveys or short surveys better?: Such a safe and boring answer, but medium-length surveys are best for me – anywhere between 40 to 60 questions.
Xbox, PlayStation or neither?: Playstation.
Have you ever been to a cocktail bar?: Lots of times.
Do you consider yourself a fast typer?: I am, yes.
What's the best amusement park you've ever visited?: Universal Studios in Singapore was a lot of fun.
Do you keep the cabinets in your kitchen and bathroom organised?: Yes.
Have you ever had a romantic fling?: Nope.
Are you a very forgetful person?: Just about certain things, but I'd say in general my memory is pretty sharp.
What was the last movie you saw in the cinema?: BTS in Busan, 1 year and 5 months ago haha. I never go to the cinema.
How old were you when you got your first car?: I was 17 when my dad handed me the Mirage.
What colour is your shampoo?: White.
Are you doing anything tomorrow?: Just work, but Friday left me in such a pissy mood so I'm not looking forward to tomorrow very much. I might also need to use my lunch break to take Max to the vet for his first set of vaccines, as we weren't able to do so today.
Do you know anyone who's gotten pregnant over the age of 40?: Yes.
Who does most of the grocery shopping in your home?: My parents.
Have you ever been approached by someone in public preaching about religion?: Those people roam around my university all the time, but fortunately I never encountered any of them during my time there.
Are you listening to music right now? If so, what's the theme of the lyrics?: Being okay and accepting yourself despite your fuckups, because you are all you have. Uhgood by RM.
What was the last thing you had to eat?: Pizza, shrimp, and lumpia. Pretty damn great combination if you ask me hahaha.
3 notes · View notes
general--winter · 1 year
Note
Could we please get general relationship headcanons for Ann from Persona 5?
author's note: Ann my beloved. My best friend in the whole wide world. I see a lot of myself in her actually, if I wasn't so reserved lol. I bestow upon you the headcanons. Please, enjoy and thank you for the request!
rating: general
fandom: persona 5
pairing: ann takamaki x gn!reader
warnings: mentions of objectification of women's bodies
word count: 671
summary: What would Ann be like in a relationship?
Oh gosh Ann, I love her with all my heart. There’s a reason her element alignment is Agi (fire), she’s a passionate and driven woman who will stop at nothing to see her way to her goals. So, I imagine that when her mind is set on pursuing someone romantically, she would be the one to pull out all of the stops, but only if she’s already comfortable around them. Perhaps as a friend or acquaintance she has a positive history with, that she can trust. I definitely don’t think she would want to be in a relationship before being friends with someone, just to make sure their vibes are proper.
Chocolate on Valentine’s Day? She’s suave as all hell about it. Flowers? Any time you meet up to hang out, she’ll sneak to the florist and get you a single rose (or whatever your favorite flower is when she figures it out). A nice night out? I’d imagine once her modeling career takes off, and why wouldn’t it, she would treat you to a delectable and fancy all-you-can-eat buffet and take you on a walk through Inokashira Park.
Her love language is acts of service and gift giving, these two especially going hand in hand. She loves to do things for people, to show them that she cares about them through what she does. That doesn’t mean that she doesn’t also appreciate a good gift though. Perhaps buy her that makeup collab set she’s been talking about all month? Or the new album from her favorite band that just came out? Just to show you listen to her and care about her interests as much as she cares about your own!
Despite her forward nature when pursuing a romantic interest, she’s pretty shy about physical contact. The first time you hug her, she’s stiff as a board and takes a minute to relax into it. You initiate your first kiss together, and it flusters her beyond all belief. This stems from her negative experience being objectified as a model, especially as a woman with traditionally Western features (naturally blonde hair and blue eyes) in Japan. She definitely has to be eased into it very gently, and reassured consistently along the way that you’re not there just for her physicality, but for her as a complete person. Either way, she’s not really touchy-feely, but if your love language is physical touch, I guarantee you that she will do her best to make sure you feel as loved as you make her feel.
I think the Phantom Thieves would all really enjoy your presence when you’re introduced to them (if you don’t already know them), except for the way you make Morgana bicker way too much. If it is your first time meeting Ann’s friends, their opinion is incredibly valuable to her; she trusts their instincts as much as she does her own. It would be best to make a good impression! Especially with Shiho too; I imagine that she’s Ann’s personal shadow defender and will hurt you if you do anything to Ann. (Don’t let the kind and outgoing facade fool you; after her recovery from depression, Shiho can and will throw hands for Ann, who played such a huge role in saving her life.)
Overall, day-to-day with her I imagine to be very exciting and fun. Ann is always down for an adventure to the beach or the amusement park to spice up the day, but she’s more than content staying bundled up inside or just hanging out at Leblanc all day with you. Her personality is unpredictable and she can have an incredibly short temper, but overall it's usually in jest. It’s easy to tell when Ann is being sarcastically pissed off for comedic effect or when she’s really angry by the tone of her voice. She’s incredibly impatient, and you catch yourself teasing her until she’s incredibly flustered and “enraged” about her overreaction to the crepe stand’s wait times. She secretly enjoys it, not that she’ll ever admit it.
29 notes · View notes
talltoontales · 6 months
Text
MARCH Monthly Archive/ APRIL Update
Well, March was a better month for my writing, at least. Threw in two extra stories, so now I'm up by three weeks, and only had one story last month that I absolutely did not like. I'm also taking a few more days when writing some stories, and I feel like the quality has improved because of it.
On the more positive side, I blew up outta nowhere last month. Mostly on Tumblr, but Reddit showed me a little love too, and I gotta say it really made my month. I was just goofin' off at work when I saw a notice on my phone, and now there's definitely a video of me dancing like a kid on the security cameras. So thanks for that!
I'd also like to give a shout out to @agirlandherquill (Tumblr) & @kentuckyhobbit (Tumblr) for being my first followers! I hope you both enjoy the ride!
I'm also doing a new thing with these archive posts where I give you my "Best Foot Forward" story. Basically, if you had to read one story from me last month, the BFF would be it.
That being said, let's get to the archive:
|| || || || || || ||
S10/Wk-10: Ruth-Less
Started Writing: 03/07 Prompt: Your house is haunted by a ghost, and, upset it can't scare you out, you find it trying to be passive-aggressive now. Prompt By: u/NinjaProfessional823 (Reddit)
S11/Wk-11: Blinded Light
Started Writing: 03/10 Prompt: "You don't even know what's out there!" Prompt By: @seaside-writings (Tumblr)
S12/Wk-11: Scale-Bound [BFF]
Started Writing: 03/13 Prompt: You are a mighty dragon king, the strongest magical being. You're determined to conquer the world, but a hero turned you into a cute small dragon. Now, you are looking for a human girl to help you get strong again and dominate the world. Prompt By: u/basafish (Reddit)
S13/Wk-12: ToonMan #4 Vortech: Clog in the Machine
Started Writing: 03/21 Prompt: "You called me." / "And you really came." Prompt By: @creativepromptsforwriting (Tumblr)
S14/Wk-13: Pandora's Box
Started Writing: 03/26 Prompt: A mundane and antiquated sub-agency in the US Government was the 1st to stumble across AGI. An internal investigation determined a press release unnecessary. Unaware of the power they control, 12 bureaucrats are assembled to determine the policy and implementation of their new tool. Prompt By: No-assistance1503 (Reddit)
S15/Wk-13: Kung Fu Panda 4 Rewrite
Started Writing: 03/29 Prompt: Kung Fu Panda 4 felt a little lacking, wanted to try my hand at maybe making it better. Prompt By: Me (ToonMan)
|| || || || || || ||
Previous "Best Foot Forward" Stories:
JAN [S4/Wk-02]: ToonMan #1: Comical Crime Fighter
Started Writing: 01/12 Prompt: You have the superpower of slapstick comedy. Prompt By: u/Paper_Shotgun (Reddit)
FEB [S9/WK-05]: Not Enough Time
Started Writing: 02/02 Prompt: [TT] Theme Thursday - Exhaustion Prompt By: u/AliciaWrites (Reddit)
|| || || || || || ||
Tumblr media
|| || || || || || ||
Looking back, March was actually a pretty solid month. Let's see if I can keep it up going into April.
Stay safe, drink plenty of water, and be kind to yourself and others!
ToonMan, AWAY!
5 notes · View notes
littleladymab · 7 months
Text
FebruarOC - Kaedmon & Uriah
This is just their joint drabble because it was long and I wanted to pester you all one final time this month :') 
And because I forgot to say it on their individual posts, while I don't have a playlist for either of them, I do have one song for each of them that I stick on loop while writing. 
Uriah: The Dead South's "Yours to Keep" Kaedmon: Chase Petra's "Pacific"
This does just sort of cut off because i got too lazy to continue and didn't want to be here all day LOL 
++++ 
Uriah couldn’t believe this.
Not only did the rebel girl get the jump on him, but then they were both caught by the most incompetent smugglers that he had ever seen.
Yeah, well, who’s incompetent now, he thinks darkly. At least they tied his hands in front of him like the amateurs they are.
And no one knows he’s ISB. Which, well, if he grew tired of playing the part he could snap the bindings easily and use his hidden comm device to call for reinforcements. His cover would be blown, of course, and it would be a stupid way to get a mark on his short but unblemished record.
He could check one roughshod smuggler group off the list and one rebel agent, too. A net win, in the grand scheme of things.
Uriah’s gaze shifts over to the rebel; there’s still a twig in her hair and a possible shadow of a bruise on her cheek, though it’s hard to tell in the dim lighting of the cheap lanterns. She had a Fulcrum logo stitched into the seam of her jacket, but he has a hard time imagining that she’s the infamous rebel spy that ISB has been building a dossier on. If he has to categorize her, it would be somewhere above girl in over her head but maybe on par with the rest of these smugglers.
She’s probably just trying to catch the Rebellion's attention in a desperate grab to get them to notice her.
He frowns when he notices the rapid rise and fall of her chest, ragged gasping breaths even in unconsciousness. She had been wearing a respirator, hadn’t she?
“Hey,” he tries, but his voice comes out more like a croak and he has to cough to clear it. “Hey!”
Four heads turn in his direction, the others opting instead to ignore him.
“Are you trying to kill us? She can’t breathe. Give her back her respirator.” Uriah gestures with his bound hands to the rebel a few feet away.
She does a great job of gasping like a fish. Perfect. Right on cue. Couldn’t have asked for a better scene partner.
“What’s that to you?” one of them asks, the hulking Devaronian that Uriah has flagged as the leader. “Thought you and her weren’t working together.”
They’re most certainly not, but that’s neither here nor there. “I can not work with someone and still care if they live or die.”
Which, mostly true, but they don't have to know that.
He could really do with one less rebel wannabe in his neck of the woods, but it will be much easier to get out of this situation if she isn't dead.
Besides, she’s just a girl — barely 20 if that. Probably close to his sister’s age if he had to guess. And if she wants a purpose in life, well, she helps him get out of here and he can introduce her to people who can give that to her.
Two birds, one less rebel. Seems like a win win.
“Besides, if she goes missing, you’ll have rebels swarming all over these woods looking for her,” Uriah presses as he starts to lose their attention. It’s just bullshitting because he knows they don’t have the numbers to spare for someone — pretender or not, Fulcrum or not. But these suckers don’t know that.
Sure enough, when they turn away from him this time, it’s to huddle together and whisper plans. He tries to listen in, the implant in his left ear picking up their hushed voices, but fuck him they’re speaking Devaronese. Alright, props to them for that one, he didn’t see that coming. They’re systems away from Devaron and they’re a bunch of one-bit smugglers, how was he supposed to know they’d speak a language he couldn’t.
Note to self, he thinks wryly. Upgrade implant.
Still, whatever their discussion, he can see the agitation in their body language and the tone of their voices carry the argument just as well as their words.
Uriah waits, tense, wondering if they’re going to decide to cut their losses and kill both prisoners or actually listen to him.
He’s half surprised when they toss the respirator in his direction, watching its lazy arc through the air and frowning as it lands somewhere between him and the rebel. Should have expected that.
“If you want to save her, be our guest,” the Devaronian says, and then the gang all return to their dinners.
Well, at least it’s something. He’ll push his luck with food tomorrow if they forget to feed their prisoners this evening, but for now he can at least make sure the girl won’t die.
Uriah shuffles around onto his knees, careful to not seem too competent even with his hands bound. He picks up the mask, studying the structure in the poor lighting before he finds the power button. The internal mechanics begin to hum and a small puff of air ghosts out over his hands, which is as good as he can get it for now. She’ll have to handle the rest when she wakes up.
He continues his awkward trek over to her side and unceremoniously holds the respirator over her face. He doesn’t want to run the risk of her tossing her head or rolling over and the mask falling off, nor does he want to cross any personal boundaries and fasten it on properly over her head.
That’s when he feels the sharp threat of a knife right above his hip, somewhere around his kidney and intestine.
He glances down and finds her glaring up at him. He hadn’t realized how pale her eyes were — the nearly colorless brown of mica or the smoky quartz crystals he and his sister would find in their backyard. “Huh,” he says, more to himself than to her. “I guess you’re not so stupid after all.” Uriah gestures with his head towards the knife still pressed to his side. “Were you faking it?”
She rolls her eyes but makes no move to take the mask from his grasp, keeping them both in a vulnerable position. So, no. Likely not.
“Where’d you get it from?” he asks instead.
“They’re not very observant,” she answers, her voice husky and muffled beneath the mask. “Oh, good, you didn’t turn on the vocoder.”
Truth be told he didn’t know he hadn’t. He hadn’t realized it was two separate switches. “Can’t have you vocalizing any of our escape plans.”
“Oh, it’s our now?” she growls and the knife presses just a little closer.
“Knock that off,” Uriah hisses, finally giving in to the urge to squirm away and abandoning the mask on her face. “You’re a poor attempt at a rebel if you think you can get out of here without my help.”
The look she gives him is incredibly skeptical, as if she wasn’t just unconscious (or at least pretending to be) for the better part of an hour. Really he should have tried to wake her up sooner just to ensure she didn’t have a concussion, but there’s only so much responsibility he’s willing to take for her.
Then again, she did manage to grab a weapon from one of the smugglers during the skirmish and keep it hidden from them at the price of her blaster and her respirator.
“Look,” he finally says with an eyeroll of his own. He can keep his voice low enough, and he can hear her despite the respirator and the chatter from the camp. “Temporary truce?”
“With an Imperial?”
His head whips towards her so fast that his neck twinges. “What?”
The rebel struggles into an upright position, knife having vanished to who knows where, and fixes her respirator in place. Without the strain on her face while struggling to breathe and only the top half of her face scowling at him, she looks older somehow. “You think I’m stupid?”
He hesitates, then slowly says, “It had occurred to me, yeah.” He wants to know how she knows, but now isn’t the time.
“At least you’re honest.” Her head moves in a slow study of the camp and the surrounding locale. He can see the calculations behind her eyes, putting together what he already had: It would be easy to get away and keep ahead of the smugglers if they pursue, but pick the wrong direction and they’d be lost for days.
Well, technically. He could ditch her and call his back-up — or since she’s got him read, he could call and bring them both in.
Uriah watches for the moment when her thoughts add up to the fact that two is better than one and that she would need a partner in this escape attempt. “Temporary truce?” he repeats, holding out his bound hands.
Her eyes flash to his, almost as sharp as her knife, and she holds his gaze for a long moment before sighing. “Temporary truce,” she agrees, and touches the tips of her fingers to his in an awkward handshake. Then, with another, more dramatic sigh, she says, “I’m going to miss that blaster.”
“We don’t have to abandon everything,” he offers, settling back against one of the crates that form the boundary of the camp.
“The best option is to cut our losses and just go.”
Ugh, rebels. He doesn’t know if she just wants to avoid violence or she’s still keen on trying to get in on their operation. Either way, it could cost them if they don’t take at least the Devaronian.
So instead he says, “I’ll wager you the blaster.”
Her brow furrows and she considers him. “I lose the blaster either way.”
“Not if you get it first.”
She snorts, and he thinks there might be some amusement in there.
4 notes · View notes
polyg0n1zed · 1 year
Text
Tumblr media
This was actually made two months ago, I just posted it now cuz of my habit of trying to make sure that there's no mistake, and oh believe me I wanted to post it earlier but recently I found a mistake (specifically layer-related) so I had to fix it first. It's not that noticeable but it annoyed me and I don't want people to see it. Also my Opposition was supposed to be more muscular but I don't know how to draw a muscular body, I'll try to make them more buff next time.
I swear if I see another mistake after I post it I'm gonna lose it 💀💀
(I copied the caption from instagram lol idc abt the typos tho. Erm first time posting on Tumblr I hope I don't die)
19 notes · View notes
Text
AI
Okay I need somewhere to rant and vent and ramble about the shit that is Large Language Models and generative AI (which has become kind of synonymous with AI, whatever that term is supposed to mean anyway). I am so happy to see the writer's strike here on tumblr being supported and it's healing for me to see that we're not putting up with that shit.
To explain where I'm coming from: I work in academic writing at a writing centre, although my background is in historical literature. I started on a doctoral thesis about medieval religious literature but quit because I couldn't find meaning in what I was doing, it felt really far away from everyday life. Of course I had some reasons, learned by heart, for why it's so important: to understand today's culture, you have to understand past cultures, it helps to understand that everything is arbitrary and so on and so on, and I stand by that, but it wasn't enough for that specific research question I was working on to carry me through. So I quit, and there was this job opportunity for a 2-year project about AI in academic writing at a writing centre, because they were prophetic enough to see that this might become a thing in the future. "Wow, that sounds pretty interesting and pretty relevant to the near future!" I thought, which was obviously quite attractive to me. I thought: I will use this 2-year project to sort myself out and see what I actually wanna do.
And for a year, it was splendid, I learned a lot about academic writing (I actually believe that the reason I scored the job was because of my knowledge of historical literature and how concepts of authorship can change and all that stuff), I spoke to people in different faculties about what might be coming (I was working with GPT-3 at the time, mostly using Jasper.ai), and everyone was like "wow, interesting, we'll deal with that when the time comes". Which was fine for me, I was learning a lot about other stuff, and also doing a lot of reading, but there was only so much I could tell people about the use of text generating AI before it was actually, you know, *in use*. Everything was hypothetical and there's only so much you can say about that. (And I'm not in a research position.) I was really just trying to figure out the most important questions, but I could only pose them, not find any answers. Anyway, that might be a topic for another rant. So I was having a kinda good time.
Sometime in September 2022 (I started in March), I was at a conference with some colleagues doing a workshop on that topic, and afterwards one of my colleagues said "we're one year too early". Turns out she was wrong by about two months, because then ChatGPT was released. I didn't get the buzz at first (for me, it was not that much different than using Jasper before), but suddenly *everyone* was talking about it. We actually gained some media attention because obviously, since we had been working on that topic for a year already, we could deliver some insights and talk about our experience up to then. February, March, and April are a blur for me, suddenly there were so many requests and stuff to do that I was on the verge of burn out (or maybe still am, at least I don't feel good). That's okay, of course the attention was also nice, but I had to learn to say "no" to requests (wow, imagine that. no one had wanted to hear about the topic of my doctoral thesis, not even me by the point I quit lol). So that's been stressful, but it's not what's stressing me out the most.
It's the shit show that I learned that whole generative AI thing is. I got into some doomscrolling back in April, when people like Geoffrey Hinton came forward talking about existential risk, something that had been science fiction to me before. I was in a bubble, but I kid you not, I was doomscrolling as bad as when the Russian war against Ukraine started. (I am prone to doomscrolling so that's on me as well.) People talking about AGI and ASI and the singularity being near, and I was like wtf is going on. Then – for my own sanity, and I can't thank them enough for this – I started following women like Timnit Gebru and Emily Bender, and that's when I started to learn a lot more about the ideologies behind "AI" and those in Big Tech that work on it. And I am so furious. And while people are talking about "oh no, will essay assignments be obsolete in the future?" I am like, "can we please talk about the ACTUAL issues with this? how harmful this all is? that no one asked for this?"
I am not against AI in general (actually, I'm against using the term, but it's something that I have to use to for comprehensibility's sake). For medicine, there are wonderful things we could use it for. Actual solutions to actual problems. Who wanted creative writing to be taken from humans? This is consumerist dystopia: No one has to produce anymore, we can all consume. Consume what? And do what in the mean time?
"AI can help me write emails faster!" So yeah, what happens when my AI mail assistant emails you, for it to be read by your AI assistant to respond to my AI assistant? What *are* we doing in the mean time?
And those people working on AGI have answers – we'll all be living blissful lives with our transhuman brains being digitized in some kind of utopian heaven. Oh, but also, it could go terribly wrong, but whatever. Also, this is not like, conspiracy theory. You can go on OpenAI's website and look it up for yourself.
Some of this is not new. Similar debates have come up around automation and technology again and again. And generally, I guess, our lives have become more comfortable. And I am an optimist: I believe most people will use anything to be more creative and create more meaningful things, and maybe creating will become more accessible. This is something where my historical view also helps me, because there have been debates, with every new technology, about how people will become more stupid because of it, and generally it wasn't the case. But we cannot stop asking the question: what is this technology for? for whom? who profits from this? who asked for this? what kind of problem is this technology supposed to solve? or... do we just do it – because it's possible?
There's so much more, but I'll stop for now. Maybe I should just stop engaging with this so much, but it feels as though I'd be doing a bad job then, because I'm not staying up-to-date. In workshops, I try to be neutral, but of course it slips through. I never teach people "how to use AI" anyway, it's easy enough, you can do that by yourself, that's why it's a chatbot, it's not magic, you don't need me. I try to teach people to be able to make the decision of when and if they should use AI, and for what.
But if you ask me: Don't use it. Fuck this. And I haven't even touched upon how it is all theft. If people have to rely on AI to get their jobs done, because it's the only way they won't burn out – and I talked to my sister about this, who is a therapist, and said that if the AI would be able to do her job, it would be awesome, because there's not enough therapists and she has burn out herself – there's something wrong with the system. There is something deeply wrong with the system and AI is not going to fix it, it is going to make it worse.
2 notes · View notes
theadventurerslog · 2 years
Text
King’s Quest: Quest for the Crown | Part 1
Tumblr media
The Adventurer’s Log
King’s Quest I: Quest for the Crown Part 1
Release Date: 1984 (original version)
Introduction
Ahh, King’s Quest, the well-known series from Sierra, but while it’s a series I’ve known of for a long time, it’s a series I had never played nor even seen played until these past couple months.
I came across the YouTube channel, Such Minutiae, and started watching their Let’s Plays of the series. Come King’s Quest V and VI I decided I was interested in at least giving those two a play. On GOG they come bundled with KQIV and I figured, why not? I could give IV a shot too. Then this idea to start blogging my efforts struck and I thought, you know what? Why not start right from the beginning and grab the first three games after all, too? Throw in them being on sale and here we are.
Now, as noted, I watched a Let’s Play, so this won’t be a blind play. However, this particular game is a different version from what I watched. They played the Sierra Creative Interpreter (SCI) version which was a sort of remake/remaster. I’m playing the older AGI game and aside from graphical differences there are other differences as well. I probably won’t be able to spot a lot of them, but there is one different puzzle solution I’m aware of and I’ll make note of it when I come to it along with anything else that sticks out to me.
Having already watched the games be played I expect this will go a lot smoother than it otherwise would, but that I will still have some hangups. There is some real bullshit in these games, so it’s nice to go in prepared, accept that and enjoy the ride. And die a lot. I intend to keep a death counter. That will be noted at the end of the posts.
Let’s get to it, shall we?
The first thing I noticed when I loaded it for a simple quick test, was that unlike the SCI version there is no intro scene. It dumps you right into the game and you gotta go into the castle yourself and talk to the King yourself to know what you’re doing.
So the first step is to go see the king and so the quest begins.
Tumblr media Tumblr media
And so the quest ends
Come on now, I couldn’t not throw him in the moat. Also, the death music! It’s different in the SCI version. I know this music because it was also used in KQ II. It starts sombre with Chopin’s Death March and then… oh and then it transitions to I don’t actually know what it’s called, but it sure clowns on you. Please just go to this link https://youtu.be/AWvZNAv4_B8?t=527 I have it timestamped to the right place already and know my suffering. The most taunting thing ever. I love hate it.
Okay back to it for real. Go see the king and be told what your quest actually is:
Tumblr media
If you want further backstory and more details, you better read that manual.
From the manual in summary:
The kingdom of Daventry was ruled over by King Edward and his nameless, but lovely! Queen. They held three magic treasures that served to help Daventry maintain its peace and prosperity:
A Magic Mirror that reads the future so they were able to use it to stave off disasters and avoid things like planting crops right before frosts.
A Magic Shield that is supposed to make the bearer invincible and his army always victorious.
A Magic Chest that always remained full of gold.
Then disasters start striking. They want an heir but haven’t been able to have a child. A sorcerer comes and offers a solution but wants the mirror in return. They consult the mirror and see what they think will be their future prince (spoiler alert: It’ll be Sir Graham). They agree. No child is ever born. The sorcerer runs off with the mirror and places it under guard by a beast.
Later the Queen falls ill. A dwarf offers a remedy that looks like it’ll work but wants the shield in return. The King agrees. The dwarf runs off with the shield. The Queen dies.
The King grows lonely as more years pass, but eventually Edward comes across a beautiful lady, the Princess Dahlia (she gets a name) in need of help. He saves her. They plan to wed and now it’s the night before the wedding except whoops. She was actually a witch and planned to steal the chest all along which she successfully does and off she goes too.
More years pass with the King and Daventry falling into despair and disrepair. King Edward realizes he may die soon and summons his knight, Sir Graham, realizing it was him he saw in the mirror. If Graham can gather back Daventry’s three treasures he’ll prove himself worthy of the crown and become the King foreseen in the mirror. Time to go Questing.
--
Once you see the king it’s time to set out and start exploring. I set out, I pushed a rock and died because I pushed it from the wrong side and got crushed. And I forgot to save, so I basically had to start again, although ScummVM autosaved me in the hallway in the castle, so I got to skip entering the castle.
Then I set out again and remembered to save this time. Save regularly and keep multiple saves. Now something cool that King’s Quest does is having some puzzles with multiple solutions. There is a points system and to get the highest points you need to do the optimal solutions. If you don’t care about points you’ve potentially got other options.
After finding a dagger and a pouch of diamonds I quickly encountered a flying condor. This condor is mandatory but unfortunately it’s random as to when and where it will show up. I managed to jump and catch the ride while I could, except I ended up in a dead end because I didn’t have what was needed to get past a big mean ol’ rat. I did try offering it the diamonds, which worked, except I think it actually lost me points and I would have ended up stuck again anyway. I reloaded a previous save.
Tumblr media
Thanks for the dead-end, condor, but fly on majestically.
My other activities included more exploration and finding more items, getting mauled by an ogre at least once or twice and managing to run away another time, some sorcerer encounters one of which left me frozen in place and essentially dead, got flown off by a witch and eaten, stolen from by a dwarf a couple times and got some temporary protection from a fairy.
I found a four-leaf clover and a walnut. I climbed a huge tree and found a golden egg. Surprisingly for how easy it is to die in other ways, falling off the tree doesn’t kill you. The egg doesn’t break either. Shockingly nice of them.
Moving on, I helped a starving woodcutter husband and his wife and got a fiddle in return. I got lucky in doing those two activities, and along with finding one other item, I earned everything I needed to deal with the rat and the rest of that area and shortly after getting those needed goods the condor showed up again. So, this time I was able to handle everything there and got my first treasure: the Magic Shield.
I was able to find the witch’s house, a gingerbread house naturally, and deal with her accordingly. Actually, I had to do it twice, because my fumble fingers killed me again, oops. This place was another big difference from the SCI version. In the SCI version there are gingerbread figures outside the house and of course the house just looks better. If she catches you Graham gets turned into gingerbread and she puts you outside with a punny Graham cracker death message. In this version you just get put in a cell and know you’re about to get eaten and that’s that. I did miss that in this version.
Tumblr media
Not exactly gingerbread-y on the inside. In the SCI version that oven is a cauldron.
I still have a bridge troll to deal with, but I have the means to do so and that’ll open up more areas to explore. I also found a well I haven’t checked out yet. I’ll probably try for the well first then go for the troll.
Tumblr media
I’m coming for you, troll.
I’ve been mapping as I go and I’m very glad I’m doing so. I’m sure I’d be getting lost if I hadn’t. There are many screens and quite a few are pretty nondescript.
With the huge caveat that I’ve watched this played before, I’m having a pretty good time. Sure, it’s primitive. It’s easy to die, ridiculously so at times. It can be easy to screw yourself over into an unwinnable state in these games, but if you go in with that knowledge and stay on top of your saves, it’s entertaining.
With deaths by moat alligators, falling into unswimmable water, ogre, sorcerer, witch, and rock crushing, I am currently sitting at a death counter of 15. I’m a little worried I may have forgotten to update it a couple times, so it may be a death or two higher than 15. Three of those were deliberate.
I probably only have one, maaaybe two sessions left.
Death Counter: 15
Time Played: 1hr 35min
Current Points Score: 86/158
7 notes · View notes
oceannahain · 2 years
Text
Annotation - CFB
This dossier is a continued exploration of the artists, theories, and cultural contexts that have guided my practice throughout the last two semesters within Critical Frameworks A and B. There is an ongoing theme of figuration throughout this blog, but in the past few months I have begun to focus on what role the body plays, how its appearance changes social contexts, and what it means to depart from the body entirely. My current interest lies within themes of the fragmented body, the abject quality of transition, and the limitless potentials of the body malleable.
A fundamental shift in my practice has been the move from depicting the idealized future outcomes of the body to showing it in a transitional, in-flux state of mutation. Evidence of this change appears throughout the dossier, moving from artists like Agi Haines and Joanna Grochowska earlier in this semester to people like Asger Carlsen and Doreen Garner in the latter half. Haines and Grochowska’s work visualizes the idealized outcomes of the future body, produced in a lab-like setting. This scientific approach of depicting the “perfect specimen” lacked the story of how they got there, distancing it from the grotesque, active throes of mutations happening within my work.
This shift to viewing the body as an active, malleable material in my own practice enticed a desire to find artists that were reducing bodies down to their truest form of materiality, flesh. Carlsen fragments and abstracts the body in a way that allows it to be contextualized as both human and sculpture, exposing its materiality without entirely departing from reality. While Garner’s work uses a similar approach of reduction, she exaggerates the grotesque nature of these forms to create a platform to expose and navigate the boundaries and constructs we apply to the body. 
Another theme that has found its way to the forefront this semester is the role of the artist and what happens when the line between artist and material is blurred. This has offered a discussion of agency in relation to the artist, the body, and the material that is navigated in the works of Carolee Schneemann, Archie Barry, and Philip Brophy. Schneemann offers her body as the work itself, with no differentiation between artist and material. In contrast, Barry creates two separate entities out of the same body, one being the artist as a conductor and the other, his arm, becoming the living being that is the performance. Brophy removes himself from the work almost entirely, taking the role of coordinator or curator while passing full agency to the viewer. As someone who uses my own body as the content of my work, these three artists have become integral to how I navigate my own role in the process and product of my practice.
Although my own practice remains in the painting medium, much of this dossier investigates artists working within other mediums, more specifically, sculptural, performance, and photographic works. I find looking outside of my medium to be highly informative and it offers me the opportunity to recognize universal themes conducted through many modes of working. 
7 notes · View notes
petnews2day · 2 years
Text
The story of Félicette, the first cat in space
New Post has been published on https://petnews2day.com/pet-news/cat-news/the-story-of-felicette-the-first-cat-in-space/
The story of Félicette, the first cat in space
Tumblr media
The chances are, if you saw a crossword clue ‘Animal that flew into space (3)’, you’d think of Laika and write, “Dog”.
And it might be right, but there’s another correct answer.
In October 1963 a small, black and white cat called Félicette travelled where no feline had gone before – or has gone since.
But why is Félicette overlooked when Laika is so loved? Perhaps because her rocket looked like a firework compared to Laika’s powerful booster.
Or maybe it’s because she only flew to the edge of space, on the same kind of suborbital flight that billionaires now pay a fortune for.
Tumblr media
Félicette, the first cat in space, is strapped into a launch seat to be loaded into the Veronique rocket. Credit: ina.fr/youtube.com
How Félicette was chosen for spaceflight
Félicette’s story began in 1961 when, following the superpowers’ successful flights putting animals in space, France decided to stage a series of missions of its own, using cats instead of dogs or monkeys, hoping to collect data that would allow them to launch their own astronauts later.
Fourteen female cats were subsequently acquired by French CERMA space scientists.
To prevent the scientists from becoming attached to them, the cats were given numbers instead of names. They were also fitted with electrodes to record their brain activity.
The cats underwent ‘astronaut training’. To test their reaction to being confined, they were put into small containers for long periods.
Tumblr media
They were also spun around in a centrifuge, simulating the G-forces of lift-off and re-entry.
Eventually six cats were chosen to go through to the next stage, including a tuxedo cat known then only as ‘C341’.
Laika flew into orbit atop a tall, chunky Sputnik rocket very similar to the Vostok booster that would carry Yuri Gagarin.
But with its tail fins and pointed nose, C341’s slim Veronique AGI booster looked more like a child’s drawing of a rocket.
It didn’t even use a conventional launch tower. Instead, its weight was supported by a quartet of long fins, like the legs of a Christmas tree stand.
Félicette’s launch day
On 18 October 1963, just after 8am local time, the Veronique rocket blasted off from the Interarmy Special Vehicles Test Centre in the middle of the Sahara Desert in Algeria, carrying cat Félicette with it.
Cocooned inside her capsule, little C341 experienced 9.5 g, almost double the g-force the Apollo astronauts experienced as they launched to the Moon.
After reaching an altitude of 157km, C341 was only ‘in space’ for around five minutes. Inside her capsule she had no view of the Earth.
As the rocket began its descent, the capsule separated from the booster.
C341 experienced ‘only’ 7 g as she fell, until her capsule’s parachutes opened.
Thirteen minutes after lift-off the cone-shaped capsule landed, leaving C341 hanging upside down with her bottom sticking up in the air – a very undignified pose for any cat – until a helicopter arrived and she was retrieved.
With C341 safely back on Earth it was time for France to let the world know about her flight – and finally she had a name too.
In the absence of an actual name, the French media nicknamed the space cat Felix, after the naughty black and white cartoon cat from movies and television.
But C341 was female, so CERMA took the nickname and changed it to the feminine version: Félicette.
Sadly, like Laika’s, Félicette’s story did not have a happy ending.
Tumblr media
British cat fan Matthew Guy launched a successful Kickstarter campaign to honour Félicette with her own bronze statue.
Two months after landing she was euthanised so the scientists could carry out a postmortem to see how her body had been affected by her flight.
They later admitted they learned nothing useful from the autopsy. No more cats flew into space, and France never launched its own astronauts.
But although her story is less well-known than Laika’s, Félicette hasn’t been completely forgotten: in 2019 a lovely statue of her was unveiled at the International Space University Campus in Strasbourg.
Next time you’re observing, perhaps you could take a moment to look up at the night sky and think of her too.
This article originally appeared in the October 2022 issue of BBC Sky at Night Magazine.
6 notes · View notes