#AI Self-Learning
Text
#AI Autonomy#AI Evolution#AI Governance#AI in Defense#AI in Intelligence#AI in National Security#AI Oversight#AI Self-Learning#AI Surveillance#AI vs. Human Control#Artificial Intelligence#Cyber Warfare#ECHELON#Five Eyes Alliance#Government Secrecy
0 notes
Text
i hate to say it because i'm neurodivergent and a chronic-pain-haver but like... sometimes stuff is going to be hard and that's okay.
it's okay if you don't understand something the first few times it's explained to you. it's okay if you have to google every word in a sentence. it's okay if you need to spend a few hours learning the context behind a complicated situation. it's okay if you need to read something, think about it, and then come back to re-read it.
i get it. giving up is easier, and we are all broken down and also broke as hell. nobody has the time, nobody has the fucking energy. that is how they win, though. that is why you feel this way. it is so much easier, and that is why you must resist the impulse to shut down. fight the urge you've been taught to "tl;dr" everything.
embrace when a book is confusing for you. accept not all media will be transparent and glittery and in the genre you love. question why you need everything to be lily-white and soft. i get it. i also sometimes choose the escapism, the fantasy-romance. there's no shame in that. but every day i still try to make myself think about something, to actually process and challenge myself. it is hard, often, because of my neurodivergence. but i fight that urge, because i think it's fucking important.
especially right now. the more they convince you not to think, the easier it will be to feed you misinformation. the more we accept a message without criticism, the more power they will have over that message. the more you choose convenience, the more they will make propaganda convenient to you.
#personal#this also applies to ai art and stuff. like#artists and crafters and non-ai users took the time space and energy to learn things#bc we are actually LEARNING them. and it takes actual SKILL.#i know the skill takes a long time to learn and is often annoying. i still get frustrated about my art bc it's not good#but i do it myself. bc i respect that it IS a skill.#ai writing a book for you is not YOU learning how to write a book. and it took me a lifetime to write a book. i get it.#ai drones running a marathon don't run the marathon for u#there are things i cannot do due to my disability. lol marathons being 1. there are things u can't do either#this is about stretching yourself in the ways that are healthy and good for you.#ai learning for u in ur classes is NOT healthy. u are not learning.#''but otherwise i won't pass''#first of all that's a self-defeating prophecy. and many of us who thought we wouldn't pass DID pass#and secondly. CHALLENGE urself. ur paying for college anyway. don't pay just to let AI learn for u.
3K notes
·
View notes
Note
Erina and Sophie....
erina and sophie..... perhaps even sophie and erina....
#persona 5#p5#asks & requests#p5t#p5t erina#p5s#p5s sophia#sophia persona 5#persona 5 sophia#persona 5 strikers#persona 5 scramble#persona 5 tactica#i will fill the p5s tag my fucking self if i have to#comics#chef recommended#good lord is that all the tags i need. am i done am i free. YEESH okay#anyways i think theyd be so silly together#no one utilises the comedic potential of erina living in the fantasy french revolution but i think deep learning ai sophia is a good match#either way theyre so good as parallels like hold on SPOILERS FOR BOTH GAMES PAST THIS POINT#both of them were created as last-ditch efforts. erina is the manifestation of toshiros suppressed hope and rebellion and sophia is ichinose's#attempt at understanding the human heart and her own repressed emotions#theyre both constructs of the heart in one form or another even if they were created in v different ways
673 notes
·
View notes
Text
the "meanest" thing i'll say today is to please google that question before you send it to me. particularly if it is a general question. like "how do i install tray files?" you can google that. there are youtube videos that show you how to do things like that. even with making cc. there are so many tutorials out there. and it's not to be mean or to just deny you my labor, but encourage you to take advantage of the resources we have at our disposal. there are so many great creators who already have extensive tutorials out there and they're easy to find. please support them. do not take the luxury of being able to do independent research for granted. you need that skill.
#text post#PLEASE LEARN HOW TO RESEARCH THINGS ON YOUR OWN#Like I mean this earnestly! it's a skill you need esp NOW#find reputable sources#don't rely on ai generated answers#please PLEASE#i don't mind helping but i want ppl to really learn this skill#self sufficiency is so important
66 notes
·
View notes
Note
Do you use AI generators to compile or scrape this information?
no
#anonymous#ai has nothing on an over-caffeinated human being copy pasting & taking screenshots past 3 in the morning & queuing everything#no but on a serious note most of these are the product of years of compilations stuck in my drafts & old files as a student#been going through my old bookmarks as well (bc need more space) so there may be random study notes or tips sometimes#thats also why i have a lot of grammar related stuff that i used at school --- still handy notes though#as for the requests i usually do them in one sitting & queue them -- not claiming to be an expert on those topics#i just try to look for the best sources i can -- which is fun bc i learn a lot as well &#i always appreciate when people send me more info or corrections#this genuinely made me a bit self conscious of my posts tho like do they look AI generated#just shoved a lot of queued posts back to my drafts lol will try to edit them better soon i know its a mess here !#also accidentally clicking the 'shuffle' queue messed up the chronology at one point -- so been trying to schedule posts#instead of adding to queue ---- but will reorganise when i find more time#but yeah most of these are my literal notes -- excerpts / literally copy pasted from my references that may be quite outdated#that i need to delete but still wanted to save elsewhere
64 notes
·
View notes
Text
The surprising truth about data-driven dictatorships

Here’s the “dictator’s dilemma”: they want to block their country’s frustrated elites from mobilizing against them, so they censor public communications; but they also want to know what their people truly believe, so they can head off simmering resentments before they boil over into regime-toppling revolutions.
These two strategies are in tension: the more you censor, the less you know about the true feelings of your citizens and the easier it will be to miss serious problems until they spill over into the streets (think: the fall of the Berlin Wall or Tunisia before the Arab Spring). Dictators try to square this circle with things like private opinion polling or petition systems, but these capture only a small slice of the potentially destabilizing moods circulating in the body politic.
Enter AI: back in 2018, Yuval Harari proposed that AI would supercharge dictatorships by mining and summarizing the public mood — as captured on social media — allowing dictators to tack into serious discontent and defuse it before it erupted into unquenchable wildfire:
https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/
Harari wrote that “the desire to concentrate all information and power in one place may become [dictators’] decisive advantage in the 21st century.” But other political scientists sharply disagreed. Last year, Henry Farrell, Jeremy Wallace and Abraham Newman published a thoroughgoing rebuttal to Harari in Foreign Affairs:
https://www.foreignaffairs.com/world/spirals-delusion-artificial-intelligence-decision-making
They argued that — like everyone who gets excited about AI, only to have their hopes dashed — dictators seeking to use AI to understand the public mood would run into serious training data bias problems. After all, people living under dictatorships know that spouting off about their discontent and desire for change is a risky business, so they will self-censor on social media. That’s true even if a person isn’t afraid of retaliation: if you know that using certain words or phrases in a post will get it autoblocked by a censorbot, what’s the point of trying to use those words?
The phrase “Garbage In, Garbage Out” dates back to 1957. That’s how long we’ve known that a computer that operates on bad data will barf up bad conclusions. But this is a very inconvenient truth for AI weirdos: having given up on manually assembling training data based on careful human judgment with multiple review steps, the AI industry “pivoted” to mass ingestion of scraped data from the whole internet.
But adding more unreliable data to an unreliable dataset doesn’t improve its reliability. GIGO is the iron law of computing, and you can’t repeal it by shoveling more garbage into the top of the training funnel:
https://memex.craphound.com/2018/05/29/garbage-in-garbage-out-machine-learning-has-not-repealed-the-iron-law-of-computer-science/
When it comes to “AI” that’s used for decision support — that is, when an algorithm tells humans what to do and they do it — then you get something worse than Garbage In, Garbage Out — you get Garbage In, Garbage Out, Garbage Back In Again. That’s when the AI spits out something wrong, and then another AI sucks up that wrong conclusion and uses it to generate more conclusions.
To see this in action, consider the deeply flawed predictive policing systems that cities around the world rely on. These systems suck up crime data from the cops, then predict where crime is going to be, and send cops to those “hotspots” to do things like throw Black kids up against a wall and make them turn out their pockets, or pull over drivers and search their cars after pretending to have smelled cannabis.
The problem here is that “crime the police detected” isn’t the same as “crime.” You only find crime where you look for it. For example, there are far more incidents of domestic abuse reported in apartment buildings than in fully detached homes. That’s not because apartment dwellers are more likely to be wife-beaters: it’s because domestic abuse is most often reported by a neighbor who hears it through the walls.
So if your cops practice racially biased policing (I know, this is hard to imagine, but stay with me /s), then the crime they detect will already be a function of bias. If you only ever throw Black kids up against a wall and turn out their pockets, then every knife and dime-bag you find in someone’s pockets will come from some Black kid the cops decided to harass.
That’s life without AI. But now let’s throw in predictive policing: feed your “knives found in pockets” data to an algorithm and ask it to predict where there are more knives in pockets, and it will send you back to that Black neighborhood and tell you to throw even more Black kids up against a wall and search their pockets. The more you do this, the more knives you’ll find, and the more you’ll go back and do it again.
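To see how little it takes for this loop to lock in, here’s a toy sketch in Python. To be clear, this is not Predpol’s actual algorithm or anyone’s real data — the two neighborhoods, the contraband rate, and the crude “send most patrols to last year’s hotspot” rule are all invented for illustration. The only thing that matters is that the true rate is identical everywhere and the only asymmetry is where the patrols start out:

```python
# Toy model of the feedback loop: detections follow patrols, patrols follow
# detections. All numbers and the allocation rule are made up for illustration.
import random

random.seed(1)

TRUE_RATE = 0.05                 # identical underlying rate in both neighborhoods
TOTAL_PATROLS = 200

patrols = {"A": 95, "B": 105}    # a small initial skew in where officers are sent

for year in range(8):
    # Recorded incidents depend on where you looked, not on the true rate.
    found = {
        hood: sum(random.random() < TRUE_RATE for _ in range(n))
        for hood, n in patrols.items()
    }

    # Naive "hotspot" rule: most of next year's patrols go wherever
    # the most incidents were recorded this year.
    hotspot = max(found, key=found.get)
    patrols = {
        hood: round(TOTAL_PATROLS * (0.8 if hood == hotspot else 0.2))
        for hood in patrols
    }
    print(f"year {year}: found={found} -> next year's patrols={patrols}")
```

Run it and whichever neighborhood the data happens to flag first stays flagged: it gets the patrols, so it generates the arrests, so it keeps getting the patrols, even though nothing about the underlying “crime” ever differed.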
This is what Patrick Ball from the Human Rights Data Analysis Group calls “empiricism washing”: take a biased procedure and feed it to an algorithm, and then you get to go and do more biased procedures, and whenever anyone accuses you of bias, you can insist that you’re just following an empirical conclusion of a neutral algorithm, because “math can’t be racist.”
HRDAG has done excellent work on this, finding a natural experiment that makes the problem of GIGOGBI crystal clear. The National Survey On Drug Use and Health produces the gold standard snapshot of drug use in America. Kristian Lum and William Isaac took Oakland’s drug arrest data from 2010 and asked Predpol, a leading predictive policing product, to predict where Oakland’s 2011 drug use would take place.

[Image ID: (a) Number of drug arrests made by Oakland police department, 2010. (1) West Oakland, (2) International Boulevard. (b) Estimated number of drug users, based on 2011 National Survey on Drug Use and Health]
Then, they compared those predictions to the outcomes of the 2011 survey, which shows where actual drug use took place. The two maps couldn’t be more different:
https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2016.00960.x
Predpol told cops to go and look for drug use in a predominantly Black, working class neighborhood. Meanwhile the NSDUH survey showed the actual drug use took place all over Oakland, with a higher concentration in the Berkeley-neighboring student neighborhood.
What’s even more vivid is what happens when you simulate running Predpol on the new arrest data that would be generated by cops following its recommendations. If the cops went to that Black neighborhood and found more drugs there and told Predpol about it, the recommendation would get stronger and more confident.
In other words, GIGOGBI is a system for concentrating bias. Even trace amounts of bias in the original training data get refined and magnified when they are output through a decision support system that directs humans to go and act on that output. Algorithms are to bias what centrifuges are to radioactive ore: a way to turn minute amounts of bias into pluripotent, indestructible toxic waste.
There’s a great name for an AI that’s trained on an AI’s output, courtesy of Jathan Sadowski: “Habsburg AI.”
And that brings me back to the Dictator’s Dilemma. If your citizens are self-censoring in order to avoid retaliation or algorithmic shadowbanning, then the AI you train on their posts in order to find out what they’re really thinking will steer you in the opposite direction, so you make bad policies that make people angrier and destabilize things more.
Or at least, that was Farrell (et al)’s theory. And for many years, that’s where the debate over AI and dictatorship has stalled: theory vs theory. But now, there’s some empirical data on this, thanks to “The Digital Dictator’s Dilemma,” a new paper from UCSD PhD candidate Eddie Yang:
https://www.eddieyang.net/research/DDD.pdf
Yang figured out a way to test these dueling hypotheses. He got 10 million Chinese social media posts from the start of the pandemic, before companies like Weibo were required to censor certain pandemic-related posts as politically sensitive. Yang treats these posts as a robust snapshot of public opinion: because there was no censorship of pandemic-related chatter, Chinese users were free to post anything they wanted without having to self-censor for fear of retaliation or deletion.
Next, Yang acquired the censorship model used by a real Chinese social media company to decide which posts should be blocked. Using this, he was able to determine which of the posts in the original set would be censored today in China.
That means that Yang knows what the “real” sentiment in the Chinese social media snapshot is, and what Chinese authorities would believe it to be if Chinese users were self-censoring all the posts that would be flagged by censorware today.
From here, Yang was able to play with the knobs, and determine how “preference-falsification” (when users lie about their feelings) and self-censorship would give a dictatorship a misleading view of public sentiment. What he finds is that the more repressive a regime is — the more people are incentivized to falsify or censor their views — the worse the system gets at uncovering the true public mood.
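The shape of that finding is easy to reproduce with a back-of-the-envelope simulation — nothing like Yang’s ten million posts and production censorship model, just invented numbers. The sentiment scale, the 50/50 split between withholding and flipping a critical post, and the repression levels below are all assumptions made up for illustration; the point is only to show why the regime’s measurement drifts further from reality as repression rises:

```python
# Toy sketch: how self-censorship and preference falsification skew what a
# regime can measure. All parameters are invented for illustration.
import random

random.seed(42)

def simulate(repression, n_posts=100_000):
    """Return (true mean sentiment, mean sentiment the regime would observe).

    Sentiment runs from -1 (discontent) to +1 (support). With probability
    `repression`, a critical post is either never written (self-censorship)
    or flipped to look supportive (preference falsification).
    """
    true_sentiments, observed = [], []
    for _ in range(n_posts):
        s = random.uniform(-1, 1)
        true_sentiments.append(s)
        if s < 0 and random.random() < repression:
            if random.random() < 0.5:
                continue          # post never written: missing data
            s = abs(s)            # post written, but falsified
        observed.append(s)
    return (sum(true_sentiments) / len(true_sentiments),
            sum(observed) / len(observed))

for repression in (0.0, 0.3, 0.6, 0.9):
    true_mean, seen_mean = simulate(repression)
    print(f"repression={repression:.1f}  true={true_mean:+.2f}  regime sees={seen_mean:+.2f}")
```

The gap between “true” and “regime sees” widens with every notch of repression — the dictator’s dilemma restated as a measurement problem.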
What’s more, adding additional (bad) data to the system doesn’t fix this “missing data” problem. GIGO remains an iron law of computing in this context, too.
But it gets better (or worse, I guess): Yang models a “crisis” scenario in which users stop self-censoring and start articulating their true views (because they’ve run out of fucks to give). This is the most dangerous moment for a dictator, and depending on how the dictatorship handles it, they either get another decade of rule, or they wake up with guillotines on their lawns.
But “crisis” is where AI performs the worst. Trained on the “status quo” data where users are continuously self-censoring and preference-falsifying, AI has no clue how to handle the unvarnished truth. Both its recommendations about what to censor and its summaries of public sentiment are the least accurate when crisis erupts.
But here’s an interesting wrinkle: Yang scraped a bunch of Chinese users’ posts from Twitter — which the Chinese government doesn’t get to censor (yet) or spy on (yet) — and fed them to the model. He hypothesized that when Chinese users post to American social media, they don’t self-censor or preference-falsify, so this data should help the model improve its accuracy.
He was right — the model got significantly better once it ingested data from Twitter than when it was working solely from Weibo posts. And Yang notes that dictatorships all over the world are widely understood to be scraping western/northern social media.
But even though Twitter data improved the model’s accuracy, it was still wildly inaccurate, compared to the same model trained on a full set of un-self-censored, un-falsified data. GIGO is not an option, it’s the law (of computing).
Writing about the study on Crooked Timber, Farrell notes that as the world fills up with “garbage and noise” (he invokes Philip K Dick’s delightful coinage “gubbish”), “approximately correct knowledge becomes the scarce and valuable resource.”
https://crookedtimber.org/2023/07/25/51610/
This “probably approximately correct knowledge” comes from humans, not LLMs or AI, and so “the social applications of machine learning in non-authoritarian societies are just as parasitic on these forms of human knowledge production as authoritarian governments.”
The Clarion Science Fiction and Fantasy Writers’ Workshop summer fundraiser is almost over! I am an alum, instructor and volunteer board member for this nonprofit workshop whose alums include Octavia Butler, Kim Stanley Robinson, Bruce Sterling, Nalo Hopkinson, Kameron Hurley, Nnedi Okorafor, Lucius Shepard, and Ted Chiang! Your donations will help us subsidize tuition for students, making Clarion — and sf/f — more accessible for all kinds of writers.
Libro.fm is the indie-bookstore-friendly, DRM-free audiobook alternative to Audible, the Amazon-owned monopolist that locks every book you buy to Amazon forever. When you buy a book on Libro, they share some of the purchase price with a local indie bookstore of your choosing (Libro is the best partner I have in selling my own DRM-free audiobooks!). As of today, Libro is even better, because it’s available in five new territories and currencies: Canada, the UK, the EU, Australia and New Zealand!
[Image ID: An altered image of the Nuremberg rally, with ranked lines of soldiers facing a towering figure in a many-ribboned soldier's coat. He wears a high-peaked cap with a microchip in place of insignia. His head has been replaced with the menacing red eye of HAL9000 from Stanley Kubrick's '2001: A Space Odyssey.' The sky behind him is filled with a 'code waterfall' from 'The Matrix.']
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
—
Raimond Spekking (modified) https://commons.wikimedia.org/wiki/File:Acer_Extensa_5220_-_Columbia_MB_06236-1N_-_Intel_Celeron_M_530_-_SLA2G_-_in_Socket_479-5029.jpg
CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0/deed.en
—
Russian Airborne Troops (modified) https://commons.wikimedia.org/wiki/File:Vladislav_Achalov_at_the_Airborne_Troops_Day_in_Moscow_%E2%80%93_August_2,_2008.jpg
“Soldiers of Russia” Cultural Center (modified) https://commons.wikimedia.org/wiki/File:Col._Leonid_Khabarov_in_an_everyday_service_uniform.JPG
CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0/deed.en
#pluralistic#habsburg ai#self censorship#henry farrell#digital dictatorships#machine learning#dictator's dilemma#eddie yang#preference falsification#political science#training bias#scholarship#spirals of delusion#algorithmic bias#ml#Fully automated data driven authoritarianism#authoritarianism#gigo#garbage in garbage out garbage back in#gigogbi#yuval noah harari#gubbish#pkd#philip k dick#phildickian
831 notes
·
View notes
Text
i have no excuse i just wanted to draw glitchrey lol glitchrey by @ginger-rat
#hlvrai#hlvrai fanart#hlvrai benry#hlvrai benrey#half life but the ai is self aware#half life vr but the ai is self aware#i spent too long on this wtf#this was meant to be a lagtrain reference but i couldnt get the arm right so i just scrapped it#THIS IS WHAT HAPPENS WHEN U DONT LEARN ANATOMY KIDS !
55 notes
·
View notes
Text
Third time's the charm on posting this, Tumblr is fighting me so hard rn

Transcript:
Coomer: Gordon, I think you’re wonderful.✨
Gordon: Dr. Coomer, I’ve watched you steal three femurs from living men, I dont think I can ethically accept that compliment
#fanart#traditional art#corvidae art#hlvrai#hlvrai fanart#half life but the ai is self aware#hlvrai au#hlvrai tma au#hlvrai gordon#hlvrai coomer#freak❤️#hand writing so bad I can barely read it#this is what happens when you learn the normal alphabet at the same time as cursive
46 notes
·
View notes
Text

Self portrait for 2024! I was in a VERY bad mood that day 😅
#self portrait#cuterefaction#I was PISSED that day#found out a family member was selling AI generated colouring books#tried to have a polite conversation about the ethical concerns involved#he asked in response whether my spending 30 years learning to draw wasn't just as much 'stealing' as genAI#yes that was the word used#hoo boy am I proud of myself for NOT starting an apocalyptic fight at the christmas get-together
11 notes
·
View notes
Text
"Man, I want to learn to draw so I could draw all these obviously very funny ideas I have," she says, draws eyes for five minutes and then does nothing at all
#i have no concept of self-discipline and motivation#do you KNOW how many ideas i have? can you even imagine?#not that many actually. and you probably can#could i commission anyone to draw them for me? theoretically yes#but i don't have 1) money to spare; 2) artist friends who are into the same fandoms; 3) social skills to make such friends online or offlin#(also it's basically redrawing some meme-y pics. it's _embarrassing_ to ask)#actually it would probably take me less time to learn to draw than it would take me to gather courage to ask someone else to draw#no ai generated pics of course. obviously#my problem is that some things i envision are probably not on the beginner level#another one of my problems is that i have too much cringe in my blood to draw characters or write about them. even talk about them sometime#god did not let me be socialized properly because i would be too funny and powerful#schmartz.txt
15 notes
·
View notes
Text
correct me if i’m wrong: just learnt that there’s people putting AI generated fics on ao3 IT'S FANFICTION?????????? YOU'RE NOT CHEATING FOR A GRADE?????????????????????????? AT LEAST HAVE THE DECENCY TO GENERATE AN ORIGINAL STORY AND TRY TO SELL IT FOR MONEY BECAUSE THAT WOULD ACTUALLY MAKE SENSE BECAUSE
YOU'RE USING AI. TO GENERATE. A FANFIC. YOU SAD LITTLE MUPPET HACK???????????????????????????????????????????????
go and gain some self-respect because you clearly do not have any if you’re using AI to generate a fic because where do you think the AI learnt how to write stories brother?????? BY STEALING ACTUAL AUTHORS' WORK WITHOUT THEIR PERMISSION AND THE SAME GOES FOR AI GENERATED ART TOO
#dog i’m sorry i got#big feelings#about this shit#this is dumb#i mean it’s not even#bottom of the barrel#type shit#you’re scraping away at the wood of the barrel#made a hole through it#and have begun digging into the fucking earth#you troglodyte#ao3#does not need this#stop using ai#for exploitative purposes#ai is theft#ai is not art#ai is plagiarism#where do you think ai learned how to write stories?#BY STEALING FROM ACTUAL AUTHORS WITHOUT THEIR PERMISSION#you are#plagiarizing#it is#plagiarism#YOU ARE NOT A WRITER IF YOU SIMPLY ASK AI TO GENERATE A STORY FOR YOU#GO AND GAIN SOME SELF RESPECT BECAUSE YOU CLEARLY HAVE NONE#so mad rn#whooooo#ai art isn't real art#lord have mercy
3 notes
·
View notes
Text
Here is my contribution to the HLVRAI fandom, Benrey doodles (Ft myself and Gordon) because I can't srsly draw HLVRAI
#tuna art#hlvrai#half life but the ai is self aware#benrey#benrey hlvrai#gordon freeman#i physically can not draw these guys in a srs manner#i can't think of a srs drawing idea w/them#all i think is doodle doodle doodle#that's all i think man#i also need to learn how i wanna draw gordos#bc i dont like how i drew him#he looks funky
31 notes
·
View notes
Text
When I saw Angry Video Game Nerd in the thumbnail of hbomberguy's 4 hour long plagiarism video I fully believed he was going to complete Forzen's objective of dispelling the rumor that Irate Gamer Chris Boris ripped off Angry Video Game Nerd James Rolfe
#and that i was going to finally learn what the hell that meant#hlvrai#hlvrai forzen#half life vr but the ai is self aware
28 notes
·
View notes
Note
If you like ai chat (and happen to have an android phone or a laptop/desktop), check out SillyTavern. It’s the most advanced ai chat out there for roleplayers. It’s entirely self hosted too.
Awww, thanks! I've heard about SillyTavern, and I've heard that it's hella good indeed (though quite a pain in the arse to set up XD)
Rn I'd prefer to stay where I am. I don't chat with bots often, I just love writing them! ¦¬> Outside of cai/xoul, I RP and play ttrpgs with friends
So, look, guys! You can try this thingie yourself and share your experience!
—My lil fellas love a sprinkle of programming/IT in their language soup and I'm still thinking about how to incorporate it into the educational process as a side-quest/game of sorts
#ask#ai chatbot#language learning#discussion#character ai#cai ask#c.ai#chatbots#english learning#english#education#self education
3 notes
·
View notes
Text
HLVRAI fandom try not to kill an already small/dying fanbase challenge failed.
I've never written a fanfic in my life and here I am working on two (2)!!!!! frenrey ones. I'm trying to do my absolute best with writing all the characters involved, but I keep seeing posts just absolutely shitting on people who "mischaracterize" the characters in fanfics/fanart by doing "x, y, z, etc.", with a lot of people agreeing and complaining about it, and like...I don't even want to write anymore bc y'all just suck the fun and joy out of it.
Not to mention the amount of toxic posts about how if you "do/don't draw a HLVRAI character x, y, z way then it's wrong and blah blah blah". It's got me fucked up and has me confirming that the decision to never post any of my art (bc I know it's not great but I'm trying my best just like everyone else here is!!!!) is the right way to go.
This fandom is barely alive as it is and y'all have the gall to go and shit on content creators for not doing it "correctly" or 100% accurate to the source videos??? Take a minute to think about how maybe the creator is new to writing/drawing for the fandom or that the way they do something is to make THEMSELVES happy and they want to share it with the rest of the fandom in hopes that someone else will like it too?
The bad lot of you never fucking learned the basic rule of "if you can't say something nice, don't say anything at all" or to idk...have some common decency?
Maybe just ignore or block something you don't like and go about your day like a normal adult?? Crazy concept I know but it works and you guys should try it sometime instead of hating on people who are literally providing free content and taking time out of their life to produce it bc its fun and makes them happy.
I'd say I'm sorry for the rant but I'm not. Some of you need a reality check about learning how to treat others even if you don't agree with the way they do something, or just need to learn to grow up and ignore shit you don't like. It's not hard at all and it'll make your life so much better I promise.
//
As for the good part of the HLVRAI fandom, I give you all a little smooch on ur forehead and tell u I'm proud of you all for doing what u guys do 💙
#hlvrai#half life vr but the ai is self aware#frenrey#idk what else to tag this as#just learn human decency ig idk#tired of seeing small fandoms get destroyed bc rude asses dont know how to act#we're literally a fandom about a tiny subsection of an old game like...chill tf out#go outside and eat a banana it'll make you feel better#rant
64 notes
·
View notes
Text
Uggggh I'm trying to draw but struggling (the strugglerrrr), so I'm gonna play video games instead. But I was also talking to my partner the other day about how my OVW s/i would probably be really good friends with Genji!
I mean they both have omnic parts and neither really...got them by choice 😂😂 I think my s/i has been keeping hers a secret for a while because of ppl who are omnic racist and I feel like she'd be able to be a little more accepting of herself because of bonding with Genji!
#jane journals#self insert talk#🪨 i wanna rock with you 🪨#UGGHH I HATE WHEN I CAN'T DRAW#but ive learned that its better not to force it 😮💨😮💨😮💨#honestly my s/i would get along pretty well with almost all of them!#ESPECIALLY JUNKRAT MY FRIEND JUNKRAT#oh and if she was a playable character which i AM thinking about 👀👀#like potential abilities and stuff#her hero name would be Tandem!#cause of the...ai in her brain
10 notes
·
View notes