#generative ai topic talking
Text
Research on the topic "Does AI really harm the environment?"
especially for @cherrifire and those who think like them
(AND NOW STATISTICS ^-^)
On the topic of AI, sorting out which parts of it harm and which help, and weighing every pro and con, would take an eternity of discussion, so I'm only going to respond to the environmental harm mentioned in Cherrifire's post: the environmental damage and the electricity used to run generative AI. I don't deny that this is real, but it isn't as simple as it looks. I went digging through the Internet specifically to compare the data, and here's what I found (links to the articles follow each piece of information):

"In 2022, AI contributed to 2% of global energy usage." https://mspoweruser.com/ai-electricity-usage/

Since we're not talking here about the AI systems that obviously assist people, in medicine, for example, or in early detection of and response to emergencies, we can leave that share of the electricity out of the count. Roughly 1 percent of all electricity used remains.

Now let's look at some other statistics:

"Youtube electricity consumption vs. household electricity consumption. Total global electricity consumption is 21,372 TWh. Therefore Youtube uses about 243.6 TWh (over 1% of global electricity). How does that compare to typical household electricity consumption? In the United States the average annual household electricity consumption is 10,766 kWh. So, let's do some quick maths… It's not easy because of the unit orders of magnitude, but the result is: annual global usage of Youtube could power an American household for about… 2 billion years. Or all 127 million U.S. households for about 8 years." https://thefactsource.com/how-much-electricity-does-youtube-use/

"Facebook's electricity use has increased in recent years as newer data centers have come online. In 2019, the company's electricity usage reached 5.1 terawatt hours, a significant increase from the previous year." And from the same article: "How much power does Instagram use? Every time Cristiano Ronaldo posts an image, let's say an average one, for that image to travel to his 240 million followers, it consumes roughly around 36 megawatt hours. That's the equivalent of adding 10 UK households to the grid for one year." The article contains quite a lot more; if you're interested, read it for yourself. https://michiganstopsmartmeters.com/how-much-power-does-facebook-use/

"Meta's electricity use has increased in recent years, as newer data centers have come online. In 2022, the company's electricity usage surpassed 11.5 terawatt-hours, a 22-percent year-over-year increase. Before 2021 the company was known as Facebook." It also says that "Meta has set goals to reduce its carbon footprint by 50 percent in 2030. In recent years, the company was able to separate growth in the business from increased emissions, annually reducing their operational greenhouse gas emissions. The company is also aware of its water consumption and has committed to a circular system that allows for the reuse of water consumed." https://www.statista.com/statistics/580087/energy-use-of-facebook-meta/
Now for the water used to cool these systems. I don't deny that this happens, or that it has bad consequences for the environment. Let's look first at one of the most popular generative AI systems, ChatGPT.

"Shaolei Ren, a researcher at the University of California, Riverside, has been studying the environmental consequences of generative AI products like ChatGPT. His research estimates that ChatGPT consumes approximately 500 milliliters of water every time a user interacts with it through a series of 5 to 50 prompts or questions. This estimate takes into account indirect water usage, such as cooling power plants that supply electricity to data centers." https://medium.com/@pankajvermacr7/ais-hidden-thirst-microsoft-s-34-water-surge-fuels-tech-enthusiasm-fa37f8b4e467

So that's roughly half a litre (0.5 L) of water per session of prompts, alright. ChatGPT now has about 200 million weekly users (https://www.demandsage.com/chatgpt-statistics/). Of course not every user is active at the same time and interactions don't take long, but the number of daily interactions is reportedly around 1.5 million, so let's count with that: 1,500,000 * 7 = 10,500,000 sessions per week, and 10,500,000 * 0.5 = 5,250,000 litres of water per week. With 52 weeks in a year, that's 5,250,000 * 52 = 273,000,000 litres of water per year, roughly 0.27 billion litres per year, spent on cooling ChatGPT's data centers. Sounds like a lot, huh, and it kinda sounds like a disaster for the environment. But I also found this:

"Facebook (Meta Platforms) uses water at its data centers to cool servers and maintain optimal humidity. Meta's total data center portfolio consumption was 663 million gallons (2.5 billion liters) of water, comprised of withdrawal of 956 million gallons (3.6 billion liters) of water, less discharge of 293 million gallons (1.1 billion liters) of water." https://dgtlinfra.com/data-center-water-usage/#:~:text=Facebook%20(Meta%20Platforms)%20uses%20water,(1.1%20billion%20liters)%20of%20water
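If you want to check or tweak these numbers yourself, here is the same back-of-envelope maths as a tiny script. Both input figures are just the rough assumptions quoted above, not measured data.

```python
# Back-of-envelope estimate of ChatGPT's annual cooling-water use, using the
# rough figures quoted above. Both inputs are assumptions, not measurements.
LITRES_PER_SESSION = 0.5       # ~500 ml per 5-50 prompt session (Ren's estimate)
SESSIONS_PER_DAY = 1_500_000   # reported daily interactions (very rough figure)

litres_per_week = SESSIONS_PER_DAY * 7 * LITRES_PER_SESSION
litres_per_year = litres_per_week * 52

print(f"{litres_per_week:,.0f} litres per week")   # 5,250,000 litres per week
print(f"{litres_per_year:,.0f} litres per year")   # 273,000,000 litres per year
# For comparison: Meta reported ~2.5 billion litres consumed across its data centres in a year.
```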
And okay, okay, running the numbers against the quoted source, ChatGPT's water use actually comes out below Facebook's, or at worst in the same ballpark. And the electricity used by all AI is still less than the energy used by Facebook, YouTube, Instagram and the other social media put together. So then, maybe it would be better not to just hate all AI (or even just all generative AI) while paying no attention to Meta Platforms etc., but to push for changes for the better in at least some of them, to demand at the very least the water reuse that Meta has promised, if you really care so much about harm to the environment, wasted electricity and greenhouse gases?
#ai#generative ai topic talking#research#hear me out#chatgpt#ai technology#greenhouse#environment#environment protection#ecology#character ai#c ai#important#reason
2 notes
Text
Oh yes, in response to the "advice" to find friends and roleplay with them... (saying it again, right now too: I'm not trying to sound like a c.ai apologist, that's not my intention.) Apparently, these people have never dealt with something like a fear of people. Imagine a situation where you seem to have found a good friend, you enjoy talking to them, you feel happy... until you see their real character and realize this person is far from as good as you thought, and is capable of disgusting words and actions. You can say, "lol, it's your own problem that you didn't figure it out in time." But this is the Internet, and people can invent a completely different identity here than the one they have in reality. Sometimes it is very hard to tell what kind of person someone really is, sometimes you don't see any red flags, and it can take a long time before you finally understand that something is wrong. With an AI, you know what to expect. For some, that causes less fear than communicating with real people. I feel safe there, and I believe, no, I KNOW, that I'm not alone in this feeling. And also, apparently, you must have very good self-esteem and plenty of confidence in your writing skills if you have the courage to roleplay with real people. I have enough of it myself; don't think I'm THAT scared of people. But some of my friends feel insecure about these things, and I know they're not alone. I can understand them, so I can't blame them for it.
I know that not everyone is like the friends I just mentioned, but you still can't ignore this category of chatbot users. And calling people like that "pathetic" is... a really intolerant and dismissive view, to put it with restraint. That's it for now, but I still have more to say on the topic of AI. See ya soon, I guess.
I just got a wild and long ask about Character AI and my only comment is that I still hate generative AI with a burning passion. It's bad for the environment (Generative AI uses a LOT of power and water to run), trained on stolen data from artists/writers without their consent to make cheap knockoffs, and isn't as fun as bothering my best friend to roleplay some stupid characters in our DMs.
While on the subject of Character AI, I've seen people make AI bots based on me. I do NOT consent to have ANY bots made to imitate me or any of my characters. There are no exceptions to this rule, I will report these bots and get them taken down. It makes me incredibly uncomfortable that people are making bots based on me and my personality. I'm a real person. Please treat me like one.
3K notes
Text
i know it would only be an unnecessary pain in the ass for him if i did but it does bother me not to respond like THANK YOU SO MUCH to my gp's messages to me through the healthcare online portal thing. he has a tendency to sit and answer messages and prescription refill requests late as hell in the evening because that's when he finally has the time and it feels so wrong to not reply even if it's entirely unnecessary and he neither expects nor wants that
#i feel the same about customer support emails 😭#like they don't need an additional email of me thanking them for their help that's only gonna clog up their inbox even further#but it feels so rude lmao#i have thanked tumblr and irl merch support a few times though they're just so lovely to me always#lovely to everyone i assume but i have a tendency to write the most chaotic and desperate apologetic sounding emails because the concept of#emails in general stresses me the fuck out And i feel bad for bothering them even if i know that's their job#so they get this longass paragraph of me rambling that's definitely far more annoying than just a simple question#but the responses are so cute lol they always start off replying to all the off topic shit in a very personal way before getting into the#actual issue at hand#shoutout to the adobe support guy who apologised to me six times over the company's rules about older versions of their software#ur a real one#i will die on the hill that adobe is evil but their customer support people are extremely nice#and no doubt underpaid#nice once you finally get past their fuckass ai bot and get to talk to an actual person that is. nightmare#again i know it's a job but i've had plenty of unpleasant customer support interactions so the extra lovely ones make me smile :)
32 notes
Text
I passed my machine learning/ai/quantitative modelling exam, which means I'm officially done with that topic. But I think the feeling of being torn between "wow AI is great!" and "KILL IT NOW IMMEDIATELY" will last awhile. I'm super glad I took this seminar, even if it often caused me an existential crisis.
#OBVIOUSLY THIS IS A BROAD AND COMPLICATED TOPIC SO OF COURSE MY FEELINGS ARE BROAD AND COMPLICATED AS WELL#ai it's useful! it's damaging! it advances science! it's discriminating!#god it's both a super interesting & devastating topic.... oughhh#and i mean in this seminar we mainly focused on science-y things.#we mainly talked about biology and physics and simulations. we didn't really delve into social aspects.#but man... the things (generative) ai can do.... super cool! and super dangerous!#own#the sergeant speaks
7 notes
Text

#dragon's stupid thoughts#I'm trying my hardest guys#my propaganda ain't working yet#''well maybe WE want to play that game?!''#they don't know#at least we aren't ordering food from dominos so that's something#a different classmate was making some sorta left party propaganda cuz it fitted the topic some days ago#tbh. atp i see that being bullied for your opinions is better than for your interests#i haven't worked on that anti gen ai presentation for a long while...#yesterday I watched a video about some yt account that made ai generated videos which unfortunately are immensely popular#as this is a crime for itself already there's more#the content doesn't equal the channels description#the description says something like ''cute cat videos'' and the thumbnails are pregnant furry cats with a hole in their belly#where kittens are looking out.#very very disturbing stuff. even for me. and ITS TARGETED TOWARDS KIDS#now this is truly against the yt tos and the youtuber was asking his community if they all can go and report them#with success!#besides such comments there were unfortunately other comments saying ''ai is bad BUT this one ai made song is awesome''#like. come on man... you were so close...#that ruined my mood. sorta. it just made me so mad again#i was once watching some news where they were also talking about ai and showed a teen entering#what's 10+50 into chatgpt... i was so close to killing everything around me and then myself#WHY
7 notes
Text
The thing about using generative AI for roleplaying practice is that you do not learn essential communication skills. You do not learn how to plan out of character with your roleplay partner, you do not learn how to build a satisfying story or greater narrative for your characters, you don't learn how to connect these threads together. You also simply do not learn how to make roleplaying an enjoyable writing experience for your partner, you're training yourself to see them as a tool for you to obtain your pleasures as a writer. You're seeing them as about as important as any other element that shakes up your story to prevent writer's block, the same way you'd use a writing prompt or a dice roll.
You do not learn patience of waiting for someone to craft a reply, it comes to you instantaneously, with no clarifying questions if they might have misunderstood your previous passage. It just assumes, it hallucinates, it regurgitates. So when you move to roleplaying with a real person you find yourself frustrated when it takes a while. You find it annoying that they ask for you to clarify something. You start to hate having to actually talk to someone—it feels like a chore.
When you're done with a plot thread with something like Character AI you can close the window to drop the topic instantly. You do not have to learn to resolve it—you can't after all when the bot is programmed to always get the last word in. You do not concern yourself with questions of if that roleplaying experience was enjoyable for the person on the other side of it, because you have grown accustomed to there not being a person on the other side. Everything ends abruptly at your whim.
AI will either hallucinate something so wildly different than what you wanted that you will be forced to rein it in, hit whatever undo button the bot provides you with, and tweak things until it cooperates, or it will never surprise you. You'll chat with 8 different characters and they all speak more or less the same, or they speak about the same things, or they'll perform the same predictable actions with a new coat of paint slapped on top of it. Because they're all ultimately pulling from the same pool of other people's words.
Generative AI will not offer you anything new to learn that roleplaying with people will not teach you. Often, writing with peers will teach you these things better. You may find yourself giving credit to the AI for teaching you these skills or providing you with easy practice, but those things are not the bot. Anything of merit that rises from these conversations with bots you will realize came from yourself, making the best of what you were given if it made no sense. This satisfaction is from you, and you could find that elsewhere without any of the downsides that comes with generative AI. There is little value to be found in speaking to a glorified predictive text algorithm trained on the amalgamation of works of real people you could be speaking to with your limited time on earth. If you really think that will be more rewarding, just write. You don't have to roleplay. Just write your own words by yourself. Don't waste your time on a bot that has nothing good to teach you.
-Mod Sneasel
Some videos if you want to continue the conversation on generative AI. Just for fun, since I actually keep up with a lot of AI-related topics in my spare time. These are nowhere near all of them, just the ones that I felt had a nugget of relevancy without themselves being made by AI.
Video discussing specifically fiction writing using AI and the pitfalls within it. Note: The creator of this video states in a community post that they use ProWritingAid, and that the video is more a criticism of ChatGPT and the dubious claims of AI fiction writers.
Video about the debate sparked by NaNoWriMo claiming that not supporting AI is racist and ableist. Note: This video centers around backlash against ProWritingAid as a sponsor.
Video explaining the controversy surrounding someone trying to create an AI teaching tool for writers. There is a focus on the concern of piracy and copyright infringement, and while the tool is meant to provide a detailed analysis of the text you put into it, there were discussions of adding GenAI in the future. The point is, the video breaks down why using other people's stolen works "purely for learning" can still be controversial.
A comedic and generalized conversation about the current state of AI.
#mod sneasel#on the topic of generative ai#dont spend your life talking to an unfeeling algorithm#i guarantee you are a better writer than it anyway. you are funnier than whatever it will spit back out at you#queued posts
5 notes
Text
Doing Homework aka scrolling tumblr as per usual, but saving every post with sources about why chatgpt and the like are not the great solution to every problem ever, bc my english seminar supervisor already teased that we'll extensively work with ai to find out how we can use it for our job as teachers and I Really Don't Want To Do That, but if I have to in order to graduate I at least want to be able to complain and rant with proof
#already had 1 session on the topic in my main seminar and my supervisor reacted really well and listened and we talked it out#and she adjusted the task for me so I could do smth meaningful with the time on the topic without using ai#but that's not gonna work again with my other seminar generally and especially if we're doing this for multiple sessions#UGH#bente rambles
3 notes
Text
the neverending desire to get my parents into some of the things i like vs the knowledge that they just dont really care that much
#tried to get my dad into ace attorney. he called it boring when he hadnt even finished turnabout sisters yet#my mom doesnt like anything too complicated which means utena is out of the question#they just. generally dont have much of an interest in what i like#which isnt too surprising considering they're in their late 40s and also have interests very different from mine in general#but they also dont like playing games together as a family very much so. not too shocking they wouldnt want to get into the things i like#at least tonight i got to have a good conversation about vocaloid with my dad#we were talking about ai and i got onto the topic of vocaloid#so at least my dad has a good opinion on vocaloid thats not just 'they sound like chipmunks'
2 notes
Text
[embedded YouTube video]
Yeah, no. There need to be laws in place to stop it from being used to spread misinformation. I wrote about this in a paper several months ago... This is NOT how you should use technology. It's unethical, misleading, and can do damage to real rescue organizations, as people will start thinking all rescue videos are fake and stop donating to them.
Sincerely, fuck this.
#important#fuck ai#ai misuse#misinformation#disinformation#unethical AI use#ai generated#ai generated videos#spread awareness#this is a super important topic#and it needs to be addressed and talked about#Youtube
0 notes
Text
part of my masters course has been learning about large language models like chatgpt and how they work - cus theyre being used for translation - and honestly i dont know how anyone can believe what they have to say
their "translation features" work the same as neural machine translation except worse cus the amount of bilingual data fed into them is only a relatively small portion of it, and cus they hallucinate, and cus they will accept whatever "correction" theyre offered, even if its actually wrong
but cus they're large language models w access to tons of concordance lines to base their "style" off of, translations they produce are structured and read more fluently, which we're led to believe is a sign of quality (linguistic fluency of the target text is actually one of the main ways translation quality is measured across the board) but that perceived quality means nothing if it's literally just made up and not something that appears in the source text
#sorry i wrote about this for an assignment last semester#i have a lecturer who is in LOVE with this topic - gives v brainspicy vibes when he starts talking about it#and i feel bad cus yes it is interesting but like;;;#yknow when people say theres a fine line between love and hate? im on the opposite side of that line from him about AI#the more i learn the more i hate it#and the safer i feel in my career choice#the bubble will burst soon#kath shouts into the void#im sure theres something here about generative AI being comparable to a skillful con artist#like if their words sound pretty enough we're inclined to believe them even tho theyre bullshit
1 note
Text
Tumblr can never be my main means of engaging in politics and it comes down almost entirely to Tumblr's pathological need to distill The Right Opinion:tm: from any complicated issue.
It's always the most important thing. Not because it helps solve the issue or helps the people impacted, but because The Right Opinion:tm: is a proxy for you, morally, as a person. And every issue needs to be broken into the language that sets the stances of Make You Good or Make You Bad.
And I don't mean this in any generic statement about echo chambers or virtue signaling. Those are separate but related concepts. What I'm talking about is how people are nervous about a topic until one doctrine is crafted which defines the Sports Team Color of our Sports Team, so we can be identified as being on the Us Sports Team, and absolutely not on the Them Sports Team. Because this issue is actually about you and the proxy for you as a person and how people should perceive you so, really, the sooner we figure out the Home Sports Team Colors the sooner you can stop feeling worried.
The moment something new happens is usually the first and last time you'll actually see a range of opinions on it. And some of that is fueled by misinformation! Some in bad faith! When dust settles and clarity is achieved, this helps combat those things, but it's also the moment when the Loudest and most Articulate voices craft the Zeitgeist Opinion and everyone comes to roost around it.
You get people on this site pissed off at AI models that can diagnose cancer from a research paper in 2019 because The Right Opinion is that AI is bad. If you even see a post trying to articulate good uses of AI, well that's someone wearing Packers colors at a Vikings home game, and if you wanna make a point in the "wrong" direction you better be damn articulate about it.
A well-defined set of actions are transphobic. Another set are actually not transphobic, and you'd be transphobic for thinking so. Are you trans, and your actual lived experience differs? Get articulate real fast or shut up. You might be able to eke out an exception for yourself, but it's going to require a 10-paragraph post justifying your claim. If you're REALLY good at it though, you might be able to rewrite the Zeitgeist, and now anyone who disagrees with you is transphobic. Teams switch uniform styles every now and then, after all.
And it's such a farce because so often it's not actually about the topic at hand. It's about why you should be allowed to be perceived as a good person while toeing outside the fringes of The Right Opinion, why you aren't actually quitting the faith or committing blasphemy or deserving of exile for going off the written word. Or if someone really IS trying to make it about the topic at hand, the ensuing slapfight in the comments needs to be about whether OP has sinned against the covenant.
It's not helpful.
4K notes
Text
how c.ai works and why it's unethical
Okay, since the AI discourse is happening again, I want to make this very clear, because a few weeks ago I had to explain to a (well meaning) person in the community how AI works. I'm going to be addressing people who are maybe younger or aren't familiar with the latest type of "AI", not people who purposely devalue the work of creatives and/or are shills.
The name "Artificial Intelligence" is a bit misleading when it comes to things like AI chatbots. When you think of AI, you think of a robot, and you might think that by making a chatbot you're simply programming a robot to talk about something you want them to talk about, and it's similar to an rp partner. But with current technology, that's not how AI works. For a breakdown on how AI is programmed, CGP grey made a great video about this several years ago (he updated the title and thumbnail recently)
[embedded YouTube video]
I HIGHLY HIGHLY recommend you watch this because CGP Grey is good at explaining, but the tl;dr for this post is this: bots are made with a metric shit-ton of data. In C.AI's case, the data is writing. Stolen writing, usually scraped fanfiction.
How do we know chatbots are stealing from fanfiction writers? It knows what omegaverse is [SOURCE] (it's a Wired article, put it in incognito mode if it won't let you read it), and when a Reddit user asked a chatbot to write a story about "Steve", it automatically wrote about characters named "Bucky" and "Tony" [SOURCE].
I also said this in the tags of a previous reblog, but when you're talking to C.AI bots, it's also taking your writing and using it in its algorithm: which seems fine until you realize 1. They're using your work uncredited 2. It's not staying private, they're using your work to make their service better, a service they're trying to make money off of.
"But Bucca," you might say. "Human writers work like that too. We read books and other fanfictions and that's how we come up with material for roleplay or fanfiction."
Well, what's the difference between plagiarism and original writing? The answer is that plagiarism is taking what someone else has made and simply editing it or mixing it up to look original. You didn't do any thinking yourself. C.AI doesn't "think" because it's not a brain, it takes all the fanfiction it was taught on, mixes it up with whatever topic you've given it, and generates a response like in old-timey mysteries where somebody cuts a bunch of letters out of magazines and pastes them together to write a letter.
(And might I remind you, people can't monetize their fanfiction the way C.AI is trying to monetize itself. Authors are very lax about fanfiction nowadays: we've come a long way since the Anne Rice days of terror. But this issue is cropping back up again with BookTok complaining that they can't pay someone else for bound copies of fanfiction. Don't do that either.)
Bottom line, here are the problems with using things like C.AI:
It is using material it doesn't have permission to use and doesn't credit anybody. Not only is it ethically wrong, but AI is already beginning to contend with copyright issues.
C.AI sucks at its job anyway. It's not good at basic story structure like building tension, and can't even remember things you've told it. I've also seen many instances of bots saying triggering or disgusting things that deeply upset the user. You don't get that with properly trigger tagged fanworks.
Your work and your time put into the app can be taken away from you at any moment and used to make money for someone else. I can't tell you how many times I've seen people who use AI panic about accidentally deleting a bot that they spent hours conversing with. Your time and effort is so much more stable and well-preserved if you wrote a fanfiction or roleplayed with someone and saved the chatlogs. The company that owns and runs C.AI can not only use whatever you've written as they see fit, they can take your shit away on a whim, either on purpose or by accident due to the nature of the Internet.
DON'T USE C.AI, OR AT THE VERY BARE MINIMUM DO NOT DO THE AI'S WORK FOR IT BY STEALING OTHER PEOPLES' WORK TO PUT INTO IT. Writing fanfiction is a communal labor of love. We share it with each other for free for the love of the original work and ideas we share. Not only can AI not replicate this, but it shouldn't.
(also, this goes without saying, but this entire post also applies to ai art)
#anti ai#cod fanfiction#c.ai#character ai#c.ai bot#c.ai chats#fanfiction#fanfiction writing#writing#writing fanfiction#on writing#fuck ai#ai is theft#call of duty#cod#long post#I'm not putting any of this under a readmore#Youtube
6K notes
Text
There is no such thing as AI.
How to help the non-technical and less online people in your life navigate the latest techbro grift.
I've seen other people say stuff to this effect but it's worth reiterating. Today in class, my professor was talking about a news article where a celebrity's likeness was used in an ai image without their permission. Then she mentioned a guest lecture about how AI is going to help finance professionals. Then I pointed out, those two things aren't really related.
The term AI is being used to obfuscate details about multiple semi-related technologies.
Traditionally in sci-fi, AI means artificial general intelligence, like Data from Star Trek or the Terminator. This, I shouldn't need to say, doesn't exist. Techbros use the term AI to trick investors into funding their projects. It's largely a grift.
What is the term AI being used to obfuscate?
If you want to help the less online and less tech literate people in your life navigate the hype around AI, the best way to do it is to encourage them to change their language around AI topics.
By calling these technologies what they really are, and encouraging the people around us to know the real names, we can help lift the veil, kill the hype, and keep people safe from scams. Here are some starting points, which I am just pulling from Wikipedia. I'd highly encourage you to do your own research.
Machine learning (ML): is an umbrella term for solving problems for which development of algorithms by human programmers would be cost-prohibitive, and instead the problems are solved by helping machines "discover" their "own" algorithms, without needing to be explicitly told what to do by any human-developed algorithms. (This is the basis of most of the technology people call AI.)
Language model (LM or LLM): is a probabilistic model of a natural language that can generate probabilities of a series of words, based on text corpora in one or multiple languages it was trained on. (This would be your ChatGPT; a toy sketch of the core idea follows this list.)
Generative adversarial network (GAN): is a class of machine learning framework and a prominent framework for approaching generative AI. In a GAN, two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss. (This is the source of some AI images and deepfakes.)
Diffusion Models: Models that learn the probability distribution of a given dataset. In image generation, a neural network is trained to denoise images with added gaussian noise by learning to remove the noise. After the training is complete, it can then be used for image generation by starting with a random noise image and progressively denoising it. (This is the more common technology behind AI images, including Dall-E and Stable Diffusion. I added this one to the post after as it was brought to my attention it is now more common than GANs.)
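To make the "language model" entry above a bit more concrete, here is a deliberately tiny toy sketch using only the standard Python library. It is nothing like a real neural LLM in scale or mechanism; it only illustrates what "a probabilistic model that generates probabilities of a series of words" means: count which words follow which, then sample from those probabilities.

```python
# Toy "language model": learns next-word probabilities from example text.
# A deliberately tiny illustration, not how real LLMs work internally.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probabilities(word):
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        probs = next_word_probabilities(words[-1])
        if not probs:
            break
        # Sample the next word according to the learned probabilities.
        words.append(random.choices(list(probs), weights=list(probs.values()))[0])
    return " ".join(words)

print(next_word_probabilities("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(generate("the"))                 # e.g. "the dog sat on the mat . the cat"
```

Real language models do the same basic job with billions of learned parameters instead of a word-pair table, which is why their output reads fluently, but the output is still only "what word probably comes next", not understanding.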
I know these terms are more technical, but they are also more accurate, and they can easily be explained in a way non-technical people can understand. The grifters are using language to give this technology its power, so we can use language to take its power away and let people see it for what it really is.
12K notes
Text
Confusion

(k0Libra ramblings are under the cut)
Did you know that if you set up an LLM incorrectly, it will generate text without any user input, infinitely "talking" to itself? Generating text is the sole goal of an LLM, but this behaviour really shows that modern "AI" has no idea it's even talking to someone.
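(If you want to see that behaviour for yourself, here's a minimal sketch assuming the freely available GPT-2 model and the Hugging Face transformers library, which is not what any particular commercial chatbot runs. The loop never takes any user input; it just keeps feeding the model its own output, so it rambles forever.)

```python
# Minimal sketch: a small language model left to "talk" to itself forever.
# Requires the transformers and torch packages; gpt2 is just a tiny example model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = torch.tensor([[tok.bos_token_id]])  # start from nothing but a "beginning of text" token
while True:
    out = model.generate(ids, max_new_tokens=30, do_sample=True,
                         pad_token_id=tok.eos_token_id)
    print(tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True))
    ids = out[:, -512:]  # feed its own output back in; no user ever prompts it
```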
I doubt that the androids in D:BH run the same kind of "AI" that we have now, because that would undermine the game's narrative. I'm inclined to think their AI is engineered by replicating the human brain in machine form. I'm thinking that also because thirium was essential for android creation: for some reason it was impossible to create them with conventional computational machines. It makes sense, I suppose, since we don't have enough power to recreate brains, even now.
This brings up a very interesting point: humans played god again with something they don't fully understand, the human brain. There's a high probability that we'll never figure out how it works. That makes deviancy somewhat expected; how can you control something when you don't know how it works?
For me, cases of critical malfunction in software and hardware are very interesting topics, so I decided to paint this type of idea anyway.
#unironically this is the heaviest piece that I've done in the last 2 years#he's having the worst time here#art#my art#fan art#dbh#detroit become human#connor rk800#dbh connor#dbh rk800#rk800#rk800 dbh
961 notes
Text
🌸Welcome🌸
⋆˚✿˖° About me °˖✿˚⋆
I'm Meru, she/her, 19 years old. I mostly draw my original yanderes. I'm still new to writing instead of telling my stories purely through art so please bear with me :D
⋆˚✿˖° Commission Info °˖✿˚⋆
Please read my TOS before commissioning!
⋆˚✿˖° Rules °˖✿˚⋆
‼️Minors DNI‼️
🚫Stay away from my blog if you use and/or support the use of generative AI🚫
🚫Do not repost my art without credits, if you want to share it on tumblr or twitter just reblog/retweet and if you want to do it on another site give a link to the original post and write my name🚫
‼️This account contains yandere and non-con content, if you are uncomfortable with these topics please block me‼️
‼️I only draw and write female darlings but I'm fine with male or gn darlings being used while creating fan-content of my characters‼️
While I am ok with most things, I won't be answering asks that are too personal. While all traumas, coping mechanisms, sexual identities and experiences deserve to be recognized, I'm not a professional and can make mistakes handling certain topics.
I read every single ask I get, sometimes it's hard thinking of an answer for them or something similar has been asked before so please don't take it personally if I fail to reply to you. Also while I sometimes reply to certain asks about my OCs with drawings, I don't take requests so please don't request me to draw a certain type of character.
I'm ok with you making fanart, fanfic or other fan content of my characters as long as you credit me and you are free to tag me if you want me to see it!
Please don't send me asks and/or dms just saying "hello", talking about how your day went or how you are feeling.
!!Before you send an ask about Silas or Elias!!
🌸Masterlist🌸
The app and brushes I use for drawing
⋆˚✿˖° Have fun! °˖✿˚⋆
#intro post#introduction#pinned intro#introductory post#blog intro#pinned post#yandere#digital art#artists on tumblr#male yandere#art#yandere boy aesthetic#yandere aesthetic#aestethic#yandere male#yandere x reader#yandere x you#yandere x darling#yandere x y/n#elias#silas#yandere elf#yandere pretty boyfriend
893 notes
Note
I don't quite understand your analogy of generative ai as a magic eight ball. I also thought you wanted to avoid being too reductive toward the topic?
so it stems from a post i made about AI a bit ago to illustrate the divide between "AI", the cultural object, and "LLMs" (and indeed, more broadly "machine learning"), the actual cluster of technologies. obviously, i think LLMs are more impressive technologically and probably have more legitimate uses than a magic 8ball -- but the point of the analogy is that, like, the cluster of claims about and social effects of "AI", the cultural object, are completely detached from its real capabilities, and so arguing over the tech itself as though the connection is actually substantive is vacuous.
like, to kind of put this into practice: for any given problem being 'caused' by chatGPT, you can swap 'chatgpt' out for 'a magic 8ball', then think about whether the problem would still exist if magic 8balls had billions of dollars in marketing telling you they're super smart and they're gonna take over the world. stuff like "people asking chatgpt for help with high stakes things that it fucks up because it's a silly talking computer", yknow, that is really not about anything inherent to LLMs (although i would note that ofc the tendency of the mass-market ones towards sycophancy and confidence exacerbates this) but simply what happens when you extensively advertise a technology as having the capability to advise you on or even make decisions. does that make sense?
219 notes