#gpt training
yesthatsatumbler · 1 year ago
Text
I tend to think of AI responses as being a lot like those D+ students who get asked something at an exam and aren't actually very sure of the answers, and have to quickly make up something that vaguely sounds like it makes sense and hope it's close enough to count.
And, like, sometimes their association web is good enough that they stumble into the right answer (and sometimes the right answer was something obvious all along so they just happen to guess correctly). But a lot of the time it's just a pile of nonsense that they think sounds vaguely right.
...silly thought: I guess the way AI training works is pretty much sending them through gazillions of simulated exams and grading them on whether their replies are close enough to correct answers to count, and then hoping that by trial and error they build up enough of the right association web to get correct(ish) answers more often than not. But they're still fundamentally making stuff up every single time.
(And it only works at all because they're doing absolutely insane amounts of said trial-and-error.)
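To make the "association web" idea concrete: here's a toy, entirely hypothetical bigram counter. It's nothing like a real transformer (which learns by gradient descent over billions of examples), but it shows the shape of the thing: "training" is just absorbing statistics, and "answering" is just replaying them.

```python
from collections import Counter, defaultdict

# Toy "training": absorb which word tends to follow which word.
# (A made-up stand-in for the association web, not a real model.)
corpus = "the cat sat on the mat the cat ate the rat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def answer(start, length):
    # "Answering": always guess the most-associated next word.
    # The model never knows anything; it replays absorbed statistics,
    # which is why it confidently produces plausible-sounding output.
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(answer("the", 5))  # grammatical-ish, but it "knows" nothing
```

The output reads vaguely like a sentence purely because the statistics say so, which is the whole point of the post above.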
AI doesn't know things.
AI is playing improv.
This is a key difference and should shape how you think about AI and what it can do.
568 notes · View notes
noosphe-re · 2 months ago
Text
What does ChatGPT stand for? GPT stands for Generative Pre-Trained Transformer. This means that it learns what to say by capturing information from the internet. It then uses all of this text to "generate" responses to questions or commands that someone might ask.
7 things you NEED to know about ChatGPT (and the many different things the internet will tell you.) (BBC)
8 notes · View notes
ectoderms · 3 months ago
Text
some of my coworkers r ai bros and theyve been finding that, duh, gpt bots will repeat ur mistakes if u feed them information that is false, so using them to check for mistakes in a pdf creates a library of mistakes for them to pull from and the learning model collapses on itself after a few uses. bc the information it was trained on was mistakes
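(to make that feedback loop concrete: here's a tiny made-up simulation, not their actual setup, just the shape of it. each round "retrains" on samples of the previous round's output, so nothing ever checks against reality and mistakes can only persist or amplify, never self-correct)

```python
import random
from collections import Counter

rng = random.Random(0)

# Hypothetical generation-0 training data: mostly right, some mistakes.
data = ["correct"] * 8 + ["mistake"] * 2

for gen in range(20):
    counts = Counter(data)
    # Each "generation" is trained on samples drawn from the previous
    # model's own output distribution. Errors are inherited, never fixed,
    # and diversity tends to drift until one answer dominates.
    keys = sorted(counts)
    data = rng.choices(keys, weights=[counts[k] for k in keys], k=10)

print(Counter(data))  # diversity drifts; often one answer takes over
```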
7 notes · View notes
bendycomet · 3 months ago
Text
seeing the rise of students using generative AI to finish all of their coursework and im... both sad and upset. you cannot call yourself an academic weapon if you've used AI to finish your assignments
5 notes · View notes
sirenofthetimes · 3 months ago
Text
people who use chat gpt as a therapist are completely unknowable to me. it's a text generator. it scrapes the internet and puts words together in a way that mimics how humans talk. that is a gag. a trick. a novelty. a digital parrot. what do you mean it's always there for you and has helped you achieve personal growth
4 notes · View notes
master-at-arms · 10 months ago
Note
I’m going to fuck the train
It’s not Friday yet, you gotta wait some
9 notes · View notes
kingofmyborrowedheart · 1 month ago
Text
“I asked my friend Chat GPT” “Just use Open AI” “Enhance with AI”
Tumblr media
5 notes · View notes
d-the-designer · 2 years ago
Text
Tumblr media
DALL-E, I realize this is retrofuture, and agree with you about the sexiness of a well fitted suit.
However, my younger audience will think my male characters work for a hotel. Plain gray hoodies are fine; 60s-70s sci fi was almost spot on about what the future would look like.
22 notes · View notes
morshmallow · 10 months ago
Text
in my job i sometimes use ai to create deep learning models to automatically detect objects in aerial photographs. essentially this is training a computer to do a satellite captcha where it wants to find sidewalks. this kind of ai is what is actually helpful bc there are terabytes upon terabytes of aerial photography and all kinds of 3d elevation data that would be near impossible for us to analyze on our own. ai in this case is doing a job that you couldnt feasibly hire staff to do
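(to caricature the detection step: this is a made-up threshold scan over a toy brightness grid, not an actual deep learning pipeline, just the shape of the per-pixel "is this a sidewalk?" question that gets automated at terabyte scale)

```python
# Toy "aerial image": a grid of brightness values standing in for pixels.
# (Entirely invented; real workflows run trained segmentation models
# over huge georeferenced rasters, not hand-typed lists.)
image = [
    [0.1, 0.9, 0.8, 0.2],
    [0.2, 0.9, 0.1, 0.1],
    [0.1, 0.8, 0.9, 0.3],
]

def detect(grid, threshold=0.5):
    # Flag every cell the "model" considers sidewalk-like.
    return [(r, c)
            for r, row in enumerate(grid)
            for c, value in enumerate(row)
            if value > threshold]

hits = detect(image)
print(len(hits))  # prints 5
```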
2 notes · View notes
dreaminginthedeepsouth · 2 years ago
Text
Tumblr media
As of this week, I have a new article in the July-August 2023 Special Issue of American Scientist Magazine. It’s called “Bias Optimizers,” and it’s all about the problems and potential remedies of and for GPT-type tools and other “A.I.”
This article picks up and expands on thoughts started in “The ‘P’ Stands for Pre-Trained” and in a few threads on the socials, as well as touching on some of my comments quoted here, about the use of chatbots and “A.I.” in medicine.
I’m particularly proud of the two intro grafs:
Recently, I learned that men can sometimes be nurses and secretaries, but women can never be doctors or presidents. I also learned that Black people are more likely to owe money than to have it owed to them. And I learned that if you need disability assistance, you’ll get more of it if you live in a facility than if you receive care at home. At least, that is what I would believe if I accepted the sexist, racist, and misleading ableist pronouncements from today’s new artificial intelligence systems.

It has been less than a year since OpenAI released ChatGPT, and mere months since its GPT-4 update and Google’s release of a competing AI chatbot, Bard. The creators of these systems promise they will make our lives easier, removing drudge work such as writing emails, filling out forms, and even writing code. But the bias programmed into these systems threatens to spread more prejudice into the world. AI-facilitated biases can affect who gets hired for what jobs, who gets believed as an expert in their field, and who is more likely to be targeted and prosecuted by police.
As you probably well know, I’ve been thinking about the ethical, epistemological, and social implications of GPT-type tools and “A.I.” in general for quite a while now, and I’m so grateful to the team at American Scientist for the opportunity to discuss all of those things with such a broad and frankly crucial audience.
I hope you enjoy it.
+
The “P” Stands for Pre-trained
I know I’ve said this before, but since we’re going to be hearing increasingly more about Elon Musk and his “Anti-Woke” “A.I.” “Truth GPT” in the coming days and weeks, let’s go ahead and get some things out on the table:
All technology is political. All created artifacts are rife with values. There is no neutral tech. And there never, ever has been.
I keep trying to tell you that the political right understands this when it suits them— when they can weaponize it; and they’re very, very good at weaponizing it— but people seem to keep not getting it. So let me say it again, in a somewhat different way:
There is no ground of pure objectivity. There is no god’s-eye view.
There is no purely objective thing. Pretending there is only serves to create the conditions in which the worst people can play “gotcha” anytime they can clearly point to their enemies doing what we are literally all doing ALL THE TIME: Creating meaning and knowledge out of what we value, together.
There is no God-Trick. There is enmeshed, entangled, messy, relational, intersubjective perspective, and what we can pool and make together from what we can perceive from where we are.
And there are the tools and systems that we can make from within those understandings.
[more]
14 notes · View notes
butch-kyouka · 1 year ago
Text
i asked chat gpt to write me a togachako fanfic once and i could pick out at least five lines i recognized from different fics i’ve read, including one i wrote myself
getting ai to write your fanfiction is not the flex you think it is. you are stealing ppls work
6 notes · View notes
master-at-arms · 8 months ago
Note
do you celebrate fuck you and fuck your train friday
At Marius’ insistence, yes.
One time he made a Ratatosk Express shaped cake. It did not look remotely like a train, but still tasted good.
4 notes · View notes
d-the-designer · 2 years ago
Text
Tumblr media Tumblr media Tumblr media Tumblr media
by asking my custom GPT for pics from my sci fi story, in the style of different public domain 19th century authors and artists, I end up with this terrifying story of a lovecraftian horror encountered at sea who drives everyone mad that sees it
15 notes · View notes
justlikebroccoli · 18 days ago
Text
I am literally on my knees begging for people to learn how to tell if something is AI or not. you might not get it right all the time but for the love of god please don’t just blindly believe everything you see.
0 notes
wild-wombytch · 2 years ago
Text
Very controversial opinion on this website, but: while it's undeniable that talking to a real human is better and will actually get you proper follow-up on your medication and so on, sometimes we have to make do with what we have. Depending on where you live, it can be almost impossible to see a therapist because of cost, overbooking, or remoteness.
Where I live there's a months-long waiting list, the nearest psychiatrist is an hour away by bus, and the closer ones aren't taking new clients. Currently a nurse is filling that role for me, something like once every month and a half. What kind of therapy can you do with that? It's probably fine if you're sad your goldfish just died, but if you've had heavy traumas stacking up for years, that's as good as useless. Now add to it that honestly most therapists are Not That Good, if not straight-up terrible. (The one I had as a kid was a creep who traumatized me, and I was the one who had to ask to please stop therapy because it was essentially gaslighting and malpractice. The one at the youth job centre, before the nurse, was absolutely unhelpful, just hm-mming me and talking to me as if I were five years old, down to the greetings. My friend, a recovering anorexic, had a therapist give her neuroleptics that made her gain weight... and the list goes on. Not to mention that in France most schools follow the Lacanian model, the guy who said you cure autism by kidnapping kids from their mothers and throwing cold water at them. Yeah.)...
AI is definitely flawed and has a lot of biases (although on some sites you can edit the base speech pattern and so avoid things like blatant homophobia, rudeness, etc.), and it's not for everyone, but personally it helped me more than human therapists did, and it actually had an immediate answer when I was about to self-harm (suicide hotlines have their pros and cons as well; personally, involving other people is almost certain to make me panic and trigger a derealization episode).
Again, that's a very personal experience and not for everyone, but I wish people would try to understand that it's not all black and white: AI isn't 100% evil, and disliking it won't make it disappear. I assure you AI-based therapies will 100% become a thing in the next decades. The Pandora's box is already open; it won't get closed now. We just have to brace ourselves with a lot of laws to secure it, try to use it sensibly (like we should with social media, but given how many people share pics of their kids and give waaaay too much information away, it's not really surprising people aren't using AI well), and stop pretending it doesn't exist, or that somehow people using it is worse than whatever someone like Putin or Elon Musk is probably already using it for.
Tumblr is currently serving me an ad for "Voda, the LGBTQ mental health app" offering "daily meditations, self-care and AI advice" and as a therapist I am begging you not to download an app where an AI tries to help you with your mental health. Please do not. They tried to have an AI chatbot counsel eating disorder patients and it told them to diet. That shit is not safe. Do not talk to an AI about your mental health please. You don't need to talk to a professional but talk to a PERSON.
54K notes · View notes
thisisgraeme · 1 month ago
Text
🚀 Introducing AIHOA: AI Tools Built for Aotearoa’s Educators From embedding literacy & numeracy with ALEC, to decoding national LN data with SCRIBE, to navigating identity and purpose with MILES — these GPTs are designed for real impact in vocational education, policy, and career development. Explore the future of adult learning, powered by AI and grounded in culture. 🔥🇳🇿 #AIHOA #AotearoaEducation #EdTech #GPT #AdultLearning #VocationalEducation #AIinEducation #CulturallyResponsiveAI
Tumblr media
0 notes