#and chatgpt
incorrectccrp · 8 months
Text
Ted: Okay. Explain quadratic equations to me like I'm 5.
Pete: Alright, imagine you have a toy car, and you want to know how far it will go when you push it. A quadratic equation is like a special formula that helps you figure out how far the car will go based on how hard you push it. It's distance = a×(time)² + b×(time) + c, where a, b, and c are like secret numbers that help us calculate, and time is how long you push the car.
Ted: Okay.... explain it like I'm 4.
Pete: Okay, imagine you have a big slide, and you want to know how fast a ball will go down the slide. The quadratic equation is like a magical rule that helps us find out exactly how fast the ball will go. It's like having a superpower to know everything about the ball and the slide! So, with the quadratic equation, we can predict how speedy the ball will be when it goes down the slide.
Ted: Like I'm 3.
Pete: Imagine you have a bouncy ball. The quadratic equation is like a special way to know how high the ball will bounce when you throw it. It's like a bounce prediction formula. Yay for bouncing fun!
Ted: Like I'm 2.
Pete: Bouncy ball go up. We use special math to know how much up.
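If you want to see the "secret numbers" in action, here's a minimal sketch of Pete's formula (the coefficient values are made up purely for illustration):

```python
# Pete's toy-car formula: distance = a * time**2 + b * time + c.
# a, b, c are the "secret numbers" (coefficients); the defaults
# below are arbitrary example values, not anything physical.

def distance(t, a=2.0, b=3.0, c=0.0):
    """How far the car has gone after t seconds, per the quadratic model."""
    return a * t**2 + b * t + c

for t in [0, 1, 2, 3]:
    print(f"after {t}s: {distance(t)} units")
```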
66 notes · View notes
memendoemori · 1 year
Text
Tumblr media
Good morning everybody
137K notes · View notes
softwaring · 9 months
Text
Tumblr media Tumblr media
this reply kills me 😭 article link
70K notes · View notes
Text
Happy September everybody, NaNoWriMo has decided to go "no YOU'RE the baddies" because no one likes their AI-ridden sponsor, lmao. In other news, multiple authors are suing OpenAI for copyright infringement.
8K notes · View notes
dduane · 1 year
Text
Well, this would be interesting...
29K notes · View notes
tulpafcker · 8 months
Text
if i woke up in taylor swift's body id tweet out "watching hetalia! it's SOOO funny" and turn my phone off
17K notes · View notes
purpleartrowboat · 1 year
Text
ai makes everything so boring. deepfakes will never be as funny as clipping together presidential speeches. ai covers will never be as funny as imitating the character. ai art will never be as good as art drawn by humans. ai chats will never be as good as roleplaying with other people. ai writing will never be as good as real authors
28K notes · View notes
lumsel · 2 years
Text
chinese room 2
So there’s this guy, right? He sits in a room by himself, with a computer and a keyboard full of Chinese characters. He doesn’t know Chinese, though, in fact he doesn’t even realise that Chinese is a language. He just thinks it’s a bunch of odd symbols. Anyway, the computer prints out a paragraph of Chinese, and he thinks, whoa, cool shapes. And then a message is displayed on the computer monitor: which character comes next?
This guy has no idea how the hell he’s meant to know that, so he just presses a random character on the keyboard. And then the computer goes BZZZT, wrong! The correct character was THIS one, and it flashes a character on the screen. And the guy thinks, augh, dammit! I hope I get it right next time. And sure enough, computer prints out another paragraph of Chinese, and then it asks the guy, what comes next?
He guesses again, and he gets it wrong again, and he goes augh again, and this carries on for a while. But eventually, he presses the button and it goes DING! You got it right this time! And he is so happy, you have no idea. This is the best day of his life. He is going to do everything in his power to make that machine go DING again. So he starts paying attention. He looks at the paragraph of Chinese printed out by the machine, and cross-compares it against all the other paragraphs he’s gotten. And, recall, this guy doesn’t even know that this is a language, it’s just a sequence of weird symbols to him. But it’s a sequence that forms patterns. He notices that if a particular symbol is displayed, then the next symbol is more likely to be this one. He notices some symbols are more common in general. Bit by bit, he starts to draw statistical inferences about the symbols, he analyses the printouts every way he can, he writes extensive notes to himself on how to recognise the patterns.
Over time, his guesses begin to get more and more accurate. He hears those lovely DING sounds that indicate his prediction was correct more and more often, and he manages to use that to condition his instincts better and better, picking up on cues consciously and subconsciously to get better and better at pressing the right button on the keyboard. Eventually, his accuracy is like 70% or something -- pretty damn good for a guy who doesn’t even know Chinese is a language.
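The pattern-counting the man is doing can be sketched as a toy bigram model -- a deliberately tiny stand-in for the statistics a real language model learns. The training strings below are made up for illustration:

```python
# Count which symbol tends to follow which, then predict the most
# frequent successor. This is the crudest possible version of the
# man's notes: pure frequency statistics, no understanding required.
from collections import Counter, defaultdict

def train(printouts):
    """Tally symbol -> next-symbol frequencies across all printouts."""
    follows = defaultdict(Counter)
    for seq in printouts:
        for cur, nxt in zip(seq, seq[1:]):
            follows[cur][nxt] += 1
    return follows

def predict(follows, symbol):
    """Guess the most likely next symbol; None if symbol was never seen."""
    if symbol not in follows:
        return None
    return follows[symbol].most_common(1)[0][0]

printouts = ["the cat sat", "the cat ran", "the dog sat"]
model = train(printouts)
print(predict(model, "t"))  # 'h' -- the most common successor of 't' here
```

Real models condition on far more context than one symbol, but the principle is the same: guess what comes next, get corrected, adjust the counts.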
* * *
One day, something odd happens.
He gets a printout, the machine asks what character comes next, and he presses a button on the keyboard and-- silence. No sound at all. Instead, the machine prints out the exact same sequence again, but with one small change. The character he input on the keyboard has been added to the end of the sequence.
Which character comes next?
This weirds the guy out, but he thinks, well. This is clearly a test of my prediction abilities. So I’m not going to treat this printout any differently to any other printout made by the machine -- shit, I’ll pretend that last printout I got? Never even happened. I’m just going to keep acting like this is a normal day on the job, and I’m going to predict the next symbol in this sequence as if it was one of the thousands of printouts I’ve seen before. And that’s what he does! He presses what symbol comes next, and then another printout comes out with that symbol added to the end, and then he presses what he thinks will be the next symbol in that sequence. And then, eventually, he thinks, “hm. I don’t think there’s any symbol after this one. I think this is the end of the sequence.” And so he presses the “END” button on his keyboard, and sits back, satisfied.
Unbeknownst to him, the sequence of characters he input wasn't just some meaningless string of symbols. See, the printouts he was getting, they were always grammatically correct Chinese. And that first printout he'd gotten that day in particular? It was a question: "How do I open a door?" The string of characters he had just input, what he had determined to be the most likely string of symbols to come next, formed a comprehensible response that read, "You turn the handle and push".
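The feedback loop in this part of the story is, in miniature, autoregressive generation: each guess is appended to the sequence and prediction runs again until an end marker. A hedged sketch, with a canned `predict_next` standing in for the man's learned instincts (the prompt and reply strings are just the ones from the story):

```python
# Autoregressive loop: the model's own output becomes part of its
# next input. `predict_next` here is a hypothetical stand-in that
# emits a fixed reply one symbol at a time -- a real model would
# compute each symbol from learned statistics instead.

END = "<END>"

def predict_next(sequence):
    prompt = "How do I open a door"
    reply = "You turn the handle and push"
    generated = len(sequence) - len(prompt)  # symbols emitted so far
    if generated < len(reply):
        return reply[generated]
    return END  # "I don't think there's any symbol after this one."

def generate(prompt):
    sequence = prompt
    while True:
        nxt = predict_next(sequence)
        if nxt == END:
            return sequence[len(prompt):]
        sequence += nxt  # the guess is appended and fed back in

print(generate("How do I open a door"))
```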
* * *
One day you decide to visit this guy’s office. You’ve heard he’s learning Chinese, and for whatever reason you decide to test his progress. So you ask him, “Hey, which character means dog?”
He looks at you like you've got two heads. You may as well have asked him which of his shoes means "dog", or which of the hairs on the back of his arm. There's no connection in his mind at all between language and his little symbol prediction game; indeed, he thinks of it as an advanced form of mathematics rather than anything to do with linguistics. He hadn't even conceived of the idea that what he was doing could be considered a kind of communication any more than algebra is. He says to you, "Buddy, they're just funny symbols. No need to get all philosophical about it."
Suddenly, another printout comes out of the machine. He stares at it, puzzles over it, but you can tell he doesn’t know what it says. You do, though. You’re fluent in the language. You can see that it says the words, “Do you actually speak Chinese, or are you just a guy in a room doing statistics and shit?”
The guy leans over to you, and says confidently, “I know it looks like a jumble of completely random characters. But it’s actually a very sophisticated mathematical sequence,” and then he presses a button on the keyboard. And another, and another, and another, and slowly but surely he composes a sequence of characters that, unbeknownst to him, reads “Yes, I know Chinese fluently! If I didn’t I would not be able to speak with you.”
That is how ChatGPT works.
48K notes · View notes
river-taxbird · 28 days
Text
AI hasn't improved in 18 months. It's likely that this is it. There is currently no evidence the capabilities of ChatGPT will ever improve. It's time for AI companies to put up or shut up.
I'm just reiterating this excellent post from Ed Zitron, but it's not left my head since I read it and I want to share it. I'm also taking some talking points from Ed's other posts. So basically:
We keep hearing AI is going to get better and better, but these promises seem to be coming from a mix of companies engaging in wild speculation and lying.
ChatGPT, the industry-leading large language model, has not materially improved in 18 months. For something that claims to be getting exponentially better, it sure is the same shit.
Hallucinations appear to be an inherent aspect of the technology. Since it's based on statistics and the AI doesn't know anything, it can never know what is true. How could I possibly trust it to get any real work done if I can't rely on its output? If I have to fact-check everything it says, I might as well do the work myself.
For "real" AI that does know what is true to exist, we would have to discover new concepts in psychology, math, and computing, which OpenAI is not working on, and seemingly no other AI companies are either.
OpenAI has already seemingly slurped up all the data on the open web. ChatGPT-5 would take 5x more training data than ChatGPT-4. Where is this data coming from, exactly?
Since improvement appears to have ground to a halt, what if this is it? What if ChatGPT-4 is as good as LLMs can ever be? What use is it?
As Jim Covello, a leading semiconductor analyst at Goldman Sachs, said (on page 10, and that's big finance, so you know they only care about money): if tech companies are spending a trillion dollars to build up the infrastructure to support AI, what trillion-dollar problem is it meant to solve? AI companies have a unique talent for burning venture capital, and it's unclear if OpenAI will be able to survive more than a few years unless everyone suddenly adopts it all at once. (Hey, didn't crypto and the metaverse also require spontaneous mass adoption to make sense?)
There is no problem that current AI is a solution to. Consumer tech is basically solved; normal people don't need more tech than a laptop and a smartphone. Big tech has run out of innovations, and they're desperately looking for the next thing to sell. It happened with the metaverse and it's happening again.
In summary:
AI hasn't materially improved since the launch of ChatGPT-4, which wasn't that big of an upgrade over 3.
There is currently no technological roadmap for AI to become better than it is. (As Jim Covello notes in the Goldman Sachs report, the evolution of smartphones was openly planned years ahead of time.) The current problems are inherent to the technology, and nobody has indicated there is any way to solve them in the pipeline. We have likely reached the limits of what LLMs can do, and they still can't do much.
Don't believe AI companies when they say things are going to improve until they provide evidence. It's time for the AI shills to put up or shut up.
4K notes · View notes
noosphe-re · 1 year
Text
"There was an exchange on Twitter a while back where someone said, ‘What is artificial intelligence?' And someone else said, 'A poor choice of words in 1954'," he says. "And, you know, they’re right. I think that if we had chosen a different phrase for it, back in the '50s, we might have avoided a lot of the confusion that we're having now." So if he had to invent a term, what would it be? His answer is instant: applied statistics. "It's genuinely amazing that...these sorts of things can be extracted from a statistical analysis of a large body of text," he says. But, in his view, that doesn't make the tools intelligent. Applied statistics is a far more precise descriptor, "but no one wants to use that term, because it's not as sexy".
'The machines we have now are not conscious', Lunch with the FT, Ted Chiang, by Madhumita Murgia, 3 June/4 June 2023
20K notes · View notes