#ChatGPT is not a search engine and not a source
Okay so instead of checking a search engine, he asked an AI that literally could have told him anything, like to buy stamps at the grocery store which would not have worked.
And instead of finding out post offices exist and asking around there, he just... bought 20 stamps, and then instead of saving them, he just... threw them away? Most USA stamps you can buy are "forever stamps," meaning whatever you pay for them now, the stamp should still be valid to send the same sort of parcel even 10 years from now.
And then instead of just... dropping the letter in the post box or, again, taking it to the post office, or just... PUTTING IT IN HIS OFFICE'S OUTGOING MAIL BOX... the guy had to have it handled by someone else.
And then either he mistyped "disadvantage" as "advantage" in that last bit or he really thinks not knowing how to send paper mail is a flex.
Like, I get it. I didn't send much paper mail as a kid, and when I did, my parents handled things for me, so when I started sending letters and packages, I had to brush up on how to send snail mail even though it was taught to me in school, because there's lots of steps and it's not intuitive.
There's no shame in not knowing how to send mail, no matter how young or old you are. But there IS shame in defaulting to an AI instead of another human being. If you know what ChatGPT is, you should also know what a search engine is. Please start with a search engine or a human being. An AI is literally stupider than you are, and if it gives you a correct answer, that's by the grace of God and not because ChatGPT knows what it's doing.
I'm young, so I'm the default tech support in my house. It's annoying. I get mad at my dad for not being able to look up simple solutions to his own problems, like how to set which software opens an MP4 by default on his Windows 11 Laptop. I'm annoyed that he thinks all my knowledge of Windows 10 should give me perfect answers when I've only been operating Windows 11 since January of this year and I don't do the same stuff on my laptop as he does.
But you can bet your ASS that as angry as I am that he cannot look at google first and THEN come to me if he still can't solve it, I am SO FUCKING GRATEFUL that in lieu of a search engine, I'm his go-to for tech problems and NOT FUCKING CHATGPT.
I get it. It's embarrassing not to know how to solve what seem like simple problems by yourself, especially when you work in a fast-paced professional environment.
But my ass with my ADHD and inability to ask for help STILL has the wherewithal to go "I don't know this answer. But I know what the answer ought to look like and I have an idea of what steps I'll need to take to find out." I know which search results are going to have a high amount of accuracy when answering my question, I know which results are ads or irrelevant, I know how to tell when software is malware or at the very least sketchy and I know, approximately, which sites are legitimate and which aren't, even without my extensions telling me. And I understand these are all skills. But it baffles me that we've come so far from the strategies of "how to find things out" going from the very short list of "Find a reputable search engine and read through the results to find an answer you think you can trust and start from there, and if that fails, try a library or another human being." to "Ask the idiot liar box that people use to generate goth anime girl tiddies. It doesn't know how to count how many Rs are in the word Strawberry, but you should definitely trust the idiot liar box to tell you how to fix a leak or change a tire or how to send paper mail."
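That strawberry jab, for the record, refers to a real and widely reproduced failure mode, and the contrast is stark: the deterministic version of the task is one line of ordinary code.

```python
# Counting letters is a trivially deterministic task -- no model needed.
print("strawberry".count("r"))  # 3
```

A language model predicts plausible-sounding text; it does not actually run this kind of computation unless it has been wired up to a tool that does.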
AI is not magic and it's definitely not intelligence. It is a massive network of computers that has access to human knowledge, yes, but also the entire rest of the internet, including the lies, the probably lies, the unvetted claims, the trolls, the fake science, the hearsay, the rumors. It's one thing to think to yourself "I don't know how to change a tire, I should ask the mechanic if he'll teach me." and entirely another thing to think to yourself "I don't know how to change a tire. I should ask my 4-year-old niece, this book from the 1500s, and that one dentist in a different country if they know how. Surely among all three of these sources someone will have an answer."
If you wouldn't ask a child, you probably shouldn't ask ChatGPT. It's like firing a shotgun loaded with buckshot at an encyclopedia. Yeah, there's a chance your spray pattern will assemble into something like the correct answer. But it's just as likely that you're going to get nothing but junk.
It's really something just how impressively stupid all these self-styled "entrepreneurs" are.
#Sincerely hope that guy is trolling#ChatGPT is not a search engine and not a source#Tip: I am so fucking mad
28K notes
today in work with bri:
- had a customer who said she wanted to cancel her order because she asked chatgpt if we were a scam and i just stared at it for a minute
#ended up including a polite ‘i would recommend taking anything presented by chatgpt with a grain of salt as it is not a legitimate search-#engine & generative ai tends to scramble many potentially conflicting sources of text together.’#bri.txt#the thing abt my company too is like… i’m not going to say it’s cheap but it does spell out all the fees associated you just have to like.#Read. you’re given options to opt out of every extra service possible and ppl just do not read this and then opt in and get mad at us.#and call us a scam bc They made an oopsie.#(we also get called a scam bc we are not their SoS. our client agreement specifically states us as third party. Please read before spending#hundreds of dollars on something.)
3 notes
heartbreaking: artist you follow is bizarrely pro-ai
#hey look something original#i might have to unfollow them bc that stance pisses me off#like even if you dont care about it being used in art#kids relying on chatgpt/using it as a search engine is a real issue#it straight up makes up sources!#it doesnt have actual knowledge its GENERATIVE#it just says stuff that algorithmically sounds good
1 note
Honestly I'm so used to resorting to it as a tool to wade through search results when I'm looking for very specific information that I forgot that people use it to actually write stuff.
We are heading towards an absolute knowledge black hole, aren't we?
#its supposed to be a search engine tool and time saver#the fact that people use it to bs creative projects and write whole papers for them is absolutely wild#idk maybe im part of the problem but with how bloated search results for everything are these days#its a lot easier when looking for studies on the frequency of species vicariance vs species dispersal to just#look at the sources chatGPT cites in its dumb little summary to see if they're actually relevant and useful#than to scroll through pages and pages of stuff that mention it in one or two sentences#using that thing like a less reliable wikipedia#aka looking for the bits of information that I actually want and then going to the cited source to actually get relevant info
84K notes
Chatbots were generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead. Premium chatbots provided more confidently incorrect answers than their free counterparts. Multiple chatbots seemed to bypass Robot Exclusion Protocol preferences. Generative search tools fabricated links and cited syndicated and copied versions of articles. Content licensing deals with news sources provided no guarantee of accurate citation in chatbot responses. Our findings were consistent with our previous study, indicating that our observations are not just a ChatGPT problem, but rather recur across all the prominent generative search tools that we tested.
6 March 2025
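For context on the Robot Exclusion Protocol mentioned in that excerpt: it is nothing more than a plain-text robots.txt file served from a site's root, and compliance is entirely voluntary, which is the whole problem. A minimal sketch (GPTBot and CCBot are the publicly documented user-agent names for OpenAI's and Common Crawl's crawlers; example.com is a placeholder):

```
# https://example.com/robots.txt
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
```

A crawler that ignores this file faces no technical barrier at all; the protocol only works when the crawler chooses to honor it.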
123 notes
Hey, uh, I was scrolling your blog and… you know ChatGPT isn’t even remotely a search engine, right? It just strings together words in a way that looks like it makes sense and matches the theme of your prompt. And if ChatGPT provides sources, when you check them they will either be fake or not actually relevant to what it’s saying
I had some doubts, but then it became very clear that chatgpt is a fraud when I asked it how tall Odysseus is and it said something along the lines of "his height is not mentioned in the odyssey but he must be 6 foot"
No chatgpt... ody is 5 foot smth. He is TINY. HE IS OUR BELOVED SHORT KING. Here's an actual fact that was easy to find on the internet

Lesson learned: never trust AI blindly without asking for sources and double-checking the info you get
#i think ody paid chatgpt to spread this propaganda#tall ody propaganda#short king#epic the musical#epic#the odyssey#the iliad#odysseus#he's so silly#i will meme about this scene its just so funny#that “no longer mislead by appearances” was SAVAGE#bro's saying ody looks dumb and short and pathetic until he starts speaking 😭😭💀#funny
152 notes
Taking your face gently into my hands
Sweetie pie
Kisses your forehead tenderly
Sugar pickle
Caresses your cheeks
Stop treating chatgpt as a search engine.
Seriously, just stop. It's not a search engine. It's an AI that more often than not makes up answers. Tells you the wrong definition of a word. Claims another doesn't exist. Gives you the wrong meaning for names or locations or history. Gives incorrect, and even dangerous, advice regarding health or proper care for humans, plants, or animals.
It's not reliable in any way, shape, or form.
Please, I am begging you on my knees, stop blindly believing anything AI tells you. Learn to research on your own. It's a valuable skill! Stop using Google--it's hard to get past the AI and sponsored results these days--and try a different search engine like DuckDuckGo. It doesn't give you bullshit, it just gives you results.
Please please please stop elevating AIs to this level of trustworthiness that is absolutely not deserved. They are not helpful, they are not accurate, they are stupid programs that can mislead you which can be harmlessly embarrassing, or dangerously toxic.
Researching is becoming a lost skill these days, and if you don't know how to do it properly, ASK. I am the research queen in my house, and used to write articles for a living using any tiny tidbit I found online. (From verified sources, of course.)
Stop giving AI more power than it already has. It's not helping you, it's filling your head with inaccurate information, which is really hard to unlearn down the line. And it's only going to get worse the more people lean into it and only believe what it tells them.
It's not smart just because it's a computer. Stop giving it so much credit.
69 notes
If you're still defending ChatGPT, please consider the following
I feel like the angle of “fuck ChatGPT” that isn’t given enough attention is the traffic it draws away from the websites it’s ripping information from.
Now Google does this already. If you google anything now, Google’s “Search Labs AI Overview” will give you what you’re looking for summarized without having to scroll and click on various links.
Google has been up to some shady shit for years redirecting traffic. It used to just be a search engine—you used it to find the non-Google-owned websites that you needed. Now, though, Google is doing everything it can to keep you on Google.
If you search up a product, Google fills half the page with ad-placement links, image search results, the AI overview, Google reviews, and other noise so you stay with them.
Less traffic to the websites it’s ripping information from means those websites don’t get any profit for the information they’re providing, through ad revenue and such.
You might not care whether XYZ website makes money off you, but hosting and maintaining a website costs money, and if there’s no return on investment because search engines are gobbling it up, it’s that much harder for these businesses to stay afloat and not either bury their info under ads, or stick it behind a paywall.
Enter ChatGPT.
Remember when your teacher slandered Wikipedia to kingdom come? “Don’t use Wikipedia! It’s not a reliable source of information!”
Wikipedia, at the very least, demands sources for all of its information, and scroll to the bottom of any article to find exactly where the people who wrote it are getting their evidence.
Google might show you the most relevant link, one out of the thousands of possible search results. It’s foggy here so I googled a question about fog, and beneath the AI overview was a link to NOAA, for example.
ChatGPT goes one step further. ChatGPT does not fact-check. All it does is answer your questions, but those answers--where they come from and how correct they are--are pulled from the Internet. And if the Internet's most popular results are wrong, ChatGPT isn't going to care; it's just going to regurgitate what those results are saying.
You still have to check your sources, but the difference is, now you don’t know who those sources are, or how biased their information is.
If I want to know the morality of declawing my cats, and I google “should I declaw my cats” I might get two very different answers. The ASPCA will tell you it’s harmful to the cats and animal abuse. “SuburbanBoomerCatMoms.net” (hypothetical) might tell you you gotta do it for the sake of your furniture and that outcry from the other side is just leftwing snowflake libtards.
Wikipedia will cite both of those sources (actually Wikipedia won’t give the second one any credence at all but hypothetically speaking). Google will show you the URL before you click it to both of those sources. ChatGPT will steal from both of those sources, not tell you it’s doing it, and present it as if both carry equal authority on animal care. Do you see the issue here?
Saying all of this because I know somebody who’s putting their medical information into ChatGPT to “translate” it for them.
Anyone who googles any symptom finds out pretty quick that “you have cancer” is a very popular and overblown answer. At this point I have to brush them off because arguing with them won’t change their mind.
So even if you don’t care about information diversity, sustained by giving those smaller websites traffic and revenue, you should care about the smoke and mirrors being very quickly normalized around hiding where information is coming from and making it that much harder to sort fact from fiction.
The Robot cannot be an unbiased source of information, because it is trained on the biases of the information it is scalping. It is your job, your responsibility, and your right to know and understand the ulterior motives behind what any one source is telling you.
58 notes
"EVERYTHING IS NORMAL" "THEY'RE █████████ THE INTERNET!"

Sometimes you'll notice changes being quietly made to your favorite internet services. Be it a video platform, your search engine of choice, your favorite social network, or wherever you go to buy the things you need or want. Sometimes you'll also see changes in advertisements that were, suspiciously, only relevant to your own unique situation.
You know it in your gut that something definitely changed, but it was only worth mentioning in conversation. "This changed for me, did it happen to you too?" Some of these changes are experienced by everyone all at once, but others are limited to specific groups, and sometimes are rolled out in staggered waves, meaning only some people are affected at different points in time. By the time the change is fully implemented - when every person targeted for this change is affected - it doesn't even matter anymore. The companies making these changes could report them publicly if they wanted to, and all people could do in response is be annoyed by it but eventually accept it and move on. The idea of "boiling the frog" comes to mind.
Our services have been getting worse in some ways, better in others, but there's undoubtedly some changes that are bad for everyone but the companies supplying these internet services (and sometimes, secretly, the governments of various countries around the world).
For me, personally, I've noticed changes to Meta (Facebook), to Google (and its services, Google Maps and YouTube), to ChatGPT, to Twitter - oh sorry, to "X", and many more. These changes are relatively small and are mostly unnoticeable... but I noticed them, just like all the other little changes they've quietly rolled out over the years. However, these changes feel a bit more insidious.
With Meta (Facebook for me), it was that they started suppressing accounts that frequently posted political content. This became most obvious during and after the 2024 election.
With Google, it was how it seems to bury certain content that's relevant to your search, such as proof - one way or another - of something happening with our politicians that's valuable knowledge for the public, but apparently isn't relevant enough to appear on the very first page (or is simply hidden away entirely). And that's not even mentioning that Google modified its maps service so the Gulf of Mexico now reads the Gulf of America...
With YouTube, it's how it prioritizes click-bait, rage-bait, and heavily one-sided discussions of political topics, rather than pushing the proof (or at least the best available evidence) of what these overblown discussions are actually about. It's clear they're prioritizing watch time and engagement instead of truth.
With ChatGPT, I knew they had to control their generative text AI behind-the-scenes for certain situations (naturally you don't want your service to be generating stuff like "kill yourself," hate speech, lies, etc...), but recently it seemed to change its sources when looking up news online, to the point that it now paints a favorable image of Trump and his people.
And Twitter... sorry, with X... well, I shouldn't even need to explain this one, but I will try. The richest man in the world bought Twitter, changed how some of the back-end works, dramatically changed which voices were suppressed and which ones were heard, allowing hate speech and misinformation to spread freely on the platform, even promoting misinformation directly by retweeting it... there's a lot to it, but just know that Twitter used to be less shitty than it is now. Now it's really bad.
The point I'm making is that a lot of these changes happened around or soon after the 2024 election, and the people controlling these companies showed up to Trump's inauguration. On top of their million-dollar donations to Trump, they're also doing work on his behalf to mask the awful things he and his people are doing while simultaneously promoting the things that make them look good. In short, information is becoming less accessible.
All of this, of course, is ignoring what Trump and his people have done to our government-provided websites and services, like removing the constitution and more from whitehouse.gov, how they're scrubbing decades of data from the CDC, etc...
The worst part about all this is I don't know if I could even prove anything anymore. These changes have made it difficult to know what services can be trusted going forward.
These are terrifying times. If the censorship was bad before, it's so much worse now.


Although I'd usually go out and protest with these signs, I've decided not to do it with these ones. I'd practically be an actor or an NPC, repeating the same visual joke over and over. These are my first signs I won't protest with. At least, for now.
Nonetheless, don't forget to fly your flags upside-down, boys and girls and non-binary types. Stay safe, and fuck Trump & Co!

#trump#maga#fuck maga#trump administration#elon musk#art#artwork#protest#america#fuck trump#fuck elon#fuck elon musk#artists of tumblr#traditional art#usa#philosophy#debate#morality#story#resistance#us politics#elongated muskrat#lgbtqia#lgbt#lgbtq#american politics#seek truth
47 notes
How to use search engines effectively in the year of our lord 2025
So, we all know Google sucks now. There are some alternative search engines, but honestly, switching search engines isn't going to fix a lot of the fundamental issues we're seeing with search engines nowadays. But yesterday, while responding to someone on reddit who was making the argument of "Google sucks now, so really, how much worse is it to just ask ChatGPT", I realized that there is actually a formula for using search engines that continues to work perfectly for most things despite the fact that Google sucks now, so I thought I would share.
First of all, to remove all of the AI bullshit from Google, you can use udm14.com instead, or install the udm=14 browser extension. The method I outline here may or may not work with search engines other than Google, I haven't looked into them deeply enough. udm14.com should be essentially just Google, but without AI.
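For what it's worth, udm=14 is just a URL parameter that asks Google for the plain "Web" results tab. A small sketch of building such a link yourself, assuming the parameter keeps working the way it is publicly documented today:

```python
from urllib.parse import urlencode

def web_only_google_url(query: str) -> str:
    """Build a Google search URL with udm=14, which requests the
    plain 'Web' results tab (no AI Overview, no answer boxes)."""
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(web_only_google_url("how to send a letter usps"))
```

This is exactly what udm14.com and the browser extensions do for you automatically: append the parameter to every search.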
Then, we have to go back to the beginning and understand what a search engine actually is, and what it isn't. I spent three years of my career working in the guts of a search engine (not Google, or any other web-based search engine), so I should hope I would know what they are:
A search engine is a tool to locate documents.
Google in particular has done a lot to obscure what a search engine actually is by adding a lot of "cool" "features" to their search engine which are not actually within the scope of search engine capabilities. When you search for a question and Google displays a bolded answer that it found on a web page? Not search engine provenance. When it displays its "AI Summary"? Not search engine provenance. When it advertises things to you? Not search engine provenance. When it comes up with questions that "other people asked"? Not search engine provenance. The core competency of a search engine is to find documents (in this case, web pages) from a large collection of documents (the internet) based on their relevance to a query you have typed. Just like people are misusing ChatGPT to do stuff it was not designed for and that it is not good at, using a search engine as if it is a question answering service that can deliver the answer to a question you asked is using the search engine to do something it was not designed for and is not good at.
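To make "find documents based on their relevance to a query" concrete, here is a toy sketch of the data structure at the heart of most search engines, an inverted index. Real engines add ranking, stemming, and enormous scale; the two "documents" below are made up for illustration:

```python
from collections import defaultdict

docs = {
    "usps-guide": "how to send paper mail and buy forever stamps",
    "cat-care": "declawing cats is harmful according to the aspca",
}

# Inverted index: each term maps to the set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query: str) -> set:
    """Return ids of documents that contain every term in the query."""
    terms = query.lower().split()
    results = set(index[terms[0]])
    for term in terms[1:]:
        results &= index[term]
    return results

print(search("forever stamps"))  # {'usps-guide'}
```

Note what is absent: nothing here answers a question or summarizes anything. The engine only locates documents; reading them is still your job.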
The search engine is not an all-in-one tool any more than ChatGPT is an all-in-one tool. Research is a multi-step process that involves a search engine, but the search engine cannot do everything for you. Here is the process:
1. Learn how to identify reliable sources of information. Learn what sites tend to have reliable information about the topic you're looking up. Wikipedia is a good fallback that may give you links to other reliable sources. You can also ask people who know more about your topic for recommendations of good sites. There are also sites that rank the reliability and bias of other popular sites. The search engine's ability to find relevant documents is not super useful when the internet is full of untrustworthy bullshit and is becoming more so as time goes on due to AI-generated content. Just because a search engine returns a link does not mean it is reliable.
2. Use a search engine to specifically search just the websites you know are reliable for your topic. Google has some documentation about how to do this on their search engine here. There should be a way to do this on any other half-decent search engine, as well, but I don't have the details of how to do it. Now you have limited your scope from "anything and everything produced by everyone who has ever created a Wordpress account plus whoever paid Google to have their site appear in every single search" to a collection of documents that you can trust.
3. Read the sources that you get back from the search engine. No, seriously. Read them. Don't read Google's "AI Summary". Read the actual sources. Don't read the bolded answer Google put at the top of the results list. Read the sources. Don't ask another AI to summarize the sources for you. Read the sources. Don't just read the headline or title and assume you now know everything that is in the body of the article. READ THE SOURCES. There is no shortcut for this, you have to read.
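The site-restricted search described above boils down to Google's documented site: operator. The helper below is a hypothetical convenience for composing such queries, not an official API, and the domains are just examples:

```python
def site_restricted_query(query: str, domains: list) -> str:
    """Compose a search query scoped to trusted domains
    using the site: operator, OR-ed together."""
    scope = " OR ".join(f"site:{d}" for d in domains)
    return f"({scope}) {query}"

print(site_restricted_query("fog formation", ["noaa.gov", "weather.gov"]))
# (site:noaa.gov OR site:weather.gov) fog formation
```

Paste the resulting string into the search box and the engine only returns documents from the domains you already vetted.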
There was a time when you could get away with being lax about this and just do general searches, but that was because there was an actual limit on the amount of wrong information that mere humans could generate per unit time, and also because Google did legitimately use to be more concerned with promoting reliable sources than with promoting whoever paid them the most money to do so. But that time is over.
Basically, if you wouldn't just type your question into ChatGPT and hope for the best, don't just type your question into Google and hope for the best, either.
20 notes
Rant about generative AI in education and in general under the cut because I'm worried and frustrated and I needed to write it out in a small essay:
So, context: I am a teacher in Belgium, Flanders. I am now teaching English (as a second language), but have also taught history and Dutch (as a native language). All in secondary education, ages 12-16.
More and more I see educational experts endorse ai being used in education and of course the most used tools are the free, generative ones. Today, one of the colleagues responsible for the IT of my school went to an educational lecture where they once again vouched for the use of ai.
Now their keyword is that it should always be used in a responsible manner, but the issue is... can it be?
1. Environmentally speaking, ai has been a nightmare. Not only does it have an alarming impact on emission levels, but also on the toxic waste that's left behind. Not to mention the scarcity of GPUs caused by the surge of ai in the past few years. Even sources that would vouch for ai have raised concerns about the impact it has on our collective health. sources: here, here and here
2. Then there's the issue with what the tools are trained on and this in multiple ways:
Many of the free tools that the public uses are trained on content available across the internet. However, it is at this point common knowledge (I'd hope) that most creators of the original content (writers, artists, other creative content creators, researchers, etc.) were never asked for permission, and so it has all been stolen. Many social media platforms will allow ai training on their content without explicitly telling the user-base, or will push it as the default setting and make it difficult for their user-base to opt out. DeviantArt, for example, lost much of its reputation when it implemented such a policy and had to backtrack in 2022 because of the overwhelming backlash. The problem is then that since the content has been ripped from its context and is no longer made by a human, many governments can no longer see it as copyrighted. Which, yes, luckily also means that ai users are legally often not allowed to pass off ai as 'their own creation'. Sources: here, here
Then there's the working of generative ai in general. As said before, it simply rips words or image parts from their original, nuanced context and meshes them together without the user being able to accurately trace back where the info is coming from. A tool like ChatGPT is not a search engine, yet many people use it that way without realising it is not the same thing at all. More on the working of generative ai in detail. Because of how it works, there is always a chance for things to be biased and/or inaccurate. If a tool has been trained on social media sources (which ChatGPT, for example, is), then its responses can easily be skewed to the demographic it's been observing. Bias is an issue in most sources when doing research, but if you have the original source, you also have the context of the source. Ai strips away that original context, so bias can be overlooked and go unnoticed much more easily. Source: here
3. Something my colleague mentioned they said in the lecture is that ai tools can be used to help the learning of the students.
Let me start off by saying that I can understand why there is an appeal to ai when you do not know much about the issues I have already mentioned. I am very aware it is probably too late to fully stop the wave of ai tools being published.
There are certain uses to types of ai that can indeed help with accessibility. Such as text-to-voice or the other way around for people with disabilities (let's hope the voice was ethically begotten).
But many of the other uses mentioned in the lecture I have concerns with. They have to do with recognising learning, studying and wellbeing patterns of students. Not only do I doubt it is really possible to data-fy the complexity of each and every student you have, as they are still actively developing as young people; it also poses privacy risks in case the data is ever compromised. Not to mention that ai is often still faulty and, as it is not a person, will often make mistakes when faced with how unpredictable a human brain can be. We do not all follow predictable patterns.
The lecture stated that ai tools could help with neurodivergency 'issues'. Obviously I do not speak for others and this next part is purely personal opinion, but I do think it important to nuance this: as someone with auDHD, no ai-tool has been able to help me with my executive dysfunction in the long-term. At first, there is the novelty of the app or tool and I am very motivated. They are often in the form of over-elaborate to-do lists with scheduled alarms. And then the issue arises: the ai tries to train itself on my presented routine... except I don't have one. There is no routine to train itself on, because that is my very problem I am struggling with. Very quickly it always becomes clear that the ai doesn't understand this the way a human mind would. A professionally trained in psychology/therapy human mind. And all I was ever left with was the feeling of even more frustration.
In my opinion, what would help far more than any ai tool would be the funding of mental health care, making it so that going to a therapist or psychiatrist or coach is covered by health care the way my doctor's visits are, where I only pay 5 euros and my health care provider pays the rest (in Belgium). This would make mental health care much more accessible and would have a greater impact than faulty ai tools.
4. It was also said that ai could help students with creative assignments and preparing for spoken interactions both in their native language as well as in the learning of a new one.
I wholeheartedly disagree. Creativity in its essence is about the person creating something from their own mind and putting the effort in to translate those ideas into their medium of choice. Stick figures on lined course paper are more creative than letting a tool like Midjourney generate an image based on stolen content. How are we teaching students to be creative when we allow them to not put a thought in what they want to say and let an ai do it for them?
And since many of these tools are also faulty and biased in their content, how could they accurately replace conversations with real people? Ai cannot fully understand the complexities of language and all the nuances of the contexts around it. Body language, word choice, tone, volume, regional differences, etc.
And as a language teacher, I can truly say there is nothing more frustrating than wanting to assess the writing level of my students, giving them a writing assignment where they need to express their opinion in two tiny paragraphs... and getting an ai response back. Before anyone comes to me saying that my students may simply be very good at English: indeed, but my current students are not. They are precious, but their English skills are very flawed. It is very easy to see whether they wrote it or ChatGPT did. It is not only frustrating to not be able to trust part of your students' honesty, knowing they learned nothing from the assignment because you can't give any feedback; it is almost offensive that they think I wouldn't notice.
5. Apparently, it was mentioned in the lecture that in schools where ai is currently banned, students are fearful that their jobs will be taken away by ai, and that in schools where ai was allowed, students had much more positive interactions with technology.
First off, I was not able to see the source and data that this statement was based on. However, I personally cannot shake the feeling there's a data bias in there. Of course students will feel more positively towards ai if they're not told about all the concerns around it.
Secondly, the fact that the lecture (reportedly) framed the fear of losing your job to ai as unfounded is... infuriating. Because it is already becoming a reality. Let's not forget what partially caused the SAG-AFTRA strike in 2023. Corporations see an easy (read: cheap) way to get marketable content by using ai at the cost of creative professionals. Unregulated ai use by businesses costing real humans their jobs is very much a threat. Dismissing this is basically lying to young students.
6. My conclusion:
I am frustrated. It's clamoured that we, as teachers, should educate more about ai and its responsible use. At the same time, however, the many concerns and issues around most of the accessible ai tools are swept under the rug and not actively talked about.
I find the constant rise of generative ai everywhere very concerning, and I can only hope that more people will start seeing it too.
Thank you for reading.
49 notes
Text
I just want y'all to keep in mind the two planets that have rulership over Aquarius: Saturn and Uranus.
If we're gonna see themes of anything extra under this major cycle (or certain under/over tones of certain possible events), watch for what signs Saturn and Uranus are gonna be in.
I have a keen feeling about the Saturn Uranus conjunction in late Gemini early Cancer come 2032. My guesses?
AI gets regulated and used for internet search purposes. Think of OpenAI/ChatGPT being used sorta like Google is now, but a lot more credible, with the people behind it doing a background check for credible sources each time a new website comes online. Lotta scrutiny and a stern, educationally buffed eye. It'll come built into a browser like a search engine (the first thing you see type deal, not built into the side like you'd see with OperaGX).
Home libraries will be recognized as public libraries possibly? As people return to or begin to collect more analog products, I feel like those with an abundance who wish to share it can create a local library within a neighborhood. You know the community centers that some neighborhoods have? Yeah, implement a neighborhood library perhaps. (So you have the playground, the pool, the HOA office, and a library (bonuses if it also has a little cafe truck next to it. IMO there's gonna be a lot of BYOC/T stuff happening).) (BYOC/T = Bring Your Own Coffee/Tea)
Homebuyers revolution = I don't know why I think of the Saturn-Uranus conjunction in that manner. Will things get much cheaper or much easier then?
Yeah nah these are channeled things to come from those planets.
Also doesn't help that my Saturn return happens around that time (or a bit after).
Pray for me when 2032 happens.
(edit: while I'm not the exact first poster on this subject of Uranus in Cancer, I'm one of the first FEW, WOHOO)
Like and repost maybe? 💀
#astrology observations#astro notes#pluto in aquarius#astro community#astrology#astro observations#uranus in cancer
17 notes
Text
"all I'll say" lol I'm full of shit y'all know I never shut up. I do actually want to talk about how I use chatGPT, I just don't wanna field tiresome arguments about souls and the inherent virtue of toil or whatever.
And no, I'm not using it to write for me. I'm with everyone on the "if no one cared enough to write it then I don't care to read it" thing. But if you understand its limitations and know how to prompt it, it really is an incredible tool for research.
Most recently I had it collate information so I could create history-based currency systems in the setting of Tamriel. With only books and search engines at my disposal, this might have taken me WEEKS. A keyword based search will bring up anything containing a phrase, e.g. "bullion currency," and those sources might not actually answer my questions.
Plus, when I ran into an economic concept I didn't understand, I could ask it to define the terms and then rephrase it if I wasn't grasping the definition.
And lastly I can make it provide sources. I use these for further reading (again saving hours of searching and avoiding dead ends) and also to verify the info it gives me. Yes, ChatGPT is sometimes wrong, but that was also always true of the shit you found on Google.
I did all this over the course of a day so I could find some historical grounds for Tanis doing stupid sleazy arbitrage schemes in Skyrim. I'm having fun! Chatgpt is fucking fun!! Just try it you won't go to hell because hell isn't real!!!!
7 notes
Text
Reminder to people to Stop Using AI. Full stop. Cease. AI generated images, text, sound: it's all built off of stolen data and is environmentally destructive due to the power needed for it. The amount of material scraped off the internet, the massive amounts of data needed for LLMs to work, it's all built off other people's work.

The data scraping doesn't know the difference between copyrighted and public domain, nor between illegal abuse material and regular images. Real people have to manually go through the collected data to remove abuse material from the datasets, and not only are they paid horribly for their work, they are often traumatized by what they see. That fancy ai art? Built off data that includes abuse material, CSA material, copyrighted material, material not meant to be used in this fashion. It's vile.
You don't need to use ai images, you Should Not Use them at all. Choosing ai over real artists shows you are cheap and scummy.
I would rather deal with something drawn by a real person that is 'poorly drawn', I would rather have stick figures scribbled on a napkin, I would rather have stream of consciousness text to speech from a real person than ai slop.
I see a lot of people using ai images in their posts to spread fundraiser requests. Stop that. You don't need ai art to sell your message better, in fact the ai images come off as cheap and scammy and inauthentic.
Chatgpt is not a reliable source of information. It is not a search engine or a translator, so do not use it as such. Don't use it at all. Period. Full stop. Stop feeding the machine. Stop willingly feeding your writing, art, images, voice, or anything else into their datasets.
AI bullshit is being shoved in my face at every turn and I'm tired of seeing it.
18 notes
Text
Now I understand not liking ChatGPT or Grok or Peebo or Wuwu or whatever generative AI system is the outrage of the week due to their factual unreliability and propensity to spread easily rebutted misinformation and whatnot, but...why are we lumping in character.ai with those...? Like, I don't know any people who are using a Bakugo Katsuki roleplay bot as a search engine or a news source. And if they are that is admittedly extremely funny. I don't know, it's just a little weird for me to see something that's clearly meant to be used as entertainment lumped in with Misinformation Machines. Are we lumping Cleverbot in with them, as well?
#Epitaph#Also honestly I do not think telling ChatGPT “Write me fanfiction about Patrick Bateman getting his period” is some grand moral sin.#It can obviously be used for some pretty heinous things but can we please just be real for a minute.#If you use AI as a search engine then that's really just...not good. But why are we up in arms about using AI for stupid fun?#This isn't an “all or nothing” situation. Grow up.
8 notes
Text
I still don't get why people are asking chatgpt, or whatever ai bullshit they use, questions when internet search engines are right there. Like, I realize internet search is not as reliable as it used to be, but at least you can check the source. How do you ask ai a question and trust the results? And why are you wasting all the resources ai uses? Like why why why
15 notes