#how to use gemini google
Explore tagged Tumblr posts
kamalkafir-blog · 17 days ago
Text
Photos will turn into videos in just 3 steps! Work wonders with Google Gemini, here's how
[NEWS] Google Gemini: If you have an Android phone and use Google Gemini on it, a great new feature has arrived for you. With Gemini's help, you can now turn your ordinary photos into a striking 8-second video, audio included, in just a few minutes. Google explained on its blog that the new feature is based on its powerful video-generation model Veo 3, which not only animates photos…
0 notes
autumnoakes · 2 months ago
Text
okay but like here's the thing with chatgpt. outside of the negative environmental impact chatGPT and similar programs have, there is also a high chance that whatever you have it do for you is wrong. sometimes it is something small and inconsequential, but it can also be dangerously wrong. because what it does is it's looking on the internet for a source, and it will give you whatever source it finds (seemingly at random? i'm not 100% sure how chatGPT works but it seems very much like it's just picking something and spitting it out). i'm not even that old, i'm still around the average college age, but i remember commercials and ads telling people not to trust things you see on the internet because you don't know what is or isn't true. you might come across something that seems entirely legit, but is complete and total bullshit. that is something to keep in mind when using chatGPT that i think people don't consider. it may be wrong. it very likely is wrong. you won't be certain that it's wrong unless you ask it for its sources and examine them yourself, and at that point you may as well just do your own research without AI anyway.
17 notes · View notes
mangled-by-disuse · 8 months ago
Text
it is just FASCINATING to me how the ads for Google Gemini seem entirely dedicated to "What's the single least useful thing we could suggest using GenAI for?"
Planning a date! Planning a holiday! Writing a cover letter for a job application! Designing an invitation for a Christmas dinner with your friends???
like I don't think there are that many good applications for this kind of genAI but if there are they sure as fuck aren't these
10 notes · View notes
ronanceautistic · 11 months ago
Text
I'm obsessed with the stupidity of those fucking gemini ads bruh. "Can your phone do this? No! You have to rely on your own stupid google search!" that's your own search engine, give me a fucking break😭
"gemini can translate other languages quickly and accurately!" so you admit google translate doesn't do that? op? do you admit it?
6 notes · View notes
bitbinders0 · 19 days ago
Text
Google AI Search Mode: How to Use Gemini AI in Search for Better Results (2025 Guide)
In 2025, Google Search is evolving with the introduction of Gemini AI, a groundbreaking technology designed to enhance the accuracy, speed, and relevance of search results. As businesses and content creators adapt to AI-powered search, understanding how Gemini AI works is crucial for staying competitive.
At BitBinders, we break down how Google's AI Search Mode, powered by Gemini AI, changes search optimization strategies. Whether you're a marketer, business owner, or SEO professional, this guide will help you optimize your content for better visibility and lead generation.
Learn how to align your SEO strategies with AI advancements by exploring our complete Gemini AI SEO guide.
Frequently Asked Questions (FAQs)
1. What is Gemini AI?
Gemini AI is Google's latest artificial intelligence model integrated into its search engine. It enhances the understanding of user queries and delivers highly accurate, context-aware search results.
2. How does Google AI Search Mode work?
Google AI Search Mode uses Gemini AI to process search queries using advanced machine learning and natural language processing. It enables real-time analysis and delivers personalized search results based on user intent.
3. Why should SEO professionals care about Gemini AI?
Gemini AI is shaping the future of SEO. By optimizing for AI-powered search engines, businesses can improve visibility, attract more traffic, and generate higher-quality leads.
4. How can I optimize my content for Gemini AI?
Focus on:
• User intent
• Semantic SEO
• Structured data (see the sketch after these FAQs)
• Mobile and page speed optimization
• Conversational content that answers specific questions
5. Is Gemini AI the future of Google Search?
Yes. Gemini AI represents Google's commitment to AI-powered search, making it critical for businesses and marketers to adapt their SEO strategies for long-term success.
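One widely used form of structured data is schema.org JSON-LD markup. Here is a minimal sketch in Python that emits an FAQPage block; the question and answer strings are placeholders rather than content from any real page, and whether Google displays rich results for FAQ markup varies, so treat this as an illustration of the format, not a guaranteed ranking tactic.

```python
import json

# Minimal sketch: schema.org FAQPage markup expressed as JSON-LD.
# The question/answer strings below are placeholders for illustration.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is Gemini AI?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Gemini AI is Google's AI model integrated into Search.",
            },
        }
    ],
}

# Emit a <script> block ready to paste into a page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(faq_markup, indent=2))
print("</script>")
```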
0 notes
eravioli · 10 months ago
Text
I just started grad school this fall after a few years away from school and man I did not realize how dire the AI/LLM situation is in universities now. In the past few weeks:
I chatted with a classmate about how it was going to be a tight timeline on a project for a programming class. He responded "Yeah, at least if we run short on time, we can just ask chatGPT to finish it for us"
One of my professors pulled up chatGPT on the screen to show us how it can sometimes do our homework problems for us and showed how she thanks it after asking it questions "in case it takes over some day."
I asked one of my TAs in a math class to explain how a piece of code he had written worked in an assignment. He looked at it for about 15 seconds then went "I don't know, ask chatGPT"
A student in my math group insisted he was right on an answer to a problem. When I asked where he got that info, he sent me a screenshot of Google gemini giving just blatantly wrong info. He still insisted he was right when I pointed this out and refused to click into any of the actual web pages.
A different student in my math class told me he pays $20 per month for the "computational" version of chatGPT, which he uses for all of his classes and PhD research. The computational version is worth it, he says, because it is wrong "less often". He uses chatGPT for all his homework and can't figure out why he's struggling on exams.
There's a lot more, but it's really making me feel crazy. Even if it was right 100% of the time, why are you paying thousands of dollars to go to school and learn if you're just going to plug everything into a computer whenever you're asked to think??
32K notes · View notes
trading-attitude · 5 months ago
Text
youtube
18 AI tools you absolutely MUST know in 2025! 🧠🔥
🤖 Artificial intelligence is evolving at breakneck speed… what if you could take advantage of it right now? In this video, discover 18 essential AI tools (+ 2 bonuses) that will revolutionize the way you work and boost your productivity!
📌 What's covered:
✅ AI tools to automate your tasks 📊
✅ Smart assistants to optimize your workflow 💡
✅ Solutions for writing, creating, and analyzing faster than ever
🚀 Whether you're an entrepreneur, content creator, or student, these 18 AI tools will save you precious time and improve your everyday efficiency!
🔥 Don't miss out on the best AI innovations! Watch the video now and adopt these tools today!
0 notes
killyjae · 7 months ago
Text
I miss the feeling of not worrying if anything corporate looks off because I suspect it's ai generated
1 note · View note
thepixel12 · 1 year ago
Text
How Can Google Gemini Be Used Effectively For Advanced Data Analytics?
In today's fast-paced digital world, data is the new gold. But what use is gold if you can't extract it? That's where tools such as Google Gemini come into play, providing a powerful and flexible platform for converting raw information into actionable insights. If you're wondering how Google Gemini can be used for advanced data analytics, that is exactly what we are going to explore in this post, with tips on using the tool to enhance your game.
Google Gemini
Before we jump into things, it is important to first explain what Google Gemini is. It is a next-generation machine learning (ML) platform created by Google to help people who work with huge datasets manage, process, and analyze them.
It does not matter whether you are a data scientist, an analyst, or just someone trying to extract useful information from complex datasets; Google Gemini has something to assist. It simplifies the process so you can focus on the actual analysis rather than getting lost in the details. In this post we'll look at:
• Improving predictive analytics with TensorFlow
• Real-time data processing with streaming capabilities
• Enhancing data visualization to simplify communication
• User-friendly collaboration for whole workflows
• Scaling analytics efforts
• Automating routine tasks
Improving predictive analytics with TensorFlow
Predictive analytics hinges on historical data to forecast future trends, and Google Gemini is very proficient in this area. Gemini incorporates TensorFlow, the open-source ML library, giving you access to pre-built models that can be modified for your requirements. Using Gemini this way saves time and increases the accuracy and efficiency of your predictive analytics.
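Since the post names TensorFlow and pre-built models but shows no code, here is a minimal sketch of that kind of workflow in plain TensorFlow/Keras: it adapts a pre-built, pretrained model to a new prediction task on dummy data. Nothing here is a Gemini-specific API, and the dataset is stand-in random data.

```python
import numpy as np
import tensorflow as tf

# Minimal sketch: adapt a pre-built model (MobileNetV2 with pretrained
# weights) to a new binary-prediction task. Plain TensorFlow/Keras only.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet", pooling="avg"
)
base.trainable = False  # keep the pre-trained features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # new task-specific head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy "historical" data standing in for a real dataset.
x = np.random.rand(32, 96, 96, 3).astype("float32")
y = np.random.randint(0, 2, size=(32, 1))
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:2]))  # predictions from the adapted model
```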
Real-time data processing with streaming capabilities
In a world where information changes with every passing second, processing data as soon as it is received has become very important. Google Gemini's streaming capabilities make it excellent for situations where current results are vital. Whether you are watching stocks rise and fall over time or tracing what people post online at any given moment, streaming data processing lets you act on insights the moment they appear.
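The post doesn't document a specific streaming API, so the sketch below is a generic Python illustration of the pattern it describes: consume records as they arrive, maintain a rolling statistic, and react immediately. The price_stream generator is a hypothetical stand-in for a real feed.

```python
import random
import time
from collections import deque

def price_stream():
    """Hypothetical stand-in for a real feed (stock ticks, posts, etc.)."""
    price = 100.0
    while True:
        price += random.uniform(-1, 1)
        yield price
        time.sleep(0.01)

# Process each record the moment it arrives: keep a rolling average
# and react immediately when the latest value strays from it.
window = deque(maxlen=20)
for i, price in enumerate(price_stream()):
    window.append(price)
    avg = sum(window) / len(window)
    if abs(price - avg) > 1.5:
        print(f"tick {i}: price {price:.2f} deviates from avg {avg:.2f}")
    if i >= 200:  # stop the demo after a couple hundred ticks
        break
```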
Enhancing data visualization to simplify communication
A variety of data visualization tools are available within Google Gemini that let you create interactive charts, graphs, and dashboards. These visuals can be effortlessly shared with stakeholders, simplifying communication about findings and boosting fact-based decision-making. Gemini's visualization tools are also highly customizable, enabling you to tailor the visuals to your target audience.
User-friendly collaboration for whole workflows
Data analytics is usually not a one-person task; it needs different teams working together, each leveraging its own expertise. Google Gemini's cloud-based platform, which allows many users to work on the same datasets at once, makes this collaboration easier. Models, insights, or even whole workflows can be shared with your team, making it simpler for everyone to stay on the same page.
Scaling analytics efforts
As your organization grows, its data is likely to grow with it, and The Pixel can help you with the scaling process. The platform's cloud-based framework means you can easily expand your analytical capacity, whether you are working with terabytes or petabytes of information. Moreover, Gemini's pricing structure lets you pay only for the resources you use.
Automating routine tasks
An important consideration in data analytics is that some routine functions can be handled by machines. With Google Gemini, tasks such as model training and report generation can be automated without hands-on intervention, saving labor time and labor costs. The result is faster data processing, and automation also protects against human error, yielding more accurate results.
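As with the streaming example, the post names no concrete automation API, so here is a bare-bones Python sketch of the scheduled retrain-and-report loop it describes. The retrain_model and write_report functions are hypothetical placeholders for real jobs.

```python
import datetime
import time

def retrain_model():
    """Placeholder for a real training job (e.g., the Keras fit above)."""
    return {"accuracy": 0.93}  # pretend metrics

def write_report(metrics):
    """Append a timestamped line to a simple text report."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open("daily_report.txt", "a") as f:
        f.write(f"{stamp}  accuracy={metrics['accuracy']:.2f}\n")

# A bare-bones scheduler loop: retrain and report once per day,
# with no hands-on intervention. Runs until interrupted.
while True:
    write_report(retrain_model())
    time.sleep(24 * 60 * 60)  # sleep one day between runs
```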
0 notes
neuronimbusau · 1 year ago
Text
1 note · View note
hotdigitallegend · 2 months ago
Text
astro observations that feel like dropping your phone on your face // neural downloads 🌬️
• aries can surprisingly be very monk-like, like i will not speak for three days but i will build a table without nails. they’re childlike but can be very wise. people get confused by this. the idea people have of them can make them feel smaller than they are. also this depends on where mercury is. if it’s in pisces, then they’re probably more on the silent side but with fantastical imaginations.
• pisces men are like that™ because the world bullies the softness out of them. deep down they want to do things like cry at sunset but then that becomes “get a job!!” they’re trying to merge with the divine but it translates as bad communication skills and spotify playlists.
• aquarius placements get their phone in hand and suddenly their brain goes into orbit. they’re quite literally addicted to scrolling and watching. leos are also on their phone but mainly using the front camera or socializing. they just learned how to Shazam a song. and have like 7 apps.
• cardinal signs had a five year romance plan by the 10th grade. aries had an ideal type and didn’t budge until they found it. cancer was naming the kids first and foremost. capricorn scheduled the wedding. libra made a mood board for it, and an ideal traits note. they treat it like shopping.
• sag venus falls in love in 3 seconds and out in 2 - it’s like teleportation 🤣
• scorpio mercury says “i’m fine” with the same energy as someone holding a loaded g*n. they’re lying!!
• gemini mars loves a verbal foreplay olympics. flirty texts, three side convos, and they love for you to guess what they meant.
• virgo risings’ idea of fun is fixing your life while ignoring their own mental breakdown. they’ll load your dishwasher while trying not to cry
• taurus rising could sell you dirt and you’d be satisfied
• cancer mercury remembers everything especially that one thing you said in 2019 at brunch. and they forgave you…..or did they
• north node conjunct mercury means your destiny involves a lot of talking. like more than you probably want, but hey!!
• air signs fall in love after lurking on your google search history. earth signs love to see you working in your element. water signs want your birth time and for you to just…. sit on the ground with them. fire signs just want an unlimited pass to touch your face in public.
• sag moons/risings whole concept is basically i’m not avoiding my feelings i’m just traveling to a country where they can’t find me
• libra venus/moon flirt by asking deep questions and mirroring your exact personality. “do you like this?” **shifts entire identity to match**
966 notes · View notes
venusveil · 3 months ago
Text
Astrology observations
(facts about your placement)
☕︎︎I love Taurus moons they're the "Comfort" in human form. Probably smells like vanilla and security. Love it.
☕︎︎ Sagittarius Rising Walks into the room and changes the fkn vibe. Big "main character at the airport" energy.
☕︎︎ Venus in Gemini will say "I love you"… to four different people. In one week. Flirty texts. commitment nowhere in sight. It’s giving ✨emotional ADHD✨.
☕︎︎ A Moon in Aquarius will ghost you mid-breakdown and say "I needed space to process." Sir. I was crying on your porch.
☕︎︎ for a Leo mars Everything’s a performance. Every argument ends with "AND ANOTHER THING" stfu and Sit down.
☕︎︎ I actually met an Aries mercury in real life and god, zero filter. Will start beef at a funeral. Thinks "I’m just being honest" is a personality trait.
☕︎︎ Gemini Mars people are Always horny. Always chaotic. Will argue during sex and mean it. ☺︎
☕︎︎ Loving a scorpio moon is a risk. Loving them and hurting them? That’s your villain origin story. You’re never emotionally safe again. They’ll stalk you while you think no one’s watching but trust—they know everything. (my ex best friend is a scorpio moon I feel so bad for her exes) :)
☕︎︎Libra Venus are So sweet, so charming, so unavailable. They’ll string you along with cute texts and then say "but I never promised anything…" soo ME.
☕︎︎LEO RISING, Hot? Yes. Attention-seeking? Also yes. Will step on your neck for compliments. Bye.
☕︎︎ Saturn in the 1st House are Born tired. Probably aging backwards. Looks at joy like it’s suspicious. Says "no" for sport.
☕︎︎ your Aries Lilith partner is - Chaotic. Loud. Might slap you (while doing it :/ ) and call it foreplay. You didn’t know you were into that till now. Congrats on your new kink.
☕︎︎Mars in Sagittarius are Down for anything. Probably suggested something you had to Google. Hot? Yes. Emotionally present? Absolutely not.
☕︎︎ People really don’t get how hard it is to have Libra Mars and Saturn in the 10th. Deep down I know I’m supposed to build legacy be That Girl but my Mars is like "Can’t we just be hot, be adored while doing nothing but existing prettily?"
The internal war is exhausting. : )
☕︎︎ the placement I just don't like is Aquarius Mercury (sorry no sorry) :/ They’ll text you "what even is love?" then disappear for 4 days.
Their flirting style is confusing you until you fall in love. Can explain quantum physics but not their feelings.
☕︎︎ So you're a Gemini? I used to defend them like it was my full-time job. I was ready to fight the whole astrology community proving how wrong they are about gemini.
Until "THE" Gemini proved me wrong and turned me into a proud member of the Gemini-hater club.
If you're a Gemini, listen: you’re amazing. Great friend. Maybe an amazing sibling. But as a lover? NO. Just no. I’d honestly lock you in a room with a Taurus or Scorpio if I could and let the psychological warfare begin. Hehe. :/
☕︎︎ Libra placements. If we don't see ourselves being hot? It doesn't count.
.............. ♡
1K notes · View notes
bi-writes · 1 year ago
Note
whats wrong with ai?? genuinely curious <3
okay let's break it down. i'm an engineer, so i'm going to come at you from a perspective that may be different than someone else's.
i don't hate ai in every aspect. in theory, there are a lot of instances where, in fact, ai can help us do things a lot better than we could without it. here's a few examples:
ai detecting cancer
ai sorting recycling
some practical housekeeping that gemini (google ai) can do
all of the above examples are ways in which ai works with humans to do things in parallel with us. it's not overstepping--it's sorting, using pixels at a micro-level to detect abnormalities that we as humans can not, fixing a list. these are all really small, helpful ways that ai can work with us.
everything else about ai works against us. in general, ai is a huge consumer of natural resources. every prompt that you put into character.ai, chatgpt? this wastes water + energy. it's not free. a machine somewhere in the world has to swallow your prompt, call on a model to feed data into it and process more data, and then has to generate an answer for you all in a relatively short amount of time.
that is crazy expensive. someone is paying for that, and if it isn't you with your own money, it's the strain on the power grid, the water that cools the computers, the A/C that cools the data centers. and you aren't the only person using ai. chatgpt alone gets millions of users every single day, with probably thousands of prompts per second, so multiply your personal consumption by millions, and you can start to see how the picture is becoming overwhelming.
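to put rough numbers on that multiplication, here's a back-of-envelope sketch. every figure in it is an assumption for illustration only (published per-prompt estimates vary wildly), not a measurement:

```python
# Back-of-envelope sketch of the scaling argument above.
# ALL figures are ASSUMPTIONS for illustration; real estimates vary a lot.
WH_PER_PROMPT = 3.0         # assumed watt-hours of energy per prompt
ML_WATER_PER_PROMPT = 30.0  # assumed milliliters of cooling water per prompt

daily_users = 10_000_000    # assumed daily users
prompts_per_user = 10       # assumed prompts per user per day

prompts = daily_users * prompts_per_user
print(f"energy: {prompts * WH_PER_PROMPT / 1e6:.0f} MWh/day")   # Wh -> MWh
print(f"water:  {prompts * ML_WATER_PER_PROMPT / 1e6:.0f} m^3/day")  # mL -> m^3
```

even with small per-prompt numbers, the totals land in the hundreds of megawatt-hours and thousands of cubic meters of water per day. that's the point: your one prompt is cheap, a hundred million prompts are not.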
that is energy consumption alone. we haven't even talked about how problematic ai is ethically. there is currently no regulation in the united states about how ai should be developed, deployed, or used.
what does this mean for you?
it means that anything you post online is subject to data mining by an ai model (because why would they need to ask if there's no laws to stop them? wtf does it matter what it means to you to some idiot software engineer in the back room of an office making 3x your salary?). oh, that little fic you posted to wattpad that got a lot of attention? well now it's being used to teach ai how to write. oh, that sketch you made using adobe that you want to sell? adobe didn't tell you that anything you save to the cloud is now subject to being used for their ai models, so now your art is being replicated to generate ai images in photoshop, without crediting you (they have since said they don't do this...but privacy policies were never made to be human-readable, and i can't imagine they are the only company to sneakily try this). oh, your apartment just installed a new system that will use facial recognition to let their residents inside? oh, they didn't train their model with anyone but white people, so now all the black people living in that apartment building can't get into their homes. oh, you want to apply for a new job? the ai model that scans resumes learned from historical data that more men work that role than women (so the model basically thinks men are better than women), so now your resume is getting thrown out because you're a woman.
ai learns from data. and data is flawed. data is human. and as humans, we are racist, homophobic, misogynistic, transphobic, divided. so the ai models we train will learn from this. ai learns from people's creative works--their personal and artistic property. and now it's scrambling them all up to spit out generated images and written works that no one would ever want to read (because it's no longer a labor of love), and they're using that to make money. they're profiting off of people, and there's no one to stop them. they're also using generated images as marketing tools, to trick idiots on facebook, to make it so hard to be media literate that we have to question every single thing we see because now we don't know what's real and what's not.
the problem with ai is that it's doing more harm than good. and we as a society aren't doing our due diligence to understand the unintended consequences of it all. we aren't angry enough. we're too scared of stifling innovation that we're letting it regulate itself (aka letting companies decide), which has never been a good idea. we see it do one cool thing, and somehow that makes up for all the rest of the bullshit?
1K notes · View notes
mariacallous · 1 month ago
Text
On a blustery spring Thursday, just after midterms, I went out for noodles with Alex and Eugene, two undergraduates at New York University, to talk about how they use artificial intelligence in their schoolwork. When I first met Alex, last year, he was interested in a career in the arts, and he devoted a lot of his free time to photo shoots with his friends. But he had recently decided on a more practical path: he wanted to become a C.P.A. His Thursdays were busy, and he had forty-five minutes until a study session for an accounting class. He stowed his skateboard under a bench in the restaurant and shook his laptop out of his bag, connecting to the internet before we sat down.
Alex has wavy hair and speaks with the chill, singsong cadence of someone who has spent a lot of time in the Bay Area. He and Eugene scanned the menu, and Alex said that they should get clear broth, rather than spicy, “so we can both lock in our skin care.” Weeks earlier, when I’d messaged Alex, he had said that everyone he knew used ChatGPT in some fashion, but that he used it only for organizing his notes. In person, he admitted that this wasn’t remotely accurate. “Any type of writing in life, I use A.I.,” he said. He relied on Claude for research, DeepSeek for reasoning and explanation, and Gemini for image generation. ChatGPT served more general needs. “I need A.I. to text girls,” he joked, imagining an A.I.-enhanced version of Hinge. I asked if he had used A.I. when setting up our meeting. He laughed, and then replied, “Honestly, yeah. I’m not tryin’ to type all that. Could you tell?”
OpenAI released ChatGPT on November 30, 2022. Six days later, Sam Altman, the C.E.O., announced that it had reached a million users. Large language models like ChatGPT don’t “think” in the human sense—when you ask ChatGPT a question, it draws from the data sets it has been trained on and builds an answer based on predictable word patterns. Companies had experimented with A.I.-driven chatbots for years, but most sputtered upon release; Microsoft’s 2016 experiment with a bot named Tay was shut down after sixteen hours because it began spouting racist rhetoric and denying the Holocaust. But ChatGPT seemed different. It could hold a conversation and break complex ideas down into easy-to-follow steps. Within a month, Google’s management, fearful that A.I. would have an impact on its search-engine business, declared a “code red.”
Among educators, an even greater panic arose. It was too deep into the school term to implement a coherent policy for what seemed like a homework killer: in seconds, ChatGPT could collect and summarize research and draft a full essay. Many large campuses tried to regulate ChatGPT and its eventual competitors, mostly in vain. I asked Alex to show me an example of an A.I.-produced paper. Eugene wanted to see it, too. He used a different A.I. app to help with computations for his business classes, but he had never gotten the hang of using it for writing. “I got you,” Alex told him. (All the students I spoke with are identified by pseudonyms.)
He opened Claude on his laptop. I noticed a chat that mentioned abolition. “We had to read Robert Wedderburn for a class,” he explained, referring to the nineteenth-century Jamaican abolitionist. “But, obviously, I wasn’t tryin’ to read that.” He had prompted Claude for a summary, but it was too long for him to read in the ten minutes he had before class started. He told me, “I said, ‘Turn it into concise bullet points.’ ” He then transcribed Claude’s points in his notebook, since his professor ran a screen-free classroom.
Alex searched until he found a paper for an art-history class, about a museum exhibition. He had gone to the show, taken photographs of the images and the accompanying wall text, and then uploaded them to Claude, asking it to generate a paper according to the professor’s instructions. “I’m trying to do the least work possible, because this is a class I’m not hella fucking with,” he said. After skimming the essay, he felt that the A.I. hadn’t sufficiently addressed the professor’s questions, so he refined the prompt and told it to try again. In the end, Alex’s submission received the equivalent of an A-minus. He said that he had a basic grasp of the paper’s argument, but that if the professor had asked him for specifics he’d have been “so fucked.” I read the paper over Alex’s shoulder; it was a solid imitation of how an undergraduate might describe a set of images. If this had been 2007, I wouldn’t have made much of its generic tone, or of the precise, box-ticking quality of its critical observations.
Eugene, serious and somewhat solemn, had been listening with bemusement. “I would not cut and paste like he did, because I’m a lot more paranoid,” he said. He’s a couple of years younger than Alex and was in high school when ChatGPT was released. At the time, he experimented with A.I. for essays but noticed that it made easily noticed errors. “This passed the A.I. detector?” he asked Alex.
When ChatGPT launched, instructors adopted various measures to insure that students’ work was their own. These included requiring them to share time-stamped version histories of their Google documents, and designing written assignments that had to be completed in person, over multiple sessions. But most detective work occurs after submission. Services like GPTZero, Copyleaks, and Originality.ai analyze the structure and syntax of a piece of writing and assess the likelihood that it was produced by a machine. Alex said that his art-history professor was “hella old,” and therefore probably didn’t know about such programs. We fed the paper into a few different A.I.-detection websites. One said there was a twenty-eight-per-cent chance that the paper was A.I.-generated; another put the odds at sixty-one per cent. “That’s better than I expected,” Eugene said.
I asked if he thought what his friend had done was cheating, and Alex interrupted: “Of course. Are you fucking kidding me?”
As we looked at Alex’s laptop, I noticed that he had recently asked ChatGPT whether it was O.K. to go running in Nike Dunks. He had concluded that ChatGPT made for the best confidant. He consulted it as one might a therapist, asking for tips on dating and on how to stay motivated during dark times. His ChatGPT sidebar was an index of the highs and lows of being a young person. He admitted to me and Eugene that he’d used ChatGPT to draft his application to N.Y.U.—our lunch might never have happened had it not been for A.I. “I guess it’s really dishonest, but, fuck it, I’m here,” he said.
“It’s cheating, but I don’t think it’s, like, cheating,” Eugene said. He saw Alex’s art-history essay as a victimless crime. He was just fulfilling requirements, not training to become a literary scholar.
Alex had to rush off to his study session. I told Eugene that our conversation had made me wonder about my function as a professor. He asked if I taught English, and I nodded.
“Mm, O.K.,” he said, and laughed. “So you’re, like, majorly affected.”
I teach at a small liberal-arts college, and I often joke that a student is more likely to hand in a big paper a year late (as recently happened) than to take a dishonorable shortcut. My classes are small and intimate, driven by processes and pedagogical modes, like letting awkward silences linger, that are difficult to scale. As a result, I have always had a vague sense that my students are learning something, even when it is hard to quantify. In the past, if I was worried that a paper had been plagiarized, I would enter a few phrases from it into a search engine and call it due diligence. But I recently began noticing that some students’ writing seemed out of synch with how they expressed themselves in the classroom. One essay felt stitched together from two minds—half of it was polished and rote, the other intimate and unfiltered. Having never articulated a policy for A.I., I took the easy way out. The student had had enough shame to write half of the essay, and I focussed my feedback on improving that part.
It’s easy to get hung up on stories of academic dishonesty. Late last year, in a survey of college and university leaders, fifty-nine per cent reported an increase in cheating, a figure that feels conservative when you talk to students. A.I. has returned us to the question of what the point of higher education is. Until we’re eighteen, we go to school because we have to, studying the Second World War and reducing fractions while undergoing a process of socialization. We’re essentially learning how to follow rules. College, however, is a choice, and it has always involved the tacit agreement that students will fulfill a set of tasks, sometimes pertaining to subjects they find pointless or impractical, and then receive some kind of credential. But even for the most mercenary of students, the pursuit of a grade or a diploma has come with an ancillary benefit. You’re being taught how to do something difficult, and maybe, along the way, you come to appreciate the process of learning. But the arrival of A.I. means that you can now bypass the process, and the difficulty, altogether.
There are no reliable figures for how many American students use A.I., just stories about how everyone is doing it. A 2024 Pew Research Center survey of students between the ages of thirteen and seventeen suggests that a quarter of teens currently use ChatGPT for schoolwork, double the figure from 2023. OpenAI recently released a report claiming that one in three college students uses its products. There’s good reason to believe that these are low estimates. If you grew up Googling everything or using Grammarly to give your prose a professional gloss, it isn’t far-fetched to regard A.I. as just another productivity tool. “I see it as no different from Google,” Eugene said. “I use it for the same kind of purpose.”
Being a student is about testing boundaries and staying one step ahead of the rules. While administrators and educators have been debating new definitions for cheating and discussing the mechanics of surveillance, students have been embracing the possibilities of A.I. A few months after the release of ChatGPT, a Harvard undergraduate got approval to conduct an experiment in which it wrote papers that had been assigned in seven courses. The A.I. skated by with a 3.57 G.P.A., a little below the school’s average. Upstart companies introduced products that specialized in “humanizing” A.I.-generated writing, and TikTok influencers began coaching their audiences on how to avoid detection.
Unable to keep pace, academic administrations largely stopped trying to control students’ use of artificial intelligence and adopted an attitude of hopeful resignation, encouraging teachers to explore the practical, pedagogical applications of A.I. In certain fields, this wasn’t a huge stretch. Studies show that A.I. is particularly effective in helping non-native speakers acclimate to college-level writing in English. In some STEM classes, using generative A.I. as a tool is acceptable. Alex and Eugene told me that their accounting professor encouraged them to take advantage of free offers on new A.I. products available only to undergraduates, as companies competed for student loyalty throughout the spring. In May, OpenAI announced ChatGPT Edu, a product specifically marketed for educational use, after schools including Oxford University, Arizona State University, and the University of Pennsylvania’s Wharton School of Business experimented with incorporating A.I. into their curricula. This month, the company detailed plans to integrate ChatGPT into every dimension of campus life, with students receiving “personalized” A.I. accounts to accompany them throughout their years in college.
But for English departments, and for college writing in general, the arrival of A.I. has been more vexed. Why bother teaching writing now? The future of the midterm essay may be a quaint worry compared with larger questions about the ramifications of artificial intelligence, such as its effect on the environment, or the automation of jobs. And yet has there ever been a time in human history when writing was so important to the average person? E-mails, texts, social-media posts, angry missives in comments sections, customer-service chats—let alone one’s actual work. The way we write shapes our thinking. We process the world through the composition of text dozens of times a day, in what the literary scholar Deborah Brandt calls our era of “mass writing.” It’s possible that the ability to write original and interesting sentences will become only more important in a future where everyone has access to the same A.I. assistants.
Corey Robin, a writer and a professor of political science at Brooklyn College, read the early stories about ChatGPT with skepticism. Then his daughter, a sophomore in high school at the time, used it to produce an essay that was about as good as those his undergraduates wrote after a semester of work. He decided to stop assigning take-home essays. For the first time in his thirty years of teaching, he administered in-class exams.
Robin told me he finds many of the steps that universities have taken to combat A.I. essays to be “hand-holding that’s not leading people anywhere.” He has become a believer in the passage-identification blue-book exam, in which students name and contextualize excerpts of what they’ve read for class. “Know the text and write about it intelligently,” he said. “That was a way of honoring their autonomy without being a cop.”
His daughter, who is now a senior, complains that her teachers rarely assign full books. And Robin has noticed that college students are more comfortable with excerpts than with entire articles, and prefer short stories to novels. “I don’t get the sense they have the kind of literary or cultural mastery that used to be the assumption upon which we assigned papers,” he said. One study, published last year, found that fifty-eight per cent of students at two Midwestern universities had so much trouble interpreting the opening paragraphs of “Bleak House,” by Charles Dickens, that “they would not be able to read the novel on their own.” And these were English majors.
The return to pen and paper has been a common response to A.I. among professors, with sales of blue books rising significantly at certain universities in the past two years. Siva Vaidhyanathan, a professor of media studies at the University of Virginia, grew dispirited after some students submitted what he suspected was A.I.-generated work for an assignment on how the school’s honor code should view A.I.-generated work. He, too, has decided to return to blue books, and is pondering the logistics of oral exams. “Maybe we go all the way back to 450 B.C.,” he told me.
But other professors have renewed their emphasis on getting students to see the value of process. Dan Melzer, the director of the first-year composition program at the University of California, Davis, recalled that “everyone was in a panic” when ChatGPT first hit. Melzer’s job is to think about how writing functions across the curriculum so that all students, from prospective scientists to future lawyers, get a chance to hone their prose. Consequently, he has an accommodating view of how norms around communication have changed, especially in the internet age. He was sympathetic to kids who viewed some of their assignments as dull and mechanical and turned to ChatGPT to expedite the process. He called the five-paragraph essay—the classic “hamburger” structure, consisting of an introduction, three supporting body paragraphs, and a conclusion—“outdated,” having descended from élitist traditions.
Melzer believes that some students loathe writing because of how it’s been taught, particularly in the past twenty-five years. The No Child Left Behind Act, from 2002, instituted standards-based reforms across all public schools, resulting in generations of students being taught to write according to rigid testing rubrics. As one teacher wrote in the Washington Post in 2013, students excelled when they mastered a form of “bad writing.” Melzer has designed workshops that treat writing as a deliberative, iterative process involving drafting, feedback (from peers and also from ChatGPT), and revision.
“If you assign a generic essay topic and don’t engage in any process, and you just collect it a month later, it’s almost like you’re creating an environment tailored to crime,” he said. “You’re encouraging crime in your community!”
I found Melzer’s pedagogical approach inspiring; I instantly felt bad for routinely breaking my class into small groups so that they could “workshop” their essays, as though the meaning of this verb were intuitively clear. But, as a student, I’d have found Melzer’s focus on process tedious—it requires a measure of faith that all the work will pay off in the end. Writing is hard, regardless of whether it’s a five-paragraph essay or a haiku, and it’s natural, especially when you’re a college student, to want to avoid hard work—this is why classes like Melzer’s are compulsory. “You can imagine that students really want to be there,” he joked.
College is all about opportunity costs. One way of viewing A.I. is as an intervention in how people choose to spend their time. In the early nineteen-sixties, college students spent an estimated twenty-four hours a week on schoolwork. Today, that figure is about fifteen, a sign, to critics of contemporary higher education, that young people are beneficiaries of grade inflation—in a survey conducted by the Harvard Crimson, nearly eighty per cent of the class of 2024 reported a G.P.A. of 3.7 or higher—and lack the diligence of their forebears. I don’t know how many hours I spent on schoolwork in the late nineties, when I was in college, but I recall feeling that there was never enough time. I suspect that, even if today’s students spend less time studying, they don’t feel significantly less stressed. It’s the nature of campus life that everyone assimilates into a culture of busyness, and a lot of that anxiety has been shifted to extracurricular or pre-professional pursuits. A dean at Harvard remarked that students feel compelled to find distinction outside the classroom because they are largely indistinguishable within it.
Eddie, a sociology major at Long Beach State, is older than most of his classmates. He graduated high school in 2010, and worked full time while attending a community college. “I’ve gone through a lot to be at school,” he told me. “I want to learn as much as I can.” ChatGPT, which his therapist recommended to him, was ubiquitous at Long Beach even before the California State University system, which Long Beach is a part of, announced a partnership with OpenAI, giving its four hundred and sixty thousand students access to ChatGPT Edu. “I was a little suspicious of how convenient it was,” Eddie said. “It seemed to know a lot, in a way that seemed so human.”
He told me that he used A.I. “as a brainstorm” but never for writing itself. “I limit myself, for sure.” Eddie works for Los Angeles County, and he was talking to me during a break. He admitted that, when he was pressed for time, he would sometimes use ChatGPT for quizzes. “I don’t know if I’m telling myself a lie,” he said. “I’ve given myself opportunities to do things ethically, but if I’m rushing to work I don’t feel bad about that,” particularly for courses outside his major.
I recognized Eddie’s conflict. I’ve used ChatGPT a handful of times, and on one occasion it accomplished a scheduling task so quickly that I began to understand the intoxication of hyper-efficiency. I’ve felt the need to stop myself from indulging in idle queries. Almost all the students I interviewed in the past few months described the same trajectory: from using A.I. to assist with organizing their thoughts to off-loading their thinking altogether. For some, it became something akin to social media, constantly open in the corner of the screen, a portal for distraction. This wasn’t like paying someone to write a paper for you—there was no social friction, no aura of illicit activity. Nor did it feel like sharing notes, or like passing off what you’d read in CliffsNotes or SparkNotes as your own analysis. There was no real time to reflect on questions of originality or honesty—the student basically became a project manager. And for students who use it the way Eddie did, as a kind of sounding board, there’s no clear threshold where the work ceases to be an original piece of thinking. In April, Anthropic, the company behind Claude, released a report drawn from a million anonymized student conversations with its chatbots. It suggested that more than half of user interactions could be classified as “collaborative,” involving a dialogue between student and A.I. (Presumably, the rest of the interactions were more extractive.)
May, a sophomore at Georgetown, was initially resistant to using ChatGPT. “I don’t know if it was an ethics thing,” she said. “I just thought I could do the assignment better, and it wasn’t worth the time being saved.” But she began using it to proofread her essays, and then to generate cover letters, and now she uses it for “pretty much all” her classes. “I don’t think it’s made me a worse writer,” she said. “It’s perhaps made me a less patient writer. I used to spend hours writing essays, nitpicking over my wording, really thinking about how to phrase things.” College had made her reflect on her experience at an extremely competitive high school, where she had received top grades but retained very little knowledge. As a result, she was the rare student who found college somewhat relaxed. ChatGPT helped her breeze through busywork and deepen her engagement with the courses she felt passionate about. “I was trying to think, Where’s all this time going?” she said. I had never envied a college student until she told me the answer: “I sleep more now.”
Harry Stecopoulos oversees the University of Iowa’s English department, which has more than eight hundred majors. On the first day of his introductory course, he asks students to write by hand a two-hundred-word analysis of the opening paragraph of Ralph Ellison’s “Invisible Man.” There are always a few grumbles, and students have occasionally walked out. “I like the exercise as a tone-setter, because it stresses their writing,” he told me.
The return of blue-book exams might disadvantage students who were encouraged to master typing at a young age. Once you’ve grown accustomed to the smooth rhythms of typing, reverting to a pen and paper can feel stifling. But neuroscientists have found that the “embodied experience” of writing by hand taps into parts of the brain that typing does not. Being able to write one way—even if it’s more efficient—doesn’t make the other way obsolete. There’s something lofty about Stecopoulos’s opening-day exercise. But there’s another reason for it: the handwritten paragraph also begins a paper trail, attesting to voice and style, that a teaching assistant can consult if a suspicious paper is submitted.
Kevin, a third-year student at Syracuse University, recalled that, on the first day of a class, the professor had asked everyone to compose some thoughts by hand. “That brought a smile to my face,” Kevin said. “The other kids are scratching their necks and sweating, and I’m, like, This is kind of nice.”
Kevin had worked as a teaching assistant for a mandatory course that first-year students take to acclimate to campus life. Writing assignments involved basic questions about students’ backgrounds, he told me, but they often used A.I. anyway. “I was very disturbed,” he said. He occasionally uses A.I. to help with translations for his advanced Arabic course, but he’s come to look down on those who rely heavily on it. “They almost forget that they have the ability to think,” he said. Like many former holdouts, Kevin felt that his judicious use of A.I. was more defensible than his peers’ use of it.
As ChatGPT begins to sound more human, will we reconsider what it means to sound like ourselves? Kevin and some of his friends pride themselves on having an ear attuned to A.I.-generated text. The hallmarks, he said, include a preponderance of em dashes and a voice that feels blandly objective. An acquaintance had run an essay that she had written herself through a detector, because she worried that she was starting to phrase things like ChatGPT did. He read her essay: “I realized, like, It does kind of sound like ChatGPT. It was freaking me out a little bit.”
A particularly disarming aspect of ChatGPT is that, if you point out a mistake, it communicates in the backpedalling tone of a contrite student. (“Apologies for the earlier confusion. . . .”) Its mistakes are often referred to as hallucinations, a description that seems to anthropomorphize A.I., conjuring a vision of a sleep-deprived assistant. Some professors told me that they had students fact-check ChatGPT’s work, as a way of discussing the importance of original research and of showing the machine’s fallibility. Hallucination rates have grown worse for most A.I.s, with no single reason for the increase. As a researcher told the Times, “We still don’t know how these models work exactly.”
But many students claim to be unbothered by A.I.’s mistakes. They appear nonchalant about the question of achievement, and even dissociated from their work, since it is only notionally theirs. Joseph, a Division I athlete at a Big Ten school, told me that he saw no issue with using ChatGPT for his classes, but he did make one exception: he wanted to experience his African-literature course “authentically,” because it involved his heritage. Alex, the N.Y.U. student, said that if one of his A.I. papers received a subpar grade his disappointment would be focussed on the fact that he’d spent twenty dollars on his subscription. August, a sophomore at Columbia studying computer science, told me about a class where she was required to compose a short lecture on a topic of her choosing. “It was a class where everyone was guaranteed an A, so I just put it in and I maybe edited like two words and submitted it,” she said. Her professor identified her essay as exemplary work, and she was asked to read from it to a class of two hundred students. “I was a little nervous,” she said. But then she realized, “If they don’t like it, it wasn’t me who wrote it, you know?”
Kevin, by contrast, desired a more general kind of moral distinction. I asked if he would be bothered to receive a lower grade on an essay than a classmate who’d used ChatGPT. “Part of me is able to compartmentalize and not be pissed about it,” he said. “I developed myself as a human. I can have a superiority complex about it. I learned more.” He smiled. But then he continued, “Part of me can also be, like, This is so unfair. I would have loved to hang out with my friends more. What did I gain? I made my life harder for all that time.”
In my conversations, just as college students invariably thought of ChatGPT as merely another tool, people older than forty focussed on its effects, drawing a comparison to G.P.S. and the erosion of our relationship to space. The London cabdrivers rigorously trained in “the knowledge” famously developed abnormally large posterior hippocampi, the part of the brain crucial for long-term memory and spatial awareness. And yet, in the end, most people would probably rather have swifter travel than sharper memories. What is worth preserving, and what do we feel comfortable off-loading in the name of efficiency?
What if we take seriously the idea that A.I. assistance can accelerate learning—that students today are arriving at their destinations faster? In 2023, researchers at Harvard introduced a self-paced A.I. tutor in a popular physics course. Students who used the A.I. tutor reported higher levels of engagement and motivation and did better on a test than those who were learning from a professor. May, the Georgetown student, told me that she often has ChatGPT produce extra practice questions when she’s studying for a test. Could A.I. be here not to destroy education but to revolutionize it? Barry Lam teaches in the philosophy department at the University of California, Riverside, and hosts a popular podcast, Hi-Phi Nation, which applies philosophical modes of inquiry to everyday topics. He began wondering what it would mean for A.I. to actually be a productivity tool. He spoke to me from the podcast studio he built in his shed. “Now students are able to generate in thirty seconds what used to take me a week,” he said. He compared education to carpentry, one of his many hobbies. Could you skip to using power tools without learning how to saw by hand? If students were learning things faster, then it stood to reason that Lam could assign them “something very hard.” He wanted to test this theory, so for final exams he gave his undergraduates a Ph.D.-level question involving denotative language and the German logician Gottlob Frege which was, frankly, beyond me.
“They fucking failed it miserably,” he said. He adjusted his grading curve accordingly.
Lam doesn’t find the use of A.I. morally indefensible. “It’s not plagiarism in the cut-and-paste sense,” he argued, because there’s technically no original version. Rather, he finds it a potential waste of everyone’s time. At the start of the semester, he has told students, “If you’re gonna just turn in a paper that’s ChatGPT-generated, then I will grade all your work by ChatGPT and we can all go to the beach.”
Nobody gets into teaching because he loves grading papers. I talked to one professor who rhapsodized about how much more his students were learning now that he’d replaced essays with short exams. I asked if he missed marking up essays. He laughed and said, “No comment.” An undergraduate at Northeastern University recently accused a professor of using A.I. to create course materials; she filed a formal complaint with the school, requesting a refund for some of her tuition. The dustup laid bare the tension between why many people go to college and why professors teach. Students are raised to understand achievement as something discrete and measurable, but when they arrive at college there are people like me, imploring them to wrestle with difficulty and abstraction. Worse yet, they are told that grades don’t matter as much as they did when they were trying to get into college—only, by this point, students are wired to find the most efficient path possible to good marks.
As the craft of writing is degraded by A.I., original writing has become a valuable resource for training language models. Earlier this year, a company called Catalyst Research Alliance advertised “academic speech data and student papers” from two research studies run in the late nineties and mid-two-thousands at the University of Michigan. The school asked the company to halt its work—the data was available for free to academics anyway—and a university spokesperson said that student data “was not and has never been for sale.” But the situation did lead many people to wonder whether institutions would begin viewing original student work as a potential revenue stream.
According to a recent study from the Organisation for Economic Co-operation and Development, human intellect has declined since 2012. An assessment of tens of thousands of adults in nearly thirty countries showed an over-all decade-long drop in test scores for math and for reading comprehension. Andreas Schleicher, the director for education and skills at the O.E.C.D., hypothesized that the way we consume information today—often through short social-media posts—has something to do with the decline in literacy. (One of Europe’s top performers in the assessment was Estonia, which recently announced that it will bring A.I. to some high-school students in the next few years, sidelining written essays and rote homework exercises in favor of self-directed learning and oral exams.)
Lam, the philosophy professor, used to be a colleague of mine, and for a brief time we were also neighbors. I’d occasionally look out the window and see him building a fence, or gardening. He’s an avid amateur cook, guitarist, and carpenter, and he remains convinced that there is value to learning how to do things the annoying, old-fashioned, and—as he puts it—“artisanal” way. He told me that his wife, Shanna Andrawis, who has been a high-school teacher since 2008, frequently disagreed with his cavalier methods for dealing with large learning models. Andrawis argues that dishonesty has always been an issue. “We are trying to mass educate,” she said, meaning there’s less room to be precious about the pedagogical process. “I don’t have conversations with students about ‘artisanal’ writing. But I have conversations with them about our relationship. Respect me enough to give me your authentic voice, even if you don’t think it’s that great. It’s O.K. I want to meet you where you’re at.”
Ultimately, Andrawis was less fearful of ChatGPT than of the broader conditions of being young these days. Her students have grown increasingly introverted, staring at their phones with little desire to “practice getting over that awkwardness” that defines teen life, as she put it. A.I. might contribute to this deterioration, but it isn’t solely to blame. It’s “a little cherry on top of an already really bad ice-cream sundae,” she said.
When the school year began, my feelings about ChatGPT were somewhere between disappointment and disdain, focussed mainly on students. But, as the weeks went by, my sense of what should be done and who was at fault grew hazier. Eliminating core requirements, rethinking G.P.A., teaching A.I. skepticism—none of the potential fixes could turn back the preconditions of American youth. Professors can reconceive of the classroom, but there is only so much we control. I lacked faith that educational institutions would ever regard new technologies as anything but inevitable. Colleges and universities, many of which had tried to curb A.I. use just a few semesters ago, rushed to partner with companies like OpenAI and Anthropic, deeming a product that didn’t exist four years ago essential to the future of school.
Except for a year spent bumming around my home town, I’ve basically been on a campus for the past thirty years. Students these days view college as consumers, in ways that never would have occurred to me when I was their age. They’ve grown up at a time when society values high-speed takes, not the slow deliberation of critical thinking. Although I’ve empathized with my students’ various mini-dramas, I rarely project myself into their lives. I notice them noticing one another, and I let the mysteries of their lives go. Their pressures are so different from the ones I felt as a student. Although I envy their metabolisms, I would not wish for their sense of horizons.
Education, particularly in the humanities, rests on a belief that, alongside the practical things students might retain, some arcane idea mentioned in passing might take root in their mind, blossoming years in the future. A.I. allows any of us to feel like an expert, but it is risk, doubt, and failure that make us human. I often tell my students that this is the last time in their lives that someone will have to read something they write, so they might as well tell me what they actually think.
Despite all the current hysteria around students cheating, they aren’t the ones to blame. They did not lobby for the introduction of laptops when they were in elementary school, and it’s not their fault that they had to go to school on Zoom during the pandemic. They didn’t create the A.I. tools, nor were they at the forefront of hyping technological innovation. They were just early adopters, trying to outwit the system at a time when doing so has never been so easy. And they have no more control than the rest of us. Perhaps they sense this powerlessness even more acutely than I do. One moment, they are being told to learn to code; the next, it turns out employers are looking for the kind of “soft skills” one might learn as an English or a philosophy major. In February, a labor report from the Federal Reserve Bank of New York reported that computer-science majors had a higher unemployment rate than ethnic-studies majors did—the result, some believed, of A.I. automating entry-level coding jobs.
None of the students I spoke with seemed lazy or passive. Alex and Eugene, the N.Y.U. students, worked hard—but part of their effort went to editing out anything in their college experiences that felt extraneous. They were radically resourceful.
When classes were over and students were moving into their summer housing, I e-mailed with Alex, who was settling in in the East Village. He’d just finished his finals, and estimated that he’d spent between thirty minutes and an hour composing two papers for his humanities classes. Without the assistance of Claude, it might have taken him around eight or nine hours. “I didn’t retain anything,” he wrote. “I couldn’t tell you the thesis for either paper hahhahaha.” He received an A-minus and a B-plus. 
374 notes · View notes
copperbadge · 1 year ago
Text
I have a lot of feelings about the use of AI in Everything These Days, but they're not particularly strong feelings, like I've got other shit going on. That said, when I use a desktop computer, every single file I use in Google Drive now has a constant irritating popup on the right-hand side asking me how Gemini AI Can Help Me. You can't, Gemini. You are in the way. I'm not even mad there's an AI there, I'm mad there's a constantly recurring popup taking up space and attention on my screen.
Here's the problem, however: even Gemini doesn't know how to disable Gemini. I did my own research and then finally, with a deep appreciation of the irony of this, I asked it how to turn it off. It said in any google drive file go to Help > Gemini and there will be an option to turn it off. Guess what isn't a menu item under Help?
I've had a look around at web tutorials for removing or blocking it, but they are either out of date or for the Gemini personal assistant, which I already don't have, and thus cannot turn off. Gemini for Drive is an integrated "service" within Google Drive, which I guess means I'm going to have to look into moving off Google Drive.
So, does anyone have references for a service as seamless and accessible as Google Drive? I need document, spreadsheet, slideshow, and storage, but I don't have any fancy widgets installed or anything. I do technically own Microsoft Office so I suppose I could use that but I've never found its cloud function to actually, uh, function. I could use OneNote for documents if things get desperate but OneNote is very limited overall. I want to be able to open and edit files, including on an Android phone, and I'd prefer if I didn't have to receive a security code in my text messages every time I log in. I also will likely need to be able to give non-users access, but I suppose I could kludge that in Drive as long as I only have to deal with it short-term.
Any thoughts, friends? If I find a good functional replacement I'm happy to post about it once I've tested it.
Also, saying this because I love you guys but if I don't spell it out I will get a bunch of comments about it: If you yourself have managed to banish Gemini from your Drive account including from popping up in individual files, I'm interested! Please share. If you have not actually implemented a solution yourself, rest assured, anything you find I have already tried and it does not work.
1K notes · View notes
ralfmaximus · 18 days ago
Text
“In the fall, I took an introductory coding class. We were learning Python, and we were running and testing lines on Google’s Gemini Code Assist,” she said. “It would ask me a question, I would fill in a blank, and then it would autofill the rest of the line. I wasn’t learning anything.” When she tried opting out of the autofill so she could actually get some hands-on practice, Rosenstock found that the option to turn it off was buried deep in Gemini’s settings—an annoying but typical tendency when it comes to Google’s A.I. software. “And our grad instructor was very lax about it. You could use it for answering a whole question,” Rosenstock added. “They didn’t tell us not to use it too much, which was very strange to me.”
Many schools actively encourage the use of AI to complete lessons, which is upsetting for some students. Instructors are using it too, leading to situations where the student might receive incorrect, partially hallucinated feedback.
Imagine going into debt for an education and coming away from it with nothing. No skills, except how to give prompts to ChatGPT.
It's good to see students push back! More of that please.
258 notes · View notes