#essay on artificial intelligence in english
Essay on Artificial Intelligence in English | Write a Paragraph on Artificial Intelligence Paragraph
This handwriting video presents an essay on Artificial Intelligence in English, written as a paragraph on Artificial Intelligence (AI). The paragraph is aimed at Class 9 and HSC students, with versions of roughly 150 and 200 words, plus a few short lines on AI to give you an idea of the topic. Paragraph writing on Artificial Intelligence is easy to practice once you have the main points.
10 Lines on Artificial Intelligence (AI):
1. Artificial Intelligence (AI) means making machines smart like humans.
2. AI helps machines to think, learn, and do tasks on behalf of humans.
3. AI is used in everyday things like phones, computers, and apps.
4. Humans benefit from using AI.
5. AI helps doctors to find diseases and give better treatments.
6. It is used in self-driving cars to help them move safely.
7. AI can suggest movies, songs, or products we might like.
8. It can understand speech and recognize faces in photos.
9. Some people worry that AI might take away jobs.
10. If used wisely, AI can help solve many problems in the world.
#artificialintelligence #artificialintelligencetechnology #handwriting #handwritingtips #paragraph #paragraph_writing #paragraph_suggestions #paragraph_short #paragraphformat #paragraphwriting #handwritingpractice #paragraphwritingformat #paragraphwritinginenglish
"I asked ChatGPT-" Why not just Google it? And not read the Gemini AI summary at the top, but just... actually Google it. Just, like... learn the information that you want to know, instead of having the robot put it all in a neat little wrapper for you like you're a helpless child.
Like seriously, every time someone tells me they ChatGPT-ed something, it just makes me think of how we have all the information we could ever want at our fingertips to read and absorb and think about at all times, but they have to have the robot chew it up for them and vomit it out. Sometimes it isn't even right. What if you just Googled whatever you need to know, clicked on a link to read an article or something, and maybe learned even more than you bargained for! But no, you want the AI to waste a gallon of water computing whatever you said and then regurgitating whatever you would find through a simple search anyway.
#seriously#i wrote an essay on ai and students using it for my final paper in english#and like the reasons that people use it...#it seems like you could just use a basic search engine for half of it.#“i need one-on-one learning time” okay#khan academy#go to your actual teachers who will actually teach you if asked#“I wanna fact-check this”#that's literally what searching for things is for.#“i need to write a summary”#okay... a summary is literally SMALLER than what you just read. as long as you READ IT then you can write a summary in half the time.#gets me heated#ai#artificial intelligence#chatgpt#llm#anti genai#gen ai hate#generative ai#ai that helps us find new cures for diseases or new ways to predict them is great#that stuff needs to keep going#just a btw because yk...#nuance
Ew ew ew I went to Reddit (yeah yeah I know, shh) looking for writing and story planning app/program recommendations and people are recommending A.I. shit why why why
#NO#i do not WANT to give skynet my story#im so sick of this so-called 'A.I.' stuff someone ask me about my beef with A.I. and ill give you a fuckin ESSAY#true 'artificial intelligence' DOES NOT EXIST yet#these are just REALLY good fucking algorithms but theyre FALLIBLE Because theyre made and trained by PEOPLE#grrrr#theres a REASON i bailed out of my computer programming college course and didnt pursue that career path man#the moth that learned english#ai#ai ethics
On a blustery spring Thursday, just after midterms, I went out for noodles with Alex and Eugene, two undergraduates at New York University, to talk about how they use artificial intelligence in their schoolwork. When I first met Alex, last year, he was interested in a career in the arts, and he devoted a lot of his free time to photo shoots with his friends. But he had recently decided on a more practical path: he wanted to become a C.P.A. His Thursdays were busy, and he had forty-five minutes until a study session for an accounting class. He stowed his skateboard under a bench in the restaurant and shook his laptop out of his bag, connecting to the internet before we sat down.
Alex has wavy hair and speaks with the chill, singsong cadence of someone who has spent a lot of time in the Bay Area. He and Eugene scanned the menu, and Alex said that they should get clear broth, rather than spicy, “so we can both lock in our skin care.” Weeks earlier, when I’d messaged Alex, he had said that everyone he knew used ChatGPT in some fashion, but that he used it only for organizing his notes. In person, he admitted that this wasn’t remotely accurate. “Any type of writing in life, I use A.I.,” he said. He relied on Claude for research, DeepSeek for reasoning and explanation, and Gemini for image generation. ChatGPT served more general needs. “I need A.I. to text girls,” he joked, imagining an A.I.-enhanced version of Hinge. I asked if he had used A.I. when setting up our meeting. He laughed, and then replied, “Honestly, yeah. I’m not tryin’ to type all that. Could you tell?”
OpenAI released ChatGPT on November 30, 2022. Six days later, Sam Altman, the C.E.O., announced that it had reached a million users. Large language models like ChatGPT don’t “think” in the human sense—when you ask ChatGPT a question, it draws from the data sets it has been trained on and builds an answer based on predictable word patterns. Companies had experimented with A.I.-driven chatbots for years, but most sputtered upon release; Microsoft’s 2016 experiment with a bot named Tay was shut down after sixteen hours because it began spouting racist rhetoric and denying the Holocaust. But ChatGPT seemed different. It could hold a conversation and break complex ideas down into easy-to-follow steps. Within a month, Google’s management, fearful that A.I. would have an impact on its search-engine business, declared a “code red.”
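The next-word-prediction loop described above can be illustrated with a toy sketch. This is not how any real large language model is implemented (those use neural networks trained on enormous data sets); it is a hypothetical bigram model over an invented ten-word corpus, meant only to show the core idea of building text from learned word patterns:

```python
import random

# Toy "language model": bigram counts learned from a tiny invented corpus.
# A real LLM scores candidate next tokens with a neural network, but the
# generation loop is the same shape: score, sample, append, repeat.
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(start, length=5, seed=0):
    """Extend `start` one word at a time using the bigram table."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        candidates = bigrams.get(out[-1])
        if not candidates:  # dead end: no observed continuation
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

Every word the sketch emits is one it has "seen" follow the previous word, which is also why such systems can produce fluent text without anything resembling understanding.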
Among educators, an even greater panic arose. It was too deep into the school term to implement a coherent policy for what seemed like a homework killer: in seconds, ChatGPT could collect and summarize research and draft a full essay. Many large campuses tried to regulate ChatGPT and its eventual competitors, mostly in vain. I asked Alex to show me an example of an A.I.-produced paper. Eugene wanted to see it, too. He used a different A.I. app to help with computations for his business classes, but he had never gotten the hang of using it for writing. “I got you,” Alex told him. (All the students I spoke with are identified by pseudonyms.)
He opened Claude on his laptop. I noticed a chat that mentioned abolition. “We had to read Robert Wedderburn for a class,” he explained, referring to the nineteenth-century Jamaican abolitionist. “But, obviously, I wasn’t tryin’ to read that.” He had prompted Claude for a summary, but it was too long for him to read in the ten minutes he had before class started. He told me, “I said, ‘Turn it into concise bullet points.’ ” He then transcribed Claude’s points in his notebook, since his professor ran a screen-free classroom.
Alex searched until he found a paper for an art-history class, about a museum exhibition. He had gone to the show, taken photographs of the images and the accompanying wall text, and then uploaded them to Claude, asking it to generate a paper according to the professor’s instructions. “I’m trying to do the least work possible, because this is a class I’m not hella fucking with,” he said. After skimming the essay, he felt that the A.I. hadn’t sufficiently addressed the professor’s questions, so he refined the prompt and told it to try again. In the end, Alex’s submission received the equivalent of an A-minus. He said that he had a basic grasp of the paper’s argument, but that if the professor had asked him for specifics he’d have been “so fucked.” I read the paper over Alex’s shoulder; it was a solid imitation of how an undergraduate might describe a set of images. If this had been 2007, I wouldn’t have made much of its generic tone, or of the precise, box-ticking quality of its critical observations.
Eugene, serious and somewhat solemn, had been listening with bemusement. “I would not cut and paste like he did, because I’m a lot more paranoid,” he said. He’s a couple of years younger than Alex and was in high school when ChatGPT was released. At the time, he experimented with A.I. for essays but noticed that it made errors that were easy to spot. “This passed the A.I. detector?” he asked Alex.
When ChatGPT launched, instructors adopted various measures to insure that students’ work was their own. These included requiring them to share time-stamped version histories of their Google documents, and designing written assignments that had to be completed in person, over multiple sessions. But most detective work occurs after submission. Services like GPTZero, Copyleaks, and Originality.ai analyze the structure and syntax of a piece of writing and assess the likelihood that it was produced by a machine. Alex said that his art-history professor was “hella old,” and therefore probably didn’t know about such programs. We fed the paper into a few different A.I.-detection websites. One said there was a twenty-eight-per-cent chance that the paper was A.I.-generated; another put the odds at sixty-one per cent. “That’s better than I expected,” Eugene said.
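Services like GPTZero don’t publish their methods, but one signal commonly cited in discussions of detection is “burstiness,” the degree to which sentence lengths vary; human prose tends to swing between long and short sentences more than machine-generated text does. A minimal illustrative sketch (the sample texts are invented, and a real detector combines many such features with a trained model):

```python
import re
import statistics

def burstiness(text):
    """Standard deviation of sentence lengths, in words.
    Higher variation is a weak hint of human authorship; this single
    statistic is nowhere near a reliable detector on its own."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

human = ("I ran. Then, exhausted beyond all reason, I collapsed onto the "
         "grass and stared at the sky. Quiet.")
bot = ("The museum was interesting. The paintings were very nice. "
       "The exhibit was well organized.")
print(burstiness(human) > burstiness(bot))
```

The weakness of any one statistic like this is also why the two websites in the anecdote above could disagree so widely, scoring the same paper at twenty-eight and sixty-one per cent.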
I asked if he thought what his friend had done was cheating, and Alex interrupted: “Of course. Are you fucking kidding me?”
As we looked at Alex’s laptop, I noticed that he had recently asked ChatGPT whether it was O.K. to go running in Nike Dunks. He had concluded that ChatGPT made for the best confidant. He consulted it as one might a therapist, asking for tips on dating and on how to stay motivated during dark times. His ChatGPT sidebar was an index of the highs and lows of being a young person. He admitted to me and Eugene that he’d used ChatGPT to draft his application to N.Y.U.—our lunch might never have happened had it not been for A.I. “I guess it’s really dishonest, but, fuck it, I’m here,” he said.
“It’s cheating, but I don’t think it’s, like, cheating,” Eugene said. He saw Alex’s art-history essay as a victimless crime. He was just fulfilling requirements, not training to become a literary scholar.
Alex had to rush off to his study session. I told Eugene that our conversation had made me wonder about my function as a professor. He asked if I taught English, and I nodded.
“Mm, O.K.,” he said, and laughed. “So you’re, like, majorly affected.”
I teach at a small liberal-arts college, and I often joke that a student is more likely to hand in a big paper a year late (as recently happened) than to take a dishonorable shortcut. My classes are small and intimate, driven by processes and pedagogical modes, like letting awkward silences linger, that are difficult to scale. As a result, I have always had a vague sense that my students are learning something, even when it is hard to quantify. In the past, if I was worried that a paper had been plagiarized, I would enter a few phrases from it into a search engine and call it due diligence. But I recently began noticing that some students’ writing seemed out of synch with how they expressed themselves in the classroom. One essay felt stitched together from two minds—half of it was polished and rote, the other intimate and unfiltered. Having never articulated a policy for A.I., I took the easy way out. The student had had enough shame to write half of the essay, and I focussed my feedback on improving that part.
It’s easy to get hung up on stories of academic dishonesty. Late last year, in a survey of college and university leaders, fifty-nine per cent reported an increase in cheating, a figure that feels conservative when you talk to students. A.I. has returned us to the question of what the point of higher education is. Until we’re eighteen, we go to school because we have to, studying the Second World War and reducing fractions while undergoing a process of socialization. We’re essentially learning how to follow rules. College, however, is a choice, and it has always involved the tacit agreement that students will fulfill a set of tasks, sometimes pertaining to subjects they find pointless or impractical, and then receive some kind of credential. But even for the most mercenary of students, the pursuit of a grade or a diploma has come with an ancillary benefit. You’re being taught how to do something difficult, and maybe, along the way, you come to appreciate the process of learning. But the arrival of A.I. means that you can now bypass the process, and the difficulty, altogether.
There are no reliable figures for how many American students use A.I., just stories about how everyone is doing it. A 2024 Pew Research Center survey of students between the ages of thirteen and seventeen suggests that a quarter of teens currently use ChatGPT for schoolwork, double the figure from 2023. OpenAI recently released a report claiming that one in three college students uses its products. There’s good reason to believe that these are low estimates. If you grew up Googling everything or using Grammarly to give your prose a professional gloss, it isn’t far-fetched to regard A.I. as just another productivity tool. “I see it as no different from Google,” Eugene said. “I use it for the same kind of purpose.”
Being a student is about testing boundaries and staying one step ahead of the rules. While administrators and educators have been debating new definitions for cheating and discussing the mechanics of surveillance, students have been embracing the possibilities of A.I. A few months after the release of ChatGPT, a Harvard undergraduate got approval to conduct an experiment in which it wrote papers that had been assigned in seven courses. The A.I. skated by with a 3.57 G.P.A., a little below the school’s average. Upstart companies introduced products that specialized in “humanizing” A.I.-generated writing, and TikTok influencers began coaching their audiences on how to avoid detection.
Unable to keep pace, academic administrations largely stopped trying to control students’ use of artificial intelligence and adopted an attitude of hopeful resignation, encouraging teachers to explore the practical, pedagogical applications of A.I. In certain fields, this wasn’t a huge stretch. Studies show that A.I. is particularly effective in helping non-native speakers acclimate to college-level writing in English. In some STEM classes, using generative A.I. as a tool is acceptable. Alex and Eugene told me that their accounting professor encouraged them to take advantage of free offers on new A.I. products available only to undergraduates, as companies competed for student loyalty throughout the spring. In May, OpenAI announced ChatGPT Edu, a product specifically marketed for educational use, after schools including Oxford University, Arizona State University, and the University of Pennsylvania’s Wharton School of Business experimented with incorporating A.I. into their curricula. This month, the company detailed plans to integrate ChatGPT into every dimension of campus life, with students receiving “personalized” A.I. accounts to accompany them throughout their years in college.
But for English departments, and for college writing in general, the arrival of A.I. has been more vexed. Why bother teaching writing now? The future of the midterm essay may be a quaint worry compared with larger questions about the ramifications of artificial intelligence, such as its effect on the environment, or the automation of jobs. And yet has there ever been a time in human history when writing was so important to the average person? E-mails, texts, social-media posts, angry missives in comments sections, customer-service chats—let alone one’s actual work. The way we write shapes our thinking. We process the world through the composition of text dozens of times a day, in what the literary scholar Deborah Brandt calls our era of “mass writing.” It’s possible that the ability to write original and interesting sentences will become only more important in a future where everyone has access to the same A.I. assistants.
Corey Robin, a writer and a professor of political science at Brooklyn College, read the early stories about ChatGPT with skepticism. Then his daughter, a sophomore in high school at the time, used it to produce an essay that was about as good as those his undergraduates wrote after a semester of work. He decided to stop assigning take-home essays. For the first time in his thirty years of teaching, he administered in-class exams.
Robin told me he finds many of the steps that universities have taken to combat A.I. essays to be “hand-holding that’s not leading people anywhere.” He has become a believer in the passage-identification blue-book exam, in which students name and contextualize excerpts of what they’ve read for class. “Know the text and write about it intelligently,” he said. “That was a way of honoring their autonomy without being a cop.”
His daughter, who is now a senior, complains that her teachers rarely assign full books. And Robin has noticed that college students are more comfortable with excerpts than with entire articles, and prefer short stories to novels. “I don’t get the sense they have the kind of literary or cultural mastery that used to be the assumption upon which we assigned papers,” he said. One study, published last year, found that fifty-eight per cent of students at two Midwestern universities had so much trouble interpreting the opening paragraphs of “Bleak House,” by Charles Dickens, that “they would not be able to read the novel on their own.” And these were English majors.
The return to pen and paper has been a common response to A.I. among professors, with sales of blue books rising significantly at certain universities in the past two years. Siva Vaidhyanathan, a professor of media studies at the University of Virginia, grew dispirited after some students submitted what he suspected was A.I.-generated work for an assignment on how the school’s honor code should view A.I.-generated work. He, too, has decided to return to blue books, and is pondering the logistics of oral exams. “Maybe we go all the way back to 450 B.C.,” he told me.
But other professors have renewed their emphasis on getting students to see the value of process. Dan Melzer, the director of the first-year composition program at the University of California, Davis, recalled that “everyone was in a panic” when ChatGPT first hit. Melzer’s job is to think about how writing functions across the curriculum so that all students, from prospective scientists to future lawyers, get a chance to hone their prose. Consequently, he has an accommodating view of how norms around communication have changed, especially in the internet age. He was sympathetic to kids who viewed some of their assignments as dull and mechanical and turned to ChatGPT to expedite the process. He called the five-paragraph essay—the classic “hamburger” structure, consisting of an introduction, three supporting body paragraphs, and a conclusion—“outdated,” having descended from élitist traditions.
Melzer believes that some students loathe writing because of how it’s been taught, particularly in the past twenty-five years. The No Child Left Behind Act, from 2002, instituted standards-based reforms across all public schools, resulting in generations of students being taught to write according to rigid testing rubrics. As one teacher wrote in the Washington Post in 2013, students excelled when they mastered a form of “bad writing.” Melzer has designed workshops that treat writing as a deliberative, iterative process involving drafting, feedback (from peers and also from ChatGPT), and revision.
“If you assign a generic essay topic and don’t engage in any process, and you just collect it a month later, it’s almost like you’re creating an environment tailored to crime,” he said. “You’re encouraging crime in your community!”
I found Melzer’s pedagogical approach inspiring; I instantly felt bad for routinely breaking my class into small groups so that they could “workshop” their essays, as though the meaning of this verb were intuitively clear. But, as a student, I’d have found Melzer’s focus on process tedious—it requires a measure of faith that all the work will pay off in the end. Writing is hard, regardless of whether it’s a five-paragraph essay or a haiku, and it’s natural, especially when you’re a college student, to want to avoid hard work—this is why classes like Melzer’s are compulsory. “You can imagine that students really want to be there,” he joked.
College is all about opportunity costs. One way of viewing A.I. is as an intervention in how people choose to spend their time. In the early nineteen-sixties, college students spent an estimated twenty-four hours a week on schoolwork. Today, that figure is about fifteen, a sign, to critics of contemporary higher education, that young people are beneficiaries of grade inflation—in a survey conducted by the Harvard Crimson, nearly eighty per cent of the class of 2024 reported a G.P.A. of 3.7 or higher—and lack the diligence of their forebears. I don’t know how many hours I spent on schoolwork in the late nineties, when I was in college, but I recall feeling that there was never enough time. I suspect that, even if today’s students spend less time studying, they don’t feel significantly less stressed. It’s the nature of campus life that everyone assimilates into a culture of busyness, and a lot of that anxiety has been shifted to extracurricular or pre-professional pursuits. A dean at Harvard remarked that students feel compelled to find distinction outside the classroom because they are largely indistinguishable within it.
Eddie, a sociology major at Long Beach State, is older than most of his classmates. He graduated high school in 2010, and worked full time while attending a community college. “I’ve gone through a lot to be at school,” he told me. “I want to learn as much as I can.” ChatGPT, which his therapist recommended to him, was ubiquitous at Long Beach even before the California State University system, which Long Beach is a part of, announced a partnership with OpenAI, giving its four hundred and sixty thousand students access to ChatGPT Edu. “I was a little suspicious of how convenient it was,” Eddie said. “It seemed to know a lot, in a way that seemed so human.”
He told me that he used A.I. “as a brainstorm” but never for writing itself. “I limit myself, for sure.” Eddie works for Los Angeles County, and he was talking to me during a break. He admitted that, when he was pressed for time, he would sometimes use ChatGPT for quizzes. “I don’t know if I’m telling myself a lie,” he said. “I’ve given myself opportunities to do things ethically, but if I’m rushing to work I don’t feel bad about that,” particularly for courses outside his major.
I recognized Eddie’s conflict. I’ve used ChatGPT a handful of times, and on one occasion it accomplished a scheduling task so quickly that I began to understand the intoxication of hyper-efficiency. I’ve felt the need to stop myself from indulging in idle queries. Almost all the students I interviewed in the past few months described the same trajectory: from using A.I. to assist with organizing their thoughts to off-loading their thinking altogether. For some, it became something akin to social media, constantly open in the corner of the screen, a portal for distraction. This wasn’t like paying someone to write a paper for you—there was no social friction, no aura of illicit activity. Nor did it feel like sharing notes, or like passing off what you’d read in CliffsNotes or SparkNotes as your own analysis. There was no real time to reflect on questions of originality or honesty—the student basically became a project manager. And for students who use it the way Eddie did, as a kind of sounding board, there’s no clear threshold where the work ceases to be an original piece of thinking. In April, Anthropic, the company behind Claude, released a report drawn from a million anonymized student conversations with its chatbots. It suggested that more than half of user interactions could be classified as “collaborative,” involving a dialogue between student and A.I. (Presumably, the rest of the interactions were more extractive.)
May, a sophomore at Georgetown, was initially resistant to using ChatGPT. “I don’t know if it was an ethics thing,” she said. “I just thought I could do the assignment better, and it wasn’t worth the time being saved.” But she began using it to proofread her essays, and then to generate cover letters, and now she uses it for “pretty much all” her classes. “I don’t think it’s made me a worse writer,” she said. “It’s perhaps made me a less patient writer. I used to spend hours writing essays, nitpicking over my wording, really thinking about how to phrase things.” College had made her reflect on her experience at an extremely competitive high school, where she had received top grades but retained very little knowledge. As a result, she was the rare student who found college somewhat relaxed. ChatGPT helped her breeze through busywork and deepen her engagement with the courses she felt passionate about. “I was trying to think, Where’s all this time going?” she said. I had never envied a college student until she told me the answer: “I sleep more now.”
Harry Stecopoulos oversees the University of Iowa’s English department, which has more than eight hundred majors. On the first day of his introductory course, he asks students to write by hand a two-hundred-word analysis of the opening paragraph of Ralph Ellison’s “Invisible Man.” There are always a few grumbles, and students have occasionally walked out. “I like the exercise as a tone-setter, because it stresses their writing,” he told me.
The return of blue-book exams might disadvantage students who were encouraged to master typing at a young age. Once you’ve grown accustomed to the smooth rhythms of typing, reverting to a pen and paper can feel stifling. But neuroscientists have found that the “embodied experience” of writing by hand taps into parts of the brain that typing does not. Being able to write one way—even if it’s more efficient—doesn’t make the other way obsolete. There’s something lofty about Stecopoulos’s opening-day exercise. But there’s another reason for it: the handwritten paragraph also begins a paper trail, attesting to voice and style, that a teaching assistant can consult if a suspicious paper is submitted.
Kevin, a third-year student at Syracuse University, recalled that, on the first day of a class, the professor had asked everyone to compose some thoughts by hand. “That brought a smile to my face,” Kevin said. “The other kids are scratching their necks and sweating, and I’m, like, This is kind of nice.”
Kevin had worked as a teaching assistant for a mandatory course that first-year students take to acclimate to campus life. Writing assignments involved basic questions about students’ backgrounds, he told me, but they often used A.I. anyway. “I was very disturbed,” he said. He occasionally uses A.I. to help with translations for his advanced Arabic course, but he’s come to look down on those who rely heavily on it. “They almost forget that they have the ability to think,” he said. Like many former holdouts, Kevin felt that his judicious use of A.I. was more defensible than his peers’ use of it.
As ChatGPT begins to sound more human, will we reconsider what it means to sound like ourselves? Kevin and some of his friends pride themselves on having an ear attuned to A.I.-generated text. The hallmarks, he said, include a preponderance of em dashes and a voice that feels blandly objective. An acquaintance had run an essay that she had written herself through a detector, because she worried that she was starting to phrase things like ChatGPT did. He read her essay: “I realized, like, It does kind of sound like ChatGPT. It was freaking me out a little bit.”
A particularly disarming aspect of ChatGPT is that, if you point out a mistake, it communicates in the backpedalling tone of a contrite student. (“Apologies for the earlier confusion. . . .”) Its mistakes are often referred to as hallucinations, a description that seems to anthropomorphize A.I., conjuring a vision of a sleep-deprived assistant. Some professors told me that they had students fact-check ChatGPT’s work, as a way of discussing the importance of original research and of showing the machine’s fallibility. Hallucination rates have grown worse for most A.I.s, with no single reason for the increase. As a researcher told the Times, “We still don’t know how these models work exactly.”
But many students claim to be unbothered by A.I.’s mistakes. They appear nonchalant about the question of achievement, and even dissociated from their work, since it is only notionally theirs. Joseph, a Division I athlete at a Big Ten school, told me that he saw no issue with using ChatGPT for his classes, but he did make one exception: he wanted to experience his African-literature course “authentically,” because it involved his heritage. Alex, the N.Y.U. student, said that if one of his A.I. papers received a subpar grade his disappointment would be focussed on the fact that he’d spent twenty dollars on his subscription. August, a sophomore at Columbia studying computer science, told me about a class where she was required to compose a short lecture on a topic of her choosing. “It was a class where everyone was guaranteed an A, so I just put it in and I maybe edited like two words and submitted it,” she said. Her professor identified her essay as exemplary work, and she was asked to read from it to a class of two hundred students. “I was a little nervous,” she said. But then she realized, “If they don’t like it, it wasn’t me who wrote it, you know?”
Kevin, by contrast, desired a more general kind of moral distinction. I asked if he would be bothered to receive a lower grade on an essay than a classmate who’d used ChatGPT. “Part of me is able to compartmentalize and not be pissed about it,” he said. “I developed myself as a human. I can have a superiority complex about it. I learned more.” He smiled. But then he continued, “Part of me can also be, like, This is so unfair. I would have loved to hang out with my friends more. What did I gain? I made my life harder for all that time.”
In my conversations, just as college students invariably thought of ChatGPT as merely another tool, people older than forty focussed on its effects, drawing a comparison to G.P.S. and the erosion of our relationship to space. The London cabdrivers rigorously trained in “the knowledge” famously developed abnormally large posterior hippocampi, the part of the brain crucial for long-term memory and spatial awareness. And yet, in the end, most people would probably rather have swifter travel than sharper memories. What is worth preserving, and what do we feel comfortable off-loading in the name of efficiency?
What if we take seriously the idea that A.I. assistance can accelerate learning—that students today are arriving at their destinations faster? In 2023, researchers at Harvard introduced a self-paced A.I. tutor in a popular physics course. Students who used the A.I. tutor reported higher levels of engagement and motivation and did better on a test than those who were learning from a professor. May, the Georgetown student, told me that she often has ChatGPT produce extra practice questions when she’s studying for a test. Could A.I. be here not to destroy education but to revolutionize it? Barry Lam teaches in the philosophy department at the University of California, Riverside, and hosts a popular podcast, Hi-Phi Nation, which applies philosophical modes of inquiry to everyday topics. He began wondering what it would mean for A.I. to actually be a productivity tool. He spoke to me from the podcast studio he built in his shed. “Now students are able to generate in thirty seconds what used to take me a week,” he said. He compared education to carpentry, one of his many hobbies. Could you skip to using power tools without learning how to saw by hand? If students were learning things faster, then it stood to reason that Lam could assign them “something very hard.” He wanted to test this theory, so for final exams he gave his undergraduates a Ph.D.-level question involving denotative language and the German logician Gottlob Frege which was, frankly, beyond me.
“They fucking failed it miserably,” he said. He adjusted his grading curve accordingly.
Lam doesn’t find the use of A.I. morally indefensible. “It’s not plagiarism in the cut-and-paste sense,” he argued, because there’s technically no original version. Rather, he finds it a potential waste of everyone’s time. At the start of the semester, he has told students, “If you’re gonna just turn in a paper that’s ChatGPT-generated, then I will grade all your work by ChatGPT and we can all go to the beach.”
Nobody gets into teaching because he loves grading papers. I talked to one professor who rhapsodized about how much more his students were learning now that he’d replaced essays with short exams. I asked if he missed marking up essays. He laughed and said, “No comment.” An undergraduate at Northeastern University recently accused a professor of using A.I. to create course materials; she filed a formal complaint with the school, requesting a refund for some of her tuition. The dustup laid bare the tension between why many people go to college and why professors teach. Students are raised to understand achievement as something discrete and measurable, but when they arrive at college there are people like me, imploring them to wrestle with difficulty and abstraction. Worse yet, they are told that grades don’t matter as much as they did when they were trying to get into college—only, by this point, students are wired to find the most efficient path possible to good marks.
As the craft of writing is degraded by A.I., original writing has become a valuable resource for training language models. Earlier this year, a company called Catalyst Research Alliance advertised “academic speech data and student papers” from two research studies run in the late nineties and mid-two-thousands at the University of Michigan. The school asked the company to halt its work—the data was available for free to academics anyway—and a university spokesperson said that student data “was not and has never been for sale.” But the situation did lead many people to wonder whether institutions would begin viewing original student work as a potential revenue stream.
According to a recent study from the Organisation for Economic Co-operation and Development, human intellect has declined since 2012. An assessment of tens of thousands of adults in nearly thirty countries showed an over-all decade-long drop in test scores for math and for reading comprehension. Andreas Schleicher, the director for education and skills at the O.E.C.D., hypothesized that the way we consume information today—often through short social-media posts—has something to do with the decline in literacy. (One of Europe’s top performers in the assessment was Estonia, which recently announced that it will bring A.I. to some high-school students in the next few years, sidelining written essays and rote homework exercises in favor of self-directed learning and oral exams.)
Lam, the philosophy professor, used to be a colleague of mine, and for a brief time we were also neighbors. I’d occasionally look out the window and see him building a fence, or gardening. He’s an avid amateur cook, guitarist, and carpenter, and he remains convinced that there is value to learning how to do things the annoying, old-fashioned, and—as he puts it—“artisanal” way. He told me that his wife, Shanna Andrawis, who has been a high-school teacher since 2008, frequently disagreed with his cavalier methods for dealing with large language models. Andrawis argues that dishonesty has always been an issue. “We are trying to mass educate,” she said, meaning there’s less room to be precious about the pedagogical process. “I don’t have conversations with students about ‘artisanal’ writing. But I have conversations with them about our relationship. Respect me enough to give me your authentic voice, even if you don’t think it’s that great. It’s O.K. I want to meet you where you’re at.”
Ultimately, Andrawis was less fearful of ChatGPT than of the broader conditions of being young these days. Her students have grown increasingly introverted, staring at their phones with little desire to “practice getting over that awkwardness” that defines teen life, as she put it. A.I. might contribute to this deterioration, but it isn’t solely to blame. It’s “a little cherry on top of an already really bad ice-cream sundae,” she said.
When the school year began, my feelings about ChatGPT were somewhere between disappointment and disdain, focussed mainly on students. But, as the weeks went by, my sense of what should be done and who was at fault grew hazier. Eliminating core requirements, rethinking G.P.A., teaching A.I. skepticism—none of the potential fixes could turn back the preconditions of American youth. Professors can reconceive of the classroom, but there is only so much we control. I lacked faith that educational institutions would ever regard new technologies as anything but inevitable. Colleges and universities, many of which had tried to curb A.I. use just a few semesters ago, rushed to partner with companies like OpenAI and Anthropic, deeming a product that didn’t exist four years ago essential to the future of school.
Except for a year spent bumming around my home town, I’ve basically been on a campus for the past thirty years. Students these days approach college as consumers, in ways that never would have occurred to me when I was their age. They’ve grown up at a time when society values high-speed takes, not the slow deliberation of critical thinking. Although I’ve empathized with my students’ various mini-dramas, I rarely project myself into their lives. I notice them noticing one another, and I let the mysteries of their lives go. Their pressures are so different from the ones I felt as a student. Although I envy their metabolisms, I would not wish for their sense of horizons.
Education, particularly in the humanities, rests on a belief that, alongside the practical things students might retain, some arcane idea mentioned in passing might take root in their mind, blossoming years in the future. A.I. allows any of us to feel like an expert, but it is risk, doubt, and failure that make us human. I often tell my students that this is the last time in their lives that someone will have to read something they write, so they might as well tell me what they actually think.
Despite all the current hysteria around students cheating, they aren’t the ones to blame. They did not lobby for the introduction of laptops when they were in elementary school, and it’s not their fault that they had to go to school on Zoom during the pandemic. They didn’t create the A.I. tools, nor were they at the forefront of hyping technological innovation. They were just early adopters, trying to outwit the system at a time when doing so has never been so easy. And they have no more control than the rest of us. Perhaps they sense this powerlessness even more acutely than I do. One moment, they are being told to learn to code; the next, it turns out employers are looking for the kind of “soft skills” one might learn as an English or a philosophy major. In February, a labor report from the Federal Reserve Bank of New York reported that computer-science majors had a higher unemployment rate than ethnic-studies majors did—the result, some believed, of A.I. automating entry-level coding jobs.
None of the students I spoke with seemed lazy or passive. Alex and Eugene, the N.Y.U. students, worked hard—but part of their effort went to editing out anything in their college experiences that felt extraneous. They were radically resourceful.
When classes were over and students were moving into their summer housing, I e-mailed with Alex, who was settling in in the East Village. He’d just finished his finals, and estimated that he’d spent between thirty minutes and an hour composing two papers for his humanities classes. Without the assistance of Claude, it might have taken him around eight or nine hours. “I didn’t retain anything,” he wrote. “I couldn’t tell you the thesis for either paper hahhahaha.” He received an A-minus and a B-plus.
A scholarly bibliography of Design Fiction, AI and the news, 2025
ACKNOWLEDGMENTS
The work was funded by the Helsingin Sanomat Foundation and the Kone Foundation. We thank all the ideation workshop participants.
REFERENCES
[1] Naseem Ahmadpour, Sonja Pedell, Angeline Mayasari, and Jeanie Beh. 2019. Co-creating and Assessing Future Wellbeing Technology Using Design Fiction. She Ji 5, 3 (2019), 209–230. DOI:https://doi.org/10.1016/j.sheji.2019.08.003
[2] ArtefactGroup. The Tarot Cards of Tech. Retrieved August 10, 2024 from https://tarotcardsoftech.artefactgroup.com
[3] Reuben Binns. 2018. Algorithmic Accountability and Public Reason. Philos. Technol. 31, 4 (December 2018), 543–556. DOI:https://doi.org/10.1007/S13347-017-0263-5
[4] Julian Bleecker. 2009. Design Fiction: A Short Essay on Design, Science, Fact and Fiction. Retrieved January 9, 2020 from http://drbfw5wfjlxon.cloudfront.net/writing/DesignFiction_WebEdition.pdf
[5] Julian Bleecker, Nick Foster, Fabien Girardin, and Nicolas Nova. 2022. The Manual of Design Fiction.
[6] Mark Blythe. 2014. Research through design fiction: Narrative in real and imaginary abstracts. Conf. Hum. Factors Comput. Syst. - Proc. (2014), 703–712. DOI:https://doi.org/10.1145/2556288.2557098
[7] Mark Blythe and Enrique Encinas. 2016. The co-ordinates of design fiction: Extrapolation, irony, ambiguity and magic. Proc. Int. ACM Siggr. Conf. Support. Gr. Work (2016), 345–354. DOI:https://doi.org/10.1145/2957276.2957299
[8] J. Broekens, M. Heerink, and H. Rosendal. 2009. Assistive social robots in elderly care: a review. Gerontechnology 8, 2 (2009). DOI:https://doi.org/10.4017/gt.2009.08.02.002.00
[9] Kevin Matthe Caramancion. 2023. News Verifiers Showdown: A Comparative Performance Evaluation of ChatGPT 3.5, ChatGPT 4.0, Bing AI, and Bard in News Fact-Checking. Proc. - 2023 IEEE Futur. Networks World Forum, FNWF 2023 (2023). DOI:https://doi.org/10.1109/FNWF58287.2023.10520446
[10] Mia Carbone, Stuart Soroka, and Johanna Dunaway. 2024. The Psychophysiology of News Avoidance: Does Negative Affect Drive Both Attention and Inattention to News? Journal. Stud. (September 2024). DOI:https://doi.org/10.1080/1461670X.2024.2310672
[11] John M. Carroll. 1997. Human–computer interaction: psychology as a science of design. Int. J. Hum. Comput. Stud. 46, 4 (April 1997), 501–522. DOI:https://doi.org/10.1006/IJHC.1996.0101
[12] Mark Chignell, Lu Wang, Atefeh Zare, and Jamy Li. 2023. The Evolution of HCI and Human Factors: Integrating Human and Artificial Intelligence. ACM Trans. Comput. Interact. 30, 2 (March 2023). DOI:https://doi.org/10.1145/3557891
[13] Justin Clark, Robert Faris, Urs Gasser, Adam Holland, Hilary Ross, and Casey Tilton. 2019. Content and Conduct: How English Wikipedia Moderates Harmful Speech. Retrieved September 11, 2024 from https://papers.ssrn.com/abstract=3489176
[14] Marios Constantinides, John Dowell, David Johnson, and Sylvain Malacria. 2015. Exploring mobile news reading interactions for news app personalisation. MobileHCI 2015 - Proc. 17th Int. Conf. Human-Computer Interact. with Mob. Devices Serv. (August 2015), 457–462. DOI:https://doi.org/10.1145/2785830.2785860
[15] Henry Kudzanai Dambanemuya and Nicholas Diakopoulos. 2021. Auditing the Information Quality of News-Related Queries on the Alexa Voice Assistant. Proc. ACM Human-Computer Interact. 5, CSCW1 (April 2021). DOI:https://doi.org/10.1145/3449157
[16] Nicholas Diakopoulos. 2019. Automating the news: how algorithms are rewriting the media. (2019), 326.
[17] Carl DiSalvo. 2012. Adversarial design as inquiry and practice. MIT Press.
[18] Abraham Doris-Down, Husayn Versee, and Eric Gilbert. 2013. Political blend: An application designed to bring people together based on political differences. ACM Int. Conf. Proceeding Ser. (2013), 120–130. DOI:https://doi.org/10.1145/2482991.2483002
[19] Konstantin Nicholas Dörr. 2016. Mapping the field of Algorithmic Journalism. Digit. Journal. 4, 6 (2016), 700–722. DOI:https://doi.org/10.1080/21670811.2015.1096748
[20] Konstantin Nicholas Dörr and Katharina Hollnbuchner. 2017. Ethical Challenges of Algorithmic Journalism. Digit. Journal. 5, 4 (April 2017), 404–419. DOI:https://doi.org/10.1080/21670811.2016.1167612
[21] Tomislav Duricic, Dominik Kowald, Emanuel Lacic, and Elisabeth Lex. 2023. Beyond-accuracy: a review on diversity, serendipity, and fairness in recommender systems based on graph neural networks. Front. Big Data 6 (December 2023), 1251072. DOI:https://doi.org/10.3389/FDATA.2023.1251072
[22] Seth Flaxman, Sharad Goel, Justin M. Rao, David Blei, Ceren Budak, Susan Dumais, Andrew Gelman, Dan Goldstein, Matt Salganik, Tim Wu, and Georgios Zervas. 2016. Filter Bubbles, Echo Chambers, and Online News Consumption. Public Opin. Q. 80, S1 (January 2016), 298–320. DOI:https://doi.org/10.1093/POQ/NFW006
[23] Richard Fletcher and R. Nielsen. 2024. What does the public in six countries think of generative AI in news?
[24] Terry Flew, Christina Spurgeon, Anna Daniel, and Adam Swift. 2012. The Promise of Computational Journalism. Journal. Pract. 6, 2 (2012), 157–171. DOI:https://doi.org/10.1080/17512786.2011.616655
[25] Julian De Freitas, Stuti Agarwal, Bernd Schmitt, and Nick Haslam. 2023. Psychological factors underlying attitudes toward AI tools. Nat. Hum. Behav. 7, 11 (November 2023), 1845–1854. DOI:https://doi.org/10.1038/s41562-023-01734-2
[26] Fiona Fui-Hoon Nah, Ruilin Zheng, Jingyuan Cai, Keng Siau, and Langtao Chen. 2023. Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. J. Inf. Technol. Case Appl. Res. 25, 3 (July 2023), 277–304. DOI:https://doi.org/10.1080/15228053.2023.2233814
[27] Fuse. 2024. Fuse - Personalized News. Retrieved August 10, 2024 from https://pageone.livesemantics.com/
[28] …hary Kenton, Mikel Rodriguez, Seliem El-Sayed, Sasha Brown, Canfer Akbulut, Andrew Trask, Edward Hughes, et al. 2024. The Ethics of Advanced AI Assistants. (April 2024). Retrieved September 9, 2024 from https://arxiv.org/abs/2404.16244v2
[29] Mingqi Gao, Xinyu Hu, Jie Ruan, Xiao Pu, and Xiaojun Wan. 2024. LLM-based NLG Evaluation: Current Status and Challenges. (February 2024). Retrieved September 8, 2024 from https://arxiv.org/abs/2402.01383v2
[30] William W. Gaver, Peter Gall Krogh, and Andy Boucher. 2022. Emergence as a Feature of Practice-based Design Research. In Designing Interactive …, 517–526.
[31] Sabine Geers. 2020. News Consumption across Media Platforms and Content: A Typology of Young News Users. Public Opin. Q. 84, S1 (August 2020), 332–354. DOI:https://doi.org/10.1093/POQ/NFAA010
[32] Nicole Gillespie, Steven Lockey, Caitlin Curtis, Javad Pool, and Ali Akbari. 2023. Trust in Artificial Intelligence: Meta-Analytic Findings. Univ. Queensl. KPMG Aust. 10 (2023). DOI:https://doi.org/10.14264/00d3c94
[33] GroundNews. 2024. Ground News. Retrieved August 3, 2024 from https://ground.news/about
[34] Michael M. Grynbaum and Ryan Mac. 2023. The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work. The New York Times. Retrieved January 15, 2024 from https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit
[35] Derek Hales. 2013. Design fictions: an introduction and provisional taxonomy. Digit. Creat. 24, 1 (March 2013), 1–10. DOI:https://doi.org/10.1080/14626268.2013.769453
[36] Vikas Hassija, Vinay Chamola, Atmesh Mahapatra, Abhinandan Singal, Divyansh Goel, Kaizhu Huang, Simone Scardapane, Indro Spinelli, Mufti Mahmud, and Amir Hussain. 2024. Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence. Cognit. Comput. 16, 1 (January 2024), 45–74. DOI:https://doi.org/10.1007/S12559-023-10179-8
[37] Michael Townsen Hicks, James Humphries, and Joe Slater. 2024. ChatGPT is bullshit. Ethics Inf. Technol. 26, 2 (June 2024), 1–10. DOI:https://doi.org/10.1007/S10676-024-09775-5
[38] Lennart Hofeditz, Milad Mirbabaie, Jasmin Holstein, and Stefan Stieglitz. 2021. Do You Trust an AI-Journalist? A Credibility Analysis of News Content with AI-Authorship. ECIS (2021), 6–14. Retrieved from https://aisel.aisnet.org/ecis2021_rp/50
[39] Naja Holten Møller, Trine Rask Nielsen, and Christopher Le Dantec. 2021. Work of the Unemployed. DIS 2021 - Proc. 2021 ACM Des. Interact. Syst. Conf. (June 2021), 438–448. DOI:https://doi.org/10.1145/3461778.3462003
[40] Avery E. Holton and Hsiang Iris Chyi. 2012. News and the Overloaded Consumer: Factors Influencing Information Overload Among News Consumers. Cyberpsychol. Behav. Soc. Netw. 15, 11 (November 2012), 619–624. DOI:https://doi.org/10.1089/CYBER.2011.0610
[41] Chenyan Jia, Martin J. Riedl, and Samuel Woolley. 2024. Promises and Perils of Automated Journalism: Algorithms, Experimentat… Journal. Stud. 25, 1 (January 2024), 38–57. DOI:https://doi.org/10.1080/1461670X.2023.2289881
[42] Sangyeon Kim, Insil Huh, and Sangwon Lee. 2022. No Movie to Watch: A Design Strategy for Enhancing Content Diversity through Social Recommendation in the Subscription-Video-On-Demand Service. Appl. Sci. 13, 1 (December 2022), 279. DOI:https://doi.org/10.3390/APP13010279
[43] Joel Kiskola, Thomas Olsson, Heli Väätäjä, Aleksi H. Syrjämäki, Anna Rantasila, Poika Isokoski, Mirja Ilves, and Veikko Surakka. 2021. Applying critical voice in design of user interfaces for supporting self-reflection and emotion regulation in online news commenting. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery. DOI:https://doi.org/10.1145/3411764.3445783
[44] …n with Black Mirror. SIGCSE 2022 - Proc. 53rd ACM Tech. Symp. Comput. Sci. Educ. 1 (February 2022), 836–842. DOI:https://doi.org/10.1145/3478431.3499308
[45] Tomoko Komatsu, Marisela Gutierrez Lopez, Stephann Makri, Colin Porlezza, Glenda Cooper, Andrew MacFarlane, and Sondess Missaoui. 2020. AI should embody our values: Investigating journalistic values to inform AI technology design. ACM Int. Conf. Proceeding Ser. (October 2020). DOI:https://doi.org/10.1145/3419249.3420105
[46] Peter Gall Krogh, Thomas Markussen, and Anne Louise Bang. 2015. Ways of drifting: Five methods of experimentation in research through design. Smart Innov. Syst. Technol. 34 (2015), 39–50. DOI:https://doi.org/10.1007/978-81-322-2232-3_4
[47] Shaun Lawson, Ben Kirman, Conor Linehan, Tom Feltwell, and Lisa Hopkins. 2015. Problematising Upstream Technology through Speculative Design: The Case of Quantified Cats and Dogs. DOI:https://doi.org/10.1145/2702123.2702260
[48] Hao Ping Lee, Yu Ju Yang, Thomas Serban von Davier, Jodi Forlizzi, and Sauvik Das. 2024. Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks. Conf. Hum. Factors Comput. Syst. - Proc. (May 2024). DOI:https://doi.org/10.1145/3613904.3642116
[49] Sunok Lee, Minha Lee, and Sangsu Lee. 2023. What If Artificial Intelligence Become Completely Ambient in Our Daily Lives? Exploring Future Human-AI Interaction through High Fidelity Illustrations. Int. J. Hum. Comput. Interact. 39, 7 (2023), 1371–1389. DOI:https://doi.org/10.1080/10447318.2022.2080155
[50] …rceptions of Generative Artificial Intelligence. Conf. Hum. Factors Comput. Syst. - Proc. 18, 24 (May 2024). DOI:https://doi.org/10.1145/3613904.3642114
[51] Sixian Li, Alessandro M. Peluso, and Jinyun Duan. 2023. Why do we prefer humans to artificial intelligence in telemarketing? A mind perception explanation. J. Retail. Consum. Serv. 70 (January 2023), 103139. DOI:https://doi.org/10.1016/J.JRETCONSER.2022.103139
[52] Joseph Lindley and Paul Coulton. 2015. Back to the future: 10 years of design fiction. ACM Int. Conf. Proceeding Ser. (2015), 210–211. DOI:https://doi.org/10.1145/2783446.2783592
[53] …Comput. Hum. Behav. Artif. Humans 2, 1 (January 2024), 100054. DOI:https://doi.org/10.1016/J.CHBAH.2024.100054
[54] Listen2.AI. 2024. Listen2.AI. Retrieved August 7, 2024 from https://listen2.ai/
[55] Andrés Lucero and Juha Arrasvuori. 2010. PLEX Cards: A source of inspiration when designing for playfulness. ACM Int. Conf. Proceeding Ser. (2010), 28–37. DOI:https://doi.org/10.1145/1823818.1823821
[56] Thomas Markussen and Eva Knutz. 2013. The poetics of design fiction. Proc. 6th Int. Conf. Des. Pleasurable Prod. Interfaces, DPPI 2013 (2013), 231–240. DOI:https://doi.org/10.1145/2513506.2513531
[57] Suvodeep Misra, Debayan Dhar, and Sukumar Nandi. 2023. Design Fiction: A Way to Foresee the Future of Human-Computer Interaction Design Challenges. Smart Innov. Syst. Technol. 343 (2023), 809–822. DOI:https://doi.org/10.1007/978-981-99-0293-4_65
[58] Rachel E. Moran and Sonia Jawaid Shaikh. 2022. Robots in the News and Newsrooms: Unpacking Meta-Journalistic Discourse on the Use of Artificial Intelligence in Journalism. Digit. Journal. 10, 10 (November 2022), 1756–1774. DOI:https://doi.org/10.1080/21670811.2022.2085129
[59] Victoria Moreno-Gil, Xavier Ramon-Vegas, Ruth Rodríguez-Martínez, and Marcel Mauri-Ríos. 2023. Explanatory Journalism within European Fact-Checking Platforms: An Ally against Disinformation in the Post-COVID-19 Era. Soc. 13, 11 (November 2023), 237. DOI:https://doi.org/10.3390/SOC13110237
[60] Sean A. Munson, Stephanie Y. Lee, and Paul Resnick. 2013. Encouraging Reading of Diverse Political Viewpoints with a Browser Widget. Proc. Int. AAAI Conf. Web Soc. Media 7, 1 (2013), 419–428. DOI:https://doi.org/10.1609/ICWSM.V7I1.14429
[61] Kevin P. Murphy. 2023. Probabilistic machine learning: Advanced topics. MIT Press.
[62] Nic Newman, Richard Fletcher, Craig T. Robertson, A. Ross Arguedas, and Rasmus Kleis Nielsen. 2024. Reuters Institute digital news report 2024.
[63] Safiya Umoja Noble. 2020. Algorithms of Oppression. (December 2020). DOI:https://doi.org/10.18574/NYU/9781479833641.001.0001
[64] Donald Norman. 2024. Design for a Better World: Meaningful, Sustainable, Humanity Centered. MIT Press.
[65] …treme Right and Online Recommender Systems. Soc. Sci. Comput. Rev. 33, 4 (August 2015), 459–478. DOI:https://doi.org/10.1177/0894439314555329
[66] Andreas L. Opdahl, Bjørnar Tessem, Duc Tien Dang-Nguyen, Enrico Motta, Vinay Setty, Eivind Throndsen, Are Tverberg, and Christoph Trattner. 2023. Trustworthy journalism through AI. Data Knowl. Eng. 146 (2023), 102182. DOI:https://doi.org/10.1016/j.datak.2023.102182
[67] Sharon Oviatt. 2006. Human-centered design meets cognitive load theory: Designing interfaces that help people think. Proc. 14th Annu. ACM Int. Conf. Multimedia, MM 2006 (2006), 871–880. DOI:https://doi.org/10.1145/1180639.1180831
[68] Ozlem Ozmen Garibay, Brent Winslow, Salvatore Andolina, Margherita Antona, Anja Bodenschatz, Constantinos Coursaris, Gregory Falco, Stephen M. Fiore, Ivan Garibay, Keri Grieman, John C. Havens, Marina Jirotka, Hernisa Kacorri, Waldemar Karwowski, Joe Kider, Joseph Konstan, Sean Koon, Monica Lopez-Gonzalez, Iliana Maifeld-Carucci, Sean McGregor, Gavriel Salvendy, Ben Shneiderman, Constantine Stephanidis, Christina Strobel, Carolyn Ten Holter, and Wei Xu. 2023. Six Human-Centered Artificial Intelligence Grand Challenges. Int. J. Human-Computer Interact. 39, 3 (2023), 391–437. DOI:https://doi.org/10.1080/10447318.2022.2153320
[69] Sumit Pahwa and Nusrat Khan. 2022. Factors Affecting Emotional Resilience in Adults. Manag. Labour Stud. 47, 2 (May 2022), 216–232. DOI:https://doi.org/10.1177/0258042X211072935
[70] Rock Yuren Pang, Sebastin Santy, René Just, and Katharina Reinecke. 2024. BLIP: Facilitating the Exploration of Undesirable Consequences of Digital Technologies. Conf. Hum. Factors Comput. Syst. - Proc. (May 2024). DOI:https://doi.org/10.1145/3613904.3642054
[71]
[72] Jonathan Perry. 2021. Trust in Public Institutions: Trends and Implications for Economic Security. United Nations Dep. Econ. Soc. Aff. (July 2021). DOI:https://doi.org/10.18356/27081990-108
[73] James Pierce. 2021. In tension with progression: Grasping the frictional tendencies of speculative, critical, and other alternative designs. In Conference on Human Factors in Computing Systems - Proceedings, Association for Computing Machinery. DOI:https://doi.org/10.1145/3411764.3445406
[74] Amanda Ramsälv, Mats Ekström, and Oscar Westlund. 2023. The epistemologies of data journalism. (January 2023). DOI:https://doi.org/10.1177/14614448221150439
[75] Jeba Rezwana and Mary Lou Maher. 2023. User Perspectives on Ethical Challenges in Human-AI Co-Creativity: A Design Fiction Study. ACM Int. Conf. Proceeding Ser. (2023), 62–74. DOI:https://doi.org/10.1145/3591196.3593364
[76] Francesco Ricci, Lior Rokach, and Bracha Shapira. 2022. Recommender Systems Handbook: Third Edition. (January 2022), 1–1060. DOI:https://doi.org/10.1007/978-1-0716-2197-4
[77] Ronda Ringfort-Felner, Robin Neuhaus, Judith Dörrenbächer, Sabrina Großkopp, Dimitra Theofanou-fuelbier, and Marc Hassenzahl. 2023. Design Fiction in a Corporate Setting: a Case Study. (2023), 2093–2108. DOI:https://doi.org/10.1145/3563657.3596126
[78] Francisco Javier Rodrigo-Ginés, Jorge Carrillo-de-Albornoz, and Laura Plaza. 2024. A systematic review on media bias detection: What is media bias, how it is expressed, and how to detect it. Expert Syst. Appl. 237 (March 2024), 121641. DOI:https://doi.org/10.1016/J.ESWA.2023.121641
[79] Lambèr Royakkers, Jelte Timmer, Linda Kool, and Rinie van Est. 2018. Societal and ethical issues of digitization. Ethics Inf. Technol. 20, 2 (June 2018), 127–142. DOI:https://doi.org/10.1007/S10676-018-9452-X
[80] Alan M. Rubin, Elizabeth M. Perse, and Robert A. Powell. 1985. Loneliness, Parasocial Interaction, And Local Television News Viewing. Hum. Commun. Res. 12, 2 (December 1985), 155–180. DOI:https://doi.org/10.1111/J.1468-2958.1985.TB00071.X
[81] Henrik Rydenfelt. 2022. Transforming media agency? Approaches to automation in Finnish legacy media. New Media Soc. 24, 12 (March 2022), 2598–2613. DOI:https://doi.org/10.1177/1461444821998705
[82] Henrik Rydenfelt, Lauri Haapanen, Jesse Haapoja, and Tuukka Lehtiniemi. 2024. Personalisation in Journalism: Ethical insights and blindspots in Finnish legacy media. Journalism 25, 2 (November 2024), 313–333. DOI:https://doi.org/10.1177/14648849221138424
[83] Henrik Rydenfelt, Tuukka Lehtiniemi, Jesse Haapoja, and Lauri Haapanen. 2025. Autonomy and Algorithms: Tracing the Significance of Content Personalization. Int. J. Commun. 19 (January 2025), 20. Retrieved January 27, 2025 from https://ijoc.org/index.php/ijoc/article/view/23474
[84] Aljosha Karim Schapals, Colin Porlezza, and Rodrigo Zamith. 2020. Assistance or Resistance? Evaluating the Intersection of Automated Journalism and Journalistic Role Conceptions. Media Commun. 8, 3 (July 2020), 16–26. DOI:https://doi.org/10.17645/MAC.V8I3.3054
[85] Jordan Richard Schoenherr, Roba Abbas, Katina Michael, Pablo Rivas, and Theresa Dirndorfer Anderson. 2023. Designing AI Using a Human-Centered Approach: Explainability and Accuracy Toward Trustworthiness. IEEE Trans. Technol. Soc. 4, 1 (March 2023), 9–23. DOI:https://doi.org/10.1109/TTS.2023.3257627
[86] Rifat Ara Shams, Didar Zowghi, and Muneera Bano. 2023. AI and the quest for diversity and inclusion: a systematic literature review. AI Ethics (November 2023), 1–28. DOI:https://doi.org/10.1007/S43681-023-00362-W
[87] Donghee Shin and Shuhua Zhou. 2024. A Value and Diversity-Aware News Recommendation Systems: Can Algorithmic Gatekeeping Nudge Readers to View Diverse News? Journal. Mass Commun. Q. (June 2024). DOI:https://doi.org/10.1177/10776990241246680
[88] Felix M. Simon. 2024. Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena. Columbia Journalism Review. Retrieved August 29, 2024 from https://www.cjr.org/tow_center_reports/artificial-intelligence-in-the-news.php
[89] Marie Louise Juul Søndergaard and Lone Koefoed Hansen. 2018. Intimate futures: Staying with the trouble of digital personal assistants through design fiction. DIS 2018 - Proc. 2018 Des. Interact. Syst. Conf. (June 2018), 869–880. DOI:https://doi.org/10.1145/3196709.3196766
[90] Catherine Sotirakou and Constantinos Mourlas. 2016. A Gamified News Application for Mobile Devices: An Approach that Turns Digital News Readers into Players of a Social Network. Lect. Notes Comput. Sci. 9599 (2016), 480–493. DOI:https://doi.org/10.1007/978-3-319-40216-1_53
[91] Bruce Sterling. 2005. Shaping Things. MIT Press.
[92] Miriam Sturdee, Paul Coulton, Joseph G. Lindley, Mike Stead, Haider Ali Akmal, and Andy Hudson-Smith. 2016. Design fiction: How to build a voight-kampff machine. Conf. Hum. Factors Comput. Syst. - Proc. (May 2016), 375–385. DOI:https://doi.org/10.1145/2851581.2892574
[93] Edson C. Tandoc and Soo Kwang Oh. 2017. Small Departures, Big Continuities? Journal. Stud. 18, 8 (August 2017), 997–1015. DOI:https://doi.org/10.1080/1461670X.2015.1104260
[94] Neil Thurman, Seth C. Lewis, and Jessica Kunert. 2019. Algorithms, Automation, and News. Digit. Journal. 7, 8 (2019), 980–992. DOI:https://doi.org/10.1080/21670811.2019.1685395
[95] Tamás Tóth, Manuel Goyanes, Márton Demeter, and Francisco Campos-Freire. 2022. Social Implications of Paywalls in a Polarized Society: … Stud. Big Data 97 (2022), 169–179. DOI:https://doi.org/10.1007/978-3-030-88028-6_13
[96] Tommaso Turchi, Alessio Malizia, and Simone Borsci. 2024. Reflecting on Algorithmic Bias With Design Fiction: The MiniCoDe Workshops. IEEE Intell. Syst. 39, 2 (March 2024), 40–50. DOI:https://doi.org/10.1109/MIS.2024.3352977
[97] …Antisocial Media. Retrieved September 11, 2024 from https://books.google.com/books/about/Antisocial_Media.html?id=h05WDwAAQBAJ
[98] Stephen J. Ward. 2019. Journalism ethics. In The handbook of journalism studies. Taylor & Francis, 307–323.
[99] Stephen John Anthony Ward. 2015. The invention of journalism ethics: The path to objectivity and beyond. McGill-Queen's University Press (MQUP).
[100] …mated journalism. Journalism 22, 1 (January 2021), 86–103. DOI:https://doi.org/10.1177/1464884918757072
[101] Richmond Y. Wong, Deirdre K. Mulligan, Ellen Van Wyk, James Pierce, and John Chuang. 2017. Eliciting values reflections by engaging privacy futures using design workbooks. Proc. ACM Human-Computer Interact. 1, CSCW (2017). DOI:https://doi.org/10.1145/3134746
[102] Richmond Y. Wong and Vera Khovanskaya. 2018. Speculative Design in HCI: From Corporate Imaginations to Critical Orientations. Comput. Interact. 2 (2018). DOI:https://doi.org/10.1007/978-3-319-73374-6_10
[103] Nan Yu and Jun Kong. 2016. User experience with web browsing on small screens: Experimental investigations of mobile-page interface design and homepage design for news websites. Inf. Sci. (Ny). 330 (February 2016), 427–443. DOI:https://doi.org/10.1016/J.INS.2015.06.004
[104] Mi Zhou, Vibhanshu Abhishek, Timothy Derdenger, Jaymo Kim, and Kannan Srinivasan. 2024. Bias in Generative AI. (March 2024). Retrieved January 23, 2025 from https://arxiv.org/abs/2403.02726v1
[105] John Zimmerman and Jodi Forlizzi. 2014. Research through design in HCI. In Ways of Knowing in HCI, Judith S. Olson and Wendy A. Kellogg (eds.). Springer New York, New York, 167–189. DOI:https://doi.org/10.1007/978-1-4939-0378-8_8
Like I'd say I miss English class bcos it was a dedicated space where they'd sit me down and I HAD to write a short story (or a personal essay but fuck thoseeeeeee) but also I notably despised English class. 'Lollll tumblr hated English class and now they have no reading comprehension' I wrote an essay about how the exam structures and emphasis on regurgitating templated answers stifle genuine understanding of the material and my pivotal metaphor was that of artificial intelligence as the platonic ideal of the student as it quickly recycles past information in textbook format and my teacher was like damn wow so true. And then a year later she showed us chatgpt and said 'you should base your personal essays off of this' and it was a struggle not to leave the room and walk into the ocean
#'youve told the chatgpt story before' WELL ITS STUCK IN MY NOGGIN!!!!#the assigned reading was room by emma donoghue because the teacher said it would be easiest and quickest#i had to fight to independently study dorian grey and answer on that instead#english curriculum is a bunch of bollocks. told the chatgpt story in my leaving cert bcos fuck em
Hot for AI
“Out of all the essays I’ve received from you [on turnitin.com],” said my AP English Language teacher, his face hidden behind a black monitor, “five of them have come up hot for AI.”
I’m a daydreamer in class. Hot for AI sounded like a science fiction/romance novel I’d be interested in reading.
‘No, dear!’ (I’m thinking something like ‘Roxanne AI’ is the name of a dancer in a 2025 themed club in Futureville, that the looks-too-good-for-thirty main character has fallen in love with, even though he’s chained to his old-timey dumb as rocks human wife), ‘You have to understand, this tech dame, materialized in our bedroom– she wanted something from me and I wanted something from her.’
But no, my teacher was talking about us using Artificial Intelligence to write our fifteen page semester papers. The process it took me to even start my paper is a subject for another essay, but yes, I did use AI for the rough draft, only to turn the thing in late because I missed the deadline. Good news, though! He didn’t detect AI in my paper because the rough drafts were to be turned in printed out, not on the website with the AI checker.
Bad news, though. My fuzzy teenage mind didn’t remember that we had to submit to turnitin.com for our final drafts. Uh-oh. Not only was my assignment late when I showed up to class with my edited physical copy that the teacher wouldn’t accept, but he was detecting AI in other students’ papers. I’d have to change everything that was AI generated so turnitin.com wouldn’t sound the alarms on my essay.
I looked through my rejected physical copy in class that day, trying to remember what was my own and what I had asked ChatGPT to write, and what AI sentences I had edited after turning in the AI rough draft. I didn't ask the robot to generate the whole essay for me, or even to generate individual paragraphs. I asked it to help me list all the rhetorical strategies that George Orwell used in Politics and the English Language because it was 3:00 AM, I couldn't think of anything else, and I needed an extra three pages. But they weren't my ideas, and that's all that mattered. I made sure I agreed with what the AI said and could back it up with quotations, so everything became ideas I could summarize, rephrase, and understand, but too much of it still wasn't mine when it was generated, and when I typed it out on the page.
I know I don’t sound like a good student to you right now. And I’m really not. I am not a good student. I hate my school. I have zero school spirit. I cannot wait to get the hell out of here. There is too much work, and no I don’t do it by choice. I take the classes I take because of my parents and my siblings, none of this is my choice. The study-or-die attitude of my school has infected my parents and siblings, who believe everything they hear and are intent on making me a miserable child so that I can be a rich adult– their words, not mine. My point is, if there is any shortcut, I will be the first to take it, intellectual capacity be damned. I don’t care very strongly about the academic community at my school. It’s making me miserable! That’s the whole point of these essays. I don’t have a choice, therefore, I do not care. So… what’s even the big deal about me using AI? I’ll get the grade I deserve, an F, if it’s detected, and if it’s not, then great. Whatever. I have better stuff to be doing than analyzing rhetorical strategies. I wanted to go to bed.
Well, the truth is I’m not wrong. But here’s the thing: Using AI to write your essays is playing into the game that education in America has become. What is that game? Well, simply put, it’s faking stuff. Your parents, if they’re anything like mine, want you to fake it. They don’t care if you manage to think critically. They wouldn’t want you to spend your summer with your creative outlets over an internship you will get nothing out of intellectually. Stretch the truth on your college applications because what you feel and what you enjoy doesn’t matter, it’s the grand things you did. No, you can’t try new things because the admissions officers care about the narrative you craft about who you are, and if it’s not a ‘coherent’ one, you are a failure of a person, of a daughter, a son, etc.
It’s easy to get lost in your own resentment of the system so much that you just turn in a fake paper. But, listen. Your English teacher hopefully wants your thoughts. That’s why they’ll give you an F if you turn in anything that’s not your thoughts. And when you turn in a fake essay, you’re actually doing exactly what the education system in America wants you to do. Be fake, don’t think, do it so that it looks good on the outside, it doesn’t matter whether or not you’re actually getting something out of it intellectually. If I asked my mother whether I should turn in my completely innocuous AI generated paper or start now writing for thirty minutes a day, working on the paper which is due in two weeks, but that this time would encroach upon the hours in which I would otherwise spend researching colleges or doing work for my internship, she would tell me to turn in the fake paper without a doubt. Why? Because she doesn’t care enough about English, or George Orwell, or whatever it is your teacher wanted to read your thoughts about. And if you’ve fallen into the habit of turning in AI generated papers, you probably don’t either.
I implore you to hate the education system in America so much that you force yourself to think for yourself. I also implore you to take those thirty minutes a day. College applications can wait.
Grammarly: The Power of Artificial Intelligence in Education
In the rapidly changing world of education, artificial intelligence has emerged as a powerful tool to enhance learning. One of the most widely used applications of AI in education is Grammarly. From students to professionals, educators to writers, it has transformed the way we communicate by improving the quality of writing across various platforms. Through the integration of artificial intelligence and generative AI, the tool offers significant benefits for students and anyone looking to improve their writing.
What is Grammarly?
Grammarly is an online writing assistant that helps users to improve their writing. It doesn’t just correct spelling and grammatical errors, it improves the clarity and overall readability of the text. Whether you are writing an essay, an email, or a professional document, Grammarly provides suggestions to make your writing more effective.
Artificial Intelligence Behind Grammarly
The power of Grammarly lies in its advanced use of artificial intelligence. Its AI algorithms analyze and understand language in real time, offering recommendations that are not just about correctness but also about enhancing the clarity and flow of your ideas. Unlike basic spell-checkers, the tool uses artificial intelligence to understand the context of a sentence, providing suggestions that are highly relevant and personalized. This is where generative AI plays a crucial role.
Benefits for Students Using Grammarly:
Students are constantly writing essays, reports, and research papers, and the quality of their writing often directly affects their grades. Here's how Grammarly can be a game-changer for them:
Enhanced Writing Quality: One of the key benefits for students is that the tool helps them write more clearly and effectively. By providing real-time feedback on grammar, punctuation, and word choice, it ensures that students can submit well-polished work.
Learning Tool: Grammarly also serves as a learning assistant. It explains why certain changes are recommended, allowing students to understand their mistakes and improve their writing skills over time. This educational aspect makes the tool particularly valuable for learning.
Saves Time: Students often face tight deadlines for long, time-consuming essays. With Grammarly, they can run their work through the tool and get instant suggestions for improvement, saving time while still ensuring that the quality of the submission is high.
Plagiarism Detection: Grammarly offers a plagiarism detection feature that compares the text against billions of web pages to ensure that the content is unique. This is a critical benefit for students who need to ensure that their work is original and properly cited.
Confidence Booster: Writing can be daunting for many students, particularly those for whom English is not their first language. By using Grammarly, students can write with more confidence, knowing that the tool will help them communicate their ideas clearly and effectively.
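Text-overlap checkers of the kind described above generally work by comparing overlapping word sequences ("shingles") between a document and a source corpus. Below is a minimal, illustrative Python sketch of that general idea, not Grammarly's actual algorithm, which is proprietary; the `ngrams` and `overlap_score` helpers are hypothetical names, and a real checker would match against billions of indexed pages rather than a single source string.

```python
def ngrams(text, n=3):
    # Lowercase word-level n-grams ("shingles") of the text, as a set.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(doc, source, n=3):
    # Jaccard similarity between the two documents' shingle sets:
    # shared shingles divided by total distinct shingles.
    a, b = ngrams(doc, n), ngrams(source, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

original = "the quick brown fox jumps over the lazy dog near the river"
copied   = "the quick brown fox jumps over the sleepy cat near the river"
print(round(overlap_score(copied, original), 2))  # → 0.43
```

A score near 1.0 means the two texts share most of their three-word sequences; a production system would combine many such signals (and much larger indexes) before flagging a passage as unoriginal.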
The Role of Generative AI in Grammarly:
Generative AI has taken Grammarly to new heights by enabling it to go beyond grammatical corrections. With generative AI, the platform can suggest alternative ways to phrase sentences, adjust the tone to match the context, and even offer creative input that aligns with the writer's aim. For example, when a user is writing an email, Grammarly may suggest a more professional tone or offer alternative word choices that better suit the message.
In education, generative AI enhances Grammarly's capacity to help students think critically about their writing. It encourages them to explore different ways to convey their ideas, which in turn helps them develop stronger writing skills. Generative AI also allows Grammarly to be more flexible, catering to a wide range of writing styles, from academic papers to creative writing assignments.
Conclusion:
In the realm of education, Grammarly stands out as a powerful tool that harnesses the potential of artificial intelligence and generative AI. Its ability to provide real-time feedback makes it an essential resource for students, helping them improve their writing skills while saving time. The benefits for students include enhanced writing quality, learning opportunities, time savings, and increased confidence.
Broken Victorian children, Way of All Flesh, Samuel Butler
A brief comment on the relationship between Psychology, Religion, and Terrorism. In the Victorian semi-autobiographical novel, The Way of All Flesh (full text), author Samuel Butler says,
"If their wills were well broken in childhood, to use an expression then much in vogue, they would acquire habits of obedience..." Once this psychological wound (the archeotrauma*) is created, its ongoing presence can easily be mistaken for the existence of a God.
When post-trauma psychological control consists of years of religious indoctrination then layers of repression accumulate behind the original wound.
If such a compounded trauma is deliberately corrupted at any point then it becomes a source of "anti-life" such as that Hamas used to commit atrocities in Israel on the 7th of October 2023.
The type of behaviour exhibited by the terrorists does not exist in the natural world (i.e. it is not transmitted from generation to generation by DNA).
The writer George Orwell praised The Way of All Flesh saying,
"A great book because it gives an honest picture of the relationship between father and son."
A. A. Milne, author of Winnie-the-Pooh, wrote about it in his essay A Household Book, published in a collection of his essays, Not That It Matters: "Once upon a time I discovered Samuel Butler; not the other two, but the one who wrote The Way of All Flesh, the second-best novel in the English language. I say the second-best, so that, if you remind me of Tom Jones, or The Mayor of Casterbridge, or any other that you fancy, I can say, of course, that one is the best."
In 1998, Random House's Modern Library ranked The Way of All Flesh twelfth on its list of the 100 best English-language novels of the 20th century.
A Crucifix (from the Latin cruci fixus, meaning "(one) fixed to a cross") is an image of Jesus Christ on the Cross, as distinct from a bare Cross. The representation of Jesus himself on the Cross is referred to in English as the Corpus (Latin for "body").
*The archeotrauma (alt. archaeotrauma) is the psychological wound human beings, horses, and other animals sustain when their spirit is broken. Very common in Dover, UK.
Also see Evolution and Psychology Research. An AI (artificial intelligence) image.
#archaeotrauma#psychology#religion#atrocities#indoctrination#dover#samuel butler#victorian#child abuse#israel#hamas#god#brainwashing#childhood#culture#archeotrauma#ai art#palestine#gaza#The Way of All Flesh
Exploring the Latest PTE Essay Writing Topics for Academic Success
The Pearson Test of English (PTE) Academic is a widely recognized English language proficiency test that assesses the language skills of non-native English speakers. One of the key components of the PTE Academic exam is the writing section, which includes tasks such as essay writing. Staying updated on the latest essay writing topics is crucial for test-takers to prepare effectively and achieve success in the exam. In this article, we'll explore some of the latest PTE essay writing topics for academic purposes, providing insights and tips for tackling these tasks.
The Impact of Technology on Education:
Technology has revolutionized the field of education, transforming the way students learn and educators teach. This essay topic explores the various ways in which technology has impacted education, including the integration of digital tools in the classroom, online learning platforms, and the accessibility of educational resources. Test-takers can discuss the advantages and disadvantages of technology in education, as well as its potential implications for the future of learning.
Climate Change and Its Effects on the Environment:
Climate change is a pressing global issue that poses significant threats to the environment and human societies. Test-takers may be asked to write an essay discussing the causes and effects of climate change, as well as potential solutions to mitigate its impact. This topic requires critical analysis and a comprehensive understanding of environmental science, policy, and sustainability initiatives.
The Role of Social Media in Modern Society:
Social media has become an integral part of contemporary life, shaping communication, culture, and social interactions. Test-takers may be tasked with writing an essay examining the role of social media in modern society, including its influence on relationships, politics, business, and mental health. This topic invites test-takers to explore the opportunities and challenges posed by social media platforms and to critically evaluate their impact on individuals and communities.
The Importance of Cross-Cultural Understanding in a Globalized World:
In an increasingly interconnected world, cross-cultural understanding and communication are essential skills for navigating diverse societies and contexts. Test-takers may be asked to write an essay discussing the importance of cross-cultural understanding in a globalized world, including its relevance in business, education, diplomacy, and social integration. This topic encourages test-takers to reflect on the value of cultural diversity and to explore strategies for fostering intercultural competence.
The Ethics of Artificial Intelligence:
As artificial intelligence (AI) technologies continue to advance, ethical considerations surrounding their development and deployment have come to the forefront. Test-takers may be prompted to write an essay exploring the ethical implications of AI, including issues related to privacy, automation, job displacement, and bias. This topic challenges test-takers to critically evaluate the ethical dimensions of AI technologies and to propose frameworks for responsible innovation and governance.
Staying informed about the latest PTE essay writing topics is essential for test-takers preparing for the exam. By familiarizing themselves with diverse subject matter and practicing essay writing skills, test-takers can enhance their ability to effectively analyze complex issues, articulate coherent arguments, and demonstrate proficiency in English language communication. With diligent preparation and a solid understanding of key topics, test-takers can approach the PTE Academic Writing section with confidence and achieve their desired scores.
#PTE#PTEAcademic#EnglishLanguage#EssayWriting#TestPreparation#Education#Technology#ClimateChange#SocialMedia#CrossCulturalUnderstanding#ArtificialIntelligence#Ethics#Globalization#ExamPreparation
BRAVE NEW WORLD NOVEL
STRAY KIDS UNIVERSE THEORY
Brave New World is a dystopian novel by English author Aldous Huxley, written in 1931 and published in 1932. Largely set in a futuristic World State, whose citizens are environmentally engineered into an intelligence-based social hierarchy, the novel anticipates huge scientific advancements in reproductive technology, sleep-learning, psychological manipulation and classical conditioning that are combined to make a dystopian society which is challenged by the story's protagonist. Huxley followed this book with a reassessment in essay form, Brave New World Revisited (1958), and with his final novel, Island (1962), the utopian counterpart. This novel is often compared to George Orwell's Nineteen Eighty-Four (1949).
REPRODUCTIVE TECHNOLOGY
Reproductive technology encompasses all current and anticipated uses of technology in human and animal reproduction, including assisted reproductive technology, contraception and others. It is also termed assisted reproductive technology, where it entails an array of appliances and procedures that enable safer, improved and healthier reproduction. While this is not true of all people, for many married couples the ability to have children is vital, and through this technology infertile couples have been provided with options that allow them to conceive children.
SLEEP-LEARNING
Sleep-learning (also known as hypnopædia or hypnopedia) is an attempt to convey information to a sleeping person, typically by playing a sound recording to them while they sleep. Although sleep is considered an important period for memory consolidation, scientific research has concluded that sleep-learning is not possible. It appears frequently in fiction.
PSYCHOLOGICAL MANIPULATION
In psychology, manipulation is defined as subterfuge designed to influence or control another, usually in a manner which facilitates one's personal aims. The methods used distort or orient the interlocutor's perception of reality, in particular through seduction, suggestion, persuasion and non-voluntary or consensual submission. Definitions for the term vary in which behavior is specifically included, influenced by both culture and whether referring to the general population or used in clinical contexts. Manipulation is generally considered a dishonest form of social influence as it is used at the expense of others.
CLASSICAL CONDITIONING
Classical conditioning (also respondent conditioning and Pavlovian conditioning) is a behavioral procedure in which a biologically potent physiological stimulus (e.g. food) is paired with a neutral stimulus (e.g. the sound of a musical triangle). The term classical conditioning refers to the process of an automatic, conditioned response that is paired with a specific stimulus.
SUMMARY
The novel opens in the World State city of London in AF (After Ford) 632 (AD 2540 in the Gregorian calendar), where citizens are engineered through artificial wombs and childhood indoctrination programmes into predetermined classes (or castes) based on intelligence and labour. Lenina Crowne, a hatchery worker, is popular and sexually desirable, but Bernard Marx, a psychologist, is not. He is shorter in stature than the average member of his high caste, which gives him an inferiority complex. His work with sleep-learning allows him to understand, and disapprove of, his society's methods of keeping its citizens peaceful, which includes their constant consumption of a soothing, happiness-producing drug called “SOMA”. Courting disaster, Bernard is vocal and arrogant about his criticisms, and his boss contemplates exiling him to Iceland because of his nonconformity. His only friend is Helmholtz Watson, a gifted writer who finds it difficult to use his talents creatively in their pain-free society.
Bernard takes a holiday with Lenina outside the World State to a Savage Reservation in New Mexico, in which the two observe natural-born people, disease, the ageing process, other languages, and religious lifestyles for the first time. The culture of the village folk resembles the contemporary Native American groups of the region, descendants of the Anasazi, including the Puebloan peoples of Hopi and Zuni. Bernard and Lenina witness a violent public ritual and then encounter Linda, a woman originally from the World State who is living on the reservation with her son John, now a young man. She, too, visited the reservation on a holiday many years ago, but became separated from her group and was left behind. She had meanwhile become pregnant by a fellow holidaymaker (who is revealed to be Bernard's boss, the Director of Hatcheries and Conditioning). She did not try to return to the World State, because of her shame at her pregnancy. Despite spending his whole life in the reservation, John has never been accepted by the villagers, and his and Linda's lives have been hard and unpleasant. Linda has taught John to read, although from the only book in her possession—a scientific manual—and another book John found: the complete works of Shakespeare. Ostracised by the villagers, John is able to articulate his feelings only in terms of Shakespearean drama, quoting often from The Tempest, King Lear, Othello, Romeo and Juliet and Hamlet. Linda now wants to return to London, and John too, wants to return to see this “brave new world”. Bernard sees an opportunity to thwart plans to exile him, and gets permission to take Linda and John back. On their return to London, John meets the Director and calls him his “father”, a vulgarity which causes a roar of laughter. The humiliated Director resigns in shame before he can follow through with exiling Bernard.
Bernard, as “custodian” of the “savage” John who is now treated as a celebrity, is fawned on by the highest members of society and revels in attention he once scorned. Bernard's popularity is fleeting, though, and he becomes envious that John only really bonds with the literary-minded Helmholtz. Considered hideous and friendless, Linda spends all her time using soma, while John refuses to attend social events organised by Bernard, appalled by what he perceives to be an empty society. Lenina and John are physically attracted to each other, but John's view of courtship and romance, based on Shakespeare's writings, is utterly incompatible with Lenina's freewheeling attitude to sex. She tries to seduce him, but he attacks her, before suddenly being informed that his mother is on her deathbed. He rushes to Linda's bedside, causing a scandal, as this is not the “correct” attitude to death. Some children who enter the ward for “death-conditioning” come across as disrespectful to John, and he attacks one physically. He then tries to break up a distribution of soma to a lower-caste group, telling them that he is freeing them. Helmholtz and Bernard rush in to stop the ensuing riot, which the police quell by spraying soma vapor into the crowd.
Bernard, Helmholtz, and John are all brought before Mustapha Mond, the “Resident World Controller for Western Europe”, who tells Bernard and Helmholtz that they are to be exiled to islands for antisocial activity. Bernard pleads for a second chance, but Helmholtz welcomes the opportunity to be a true individual, and chooses the Falkland Islands as his destination, believing that their bad weather will inspire his writing. Mond tells Helmholtz that exile is actually a reward. The islands are full of the most interesting people in the world, individuals who did not fit into the social model of the World State. Mond outlines for John the events that led to the present society and his arguments for a caste system and social control. John rejects Mond's arguments, and Mond sums up John's views by claiming that John demands “the right to be unhappy”. John asks if he may go to the islands as well, but Mond refuses, saying he wishes to see what happens to John next.
Jaded with his new life, John moves to an abandoned hilltop lighthouse, near the village of Puttenham, where he intends to adopt a solitary ascetic lifestyle in order to purify himself of civilization, practising self-flagellation. This draws reporters and eventually hundreds of amazed sightseers, hoping to witness his bizarre behaviour.
For a while it seems that John might be left alone, after the public's attention is drawn to other diversions, but a documentary maker has secretly filmed John's self-flagellation from a distance, and when released the documentary causes an international sensation. Helicopters arrive with more journalists. Crowds of people descend on John's retreat, demanding that he perform his whipping ritual for them. From one helicopter a young woman emerges who is implied to be Lenina. John, at the sight of a woman he both adores and loathes, whips at her in a fury and then turns the whip on himself, exciting the crowd, whose wild behaviour transforms into a soma-fuelled orgy. The next morning John awakes on the ground and is consumed by remorse over his participation in the night's events.
That evening, a swarm of helicopters appears on the horizon, the story of last night's orgy having been in all the papers. The first onlookers and reporters to arrive find that John is dead, having hanged himself.
From: Wikipedia
WriteBotIQ.com Review: An AI-powered Revolution in Content Creation
In the rapidly evolving world of content marketing, staying ahead of the curve is crucial. This means not just following trends but also leveraging emerging technologies. One such groundbreaking innovation is AI-powered writing assistants, and leading the pack in this domain is WriteBotIQ.com. This detailed WriteBotIQ.com review will offer a comprehensive exploration of this dynamic tool and highlight how it is redefining content creation.
Setting the Scene: The Era of AI in Content Creation
The digital landscape is overflowing with content. Amid this overload, crafting unique, high-quality content consistently is a tall order. Enter WriteBotIQ.com, an AI writing assistant designed to streamline your content creation process. From long-form articles to engaging social media posts and persuasive marketing emails, WriteBotIQ.com simplifies it all.
Long-form Content Writing: A League of Its Own
When it comes to long-form content writing, WriteBotIQ.com is second to none. It brings together the power of artificial intelligence and deep learning to help users craft comprehensive, engaging, and high-quality content. Regardless of your niche or audience, WriteBotIQ.com generates content that resonates with your target demographic and aligns with your unique voice.
This is where WriteBotIQ.com excels - understanding the nuances of your content requirements and producing top-notch content accordingly. It takes the stress and time out of writing, allowing you to focus on other vital aspects of your business.
A Treasure Trove of Templates
A significant feature of WriteBotIQ.com that deserves mention in this review is its extensive library of templates. With over 60 templates to choose from, this platform has a solution for every content need.
Whether you are crafting a compelling blog post, engaging your audience on social media, or looking to send out a powerful marketing email, WriteBotIQ.com's templates are your best bet. These templates are designed to simplify your content creation process, letting the AI handle the heavy lifting.
Pricing That Puts You First
One aspect that sets WriteBotIQ.com apart from many of its counterparts is its affordable pricing. At less than $20 a month, you gain access to an unlimited number of prompts, letting you generate as much content as you need without worrying about additional costs.
When you compare this cost with hiring professional content writers or investing personal time in writing, the value proposition of WriteBotIQ.com becomes apparent. It is a cost-effective solution for businesses and individuals who understand the importance of high-quality content but also value efficiency and affordability.
Embracing Multilingualism
In today's globalized world, businesses must cater to a diverse audience. WriteBotIQ.com recognizes this and supports content creation in 20 different languages. This feature allows you to reach a broader audience and connect with them in a language they understand best.
Whether you need content in English, Spanish, French, German, or any other supported language, WriteBotIQ.com has you covered. This multilingual support is a game-changer for businesses looking to expand their reach and engage with international audiences.
Your Personal Advisory Chatbots
Another innovative offering from WriteBotIQ.com is its personalized advisory chatbots. These chatbots serve as virtual advisors across a range of topics such as SEO, business coaching, accounting, and more.
The SEO Advisor, for example, provides actionable SEO tips to help boost your website's ranking on search engines. Similarly, the Business Coach offers valuable advice to navigate the complex business landscape. The accountant chatbot can guide you through financial matters, helping you make informed decisions.
This additional layer of value beyond content creation makes WriteBotIQ.com a comprehensive tool that caters to various needs, making it a worthy investment for businesses and individuals alike.
Discover Your Path to Success: WriteBotIQ.com's Rewarding Affiliate Program!
As we explore the diverse offerings of WriteBotIQ.com, a true gem emerges - their highly rewarding affiliate program. This exceptional opportunity beckons you to collaborate with WriteBotIQ.com, offering a chance to earn substantial revenue by promoting their esteemed and high-value services. If you're an individual seeking to monetize your influence and expand your horizons, WriteBotIQ.com's affiliate program presents an enticing avenue and yet another compelling reason to choose them.
Earn a remarkable 25% commission on all sales generated through your affiliate referrals. For each new customer you introduce to WriteBotIQ.com, you secure a quarter of the sales amount - a powerful incentive for long-term growth. Given WriteBotIQ.com's competitive pricing and unparalleled value, this opportunity holds the potential for significant and sustainable earnings.
Partnering with WriteBotIQ.com also means aligning yourself with a trusted and esteemed brand, giving you the confidence to endorse a product that excels in its services. The prospect of recurring income further enhances the allure of their affiliate program, positioning it among the most sought-after opportunities in the AI content creation landscape.
In conclusion, WriteBotIQ.com's affiliate program extends beyond mere commissions; it opens doors to collaborate with a leading brand in the thriving AI writing industry. This win-win scenario merits serious consideration, whether you're already a WriteBotIQ.com customer or a savvy affiliate marketer seeking the next big venture.
Unleash your potential for success with WriteBotIQ.com's Affiliate Program today! Embrace this opportunity to elevate your earnings and establish your presence as a driving force in the AI content creation realm.
Wrapping Up Our WriteBotIQ.com Review
In conclusion, WriteBotIQ.com is a powerful AI writing assistant that has revolutionized the way businesses and individuals approach content creation. Its standout features - high-quality long-form content writing, an extensive template library, affordable pricing, multilingual support, and personal advisory chatbots - make it a comprehensive solution for all content needs.
This WriteBotIQ.com review paints a clear picture: whether you are a professional writer looking to enhance productivity, a business owner aiming to boost your content marketing strategy, or a blogger aspiring to engage your audience better, WriteBotIQ.com is the tool for you.
Discover how WriteBotIQ.com can transform your content creation process and elevate your digital presence. The future of content creation is here, and it's waiting for you at WriteBotIQ.com.
#ai#ai writing#ai writer#ai content#ai content creation#artificial intelligence#autoblog#ai blog writer#ai blog post writer
This absolutely has to be a social media thing (in addition to a "teens being teens" thing, amongst other factors). While 600 words fills one and a half pages of text at most (at least according to Word Counter, using 12 pt Times New Roman as an example), the way screens are structured means 600 words probably looks like a tl;dr text wall, and that's even if you're on a site that lets you post 600 words at a time.
As an example, Bluesky allows 300 characters per post. Assuming six characters per word including spaces and punctuation (a figure taken from various search results, and one that makes for relatively easy math), you would need a thread of 12 posts to fit the 3,600 characters that make up those 600 words. Sure, there are ways to abbreviate or emojify or whatever, so you might be able to get your 600-word point across, but you're still going to have a bunch of actual adults wanting a thread reader app to unroll everything onto a single screen. The writer may also be inclined to add a summary at the end per social custom, making the thread even longer. And remember, this is just Bluesky; could you imagine (or remember) what 600 words looked like on 140-character Twitter?
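For anyone who wants to check the arithmetic above, here is a quick back-of-the-envelope sketch. The six-characters-per-word figure is the same rough assumption used in this post, not an exact measurement:

```python
import math

def posts_needed(words, chars_per_word=6, post_limit=300):
    """Estimate how many posts a thread needs to fit a given word count."""
    total_chars = words * chars_per_word
    return math.ceil(total_chars / post_limit)

# 600 words on Bluesky (300-character posts)
print(posts_needed(600))                  # 12 posts

# The same 600 words on old 140-character Twitter
print(posts_needed(600, post_limit=140))  # 26 posts
```

So the same essay that barely fills a page and a half on paper turns into a 12-post thread on Bluesky, and would have been roughly twice that on old Twitter.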
Of course, even without ChatGPT, teens would find ways to avoid writing 600-word essays for homework, because teens are still children and don't understand how the skill of writing can be useful long-term. All they know is that they don't want to do things they don't want to do, and so they will seek out ways to avoid doing them. It's the academic equivalent of hiding your vegetables in a napkin or feeding them to your dog instead of eating them yourself. I know I had my fair share of essay topics I didn't care about (my high school English courses liked to assign five-paragraph essays written in 30 minutes in class; those were probably closer to 200 words than 600), but I had to do them anyway. But this is a digression. The point was to demonstrate how long 600 words looks on a screen.
But don't get me wrong. Generative artificial intelligence and predictive text engines like ChatGPT are still a problem that needs to be dealt with, and it's up to all of us to explain why and to discourage people from using them. What's frustrating is that algorithmic tools actually can be useful for improving the parts of life we don't like. As much as you probably ignore spellcheck and grammar check (does anyone ever use formatting or markup text?), those are the kinds of tools that self-improving algorithms should be used to make or improve. AI shouldn't be making new things for us; it should be improving the things we are already making. Don't know what you want to make or how to make it? That's what human learning is for! I know I'm digressing again, but I feel this part was necessary because AI should be decried at every opportunity. Still, I can't stray too far from my original point: 600 words will look way too long to a teen who isn't interested in writing and who usually uses their device of choice to look at social media in their free time.
Did you want an example of what 600 words looks like on a screen? According to LibreOffice Writer, this post is exactly 600 words long. How did you feel about seeing this post?
im still losing it over the "how did high schoolers write 600 word essays before chatgpt" post. 600 words. that is nothing. that is so few words what do you mean you can't write 600 words. 600 words. this post right here is 45 words.
AI Tools in Marathi for Academic Excellence
Introduction
In today’s fast-moving digital world, students need access to modern technology—but in a language they understand. AI in Marathi makes this possible. It introduces school and college learners to the power of artificial intelligence in their mother tongue, making complex topics easier to grasp and use.
Academic Support Made Simple
AI tools in Marathi are a game changer for students. From note summarization and voice typing to grammar correction and vocabulary building, these tools assist with day-to-day learning tasks. Students no longer have to depend on English platforms to research or complete assignments. With AI support in Marathi, they learn faster and more efficiently.
AI chatbots in Marathi can help students resolve doubts instantly. These tools can even generate ideas for essays, translate texts, or explain difficult concepts. They save time and boost confidence, especially for students from semi-urban or rural backgrounds.
Tools for Independent Learning
These tools encourage self-learning by giving students a personalized, responsive platform to explore. AI in Marathi is now being used in smart classrooms, educational apps, and online assessments. Teachers also use these tools to prepare lessons or provide additional learning materials.
Parents are also seeing the benefit, as their children use AI tools in Marathi to improve reading, writing, and speaking skills in both languages.
Conclusion
Empowering students with AI tools in Marathi means giving them a fair chance at digital success. It bridges the learning gap and ensures students don’t get left behind due to language constraints. As more educational platforms adopt AI in Marathi, student engagement, academic performance, and future readiness are sure to rise.
Best AI Tools in Marathi You Can Start Using Today
The use of artificial intelligence is no longer limited to English-speaking audiences. With the rapid development of AI in Marathi, smart technology is now accessible to Marathi-speaking communities across Maharashtra and beyond. This evolution is helping users from all walks of life enhance their daily routines and digital literacy.
From content creation to idea generation and translation, AI tools in Marathi are transforming the way we write, communicate, and learn. These tools allow users to compose essays, design posters, summarize documents, and even interact with chatbots—all in Marathi.
Popular platforms like ChatGPT, Grammarly, and Canva now support AI in Marathi, offering a more inclusive user experience. These tools are not only bridging the language gap but also increasing confidence among students, educators, and professionals.
The availability of AI tools in Marathi ensures that language is no longer a limitation. It allows users to fully participate in the digital economy, drive creativity, and improve productivity—using the language they know best.
AI in Marathi for Beginners
The rise of artificial intelligence is reshaping industries—but to make its benefits inclusive, local-language access is essential. Explore AI in Marathi on AmeyPangarkar.com to discover how regional users can learn, apply, and grow with AI tools tailored in the Marathi language. This platform is designed to simplify AI learning for native speakers and bridge the gap between innovation and accessibility.
Today, professionals from various fields are using AI tools in Marathi for tasks such as data summarization, automated design, content generation, and more. These tools allow Marathi-speaking students, freelancers, and entrepreneurs to adapt to the future of work—without needing fluency in English. The content is user-friendly, easy to grasp, and created with a local-first approach.
From simplifying workflows to enhancing productivity, AI in Marathi offers real value. Whether you're a teacher creating assignments, a small business owner automating social media posts, or a student writing essays, regional AI tools help you do more, faster. By making AI accessible in your native tongue, platforms like Amey Pangarkar's ensure no one is left behind in the digital revolution.