#Artificial Intelligence training for student
aitrainingcourses · 11 months ago
Text
Why AI Courses are in High Demand among Professionals 
Tumblr media
Artificial intelligence (AI) is transforming the world, and the demand for professionals who can harness its strengths is growing every day. As AI systems become more capable, more people are using them to work more efficiently.
Across every sector, professionals need AI skills to stay ahead in a competitive job market, which is why AI courses are now in such high demand. Students and working professionals alike are pursuing technical and non-technical AI certification courses to improve their job prospects.
So let us see why an artificial intelligence course is becoming essential for career growth.
What is Artificial Intelligence?
AI is a branch of computer science focused on building intelligent machines that can perform tasks normally requiring human intelligence. These systems work with data, recognize patterns, and make decisions in ways that resemble human reasoning.
Reasons AI Courses are in High Demand
●       It Offers Job Security
In this digital era, the job market is evolving and employers are looking for skilled professionals. AI skills have become especially valuable, so professionals who have them can secure stable jobs far more easily.
Whether it is analyzing data, building AI algorithms, or choosing the right AI solutions, these skills are in high demand. With the right AI courses and skills, professionals can enjoy both job security and career growth.
●       It is Transforming Every Industry
AI is no longer just for tech companies; it touches every sector. From finance, where it can predict stock trends, to healthcare, where it can help diagnose diseases, AI is transforming everything.
Professionals with strong AI knowledge are changing how companies operate and helping them stay ahead of the competition.
●       It Offers Data-Driven Decisions
Data is crucial to the success of any business. Many AI courses teach professionals how to analyze large volumes of data and extract meaningful insights from it.
Skilled professionals who can use AI are able to tackle complex problems and improve business outcomes.
●       It Can Drive Innovation
Businesses are constantly looking for ways to improve and innovate, and AI offers powerful tools to help them do so.
Professionals with strong AI skills can help their companies streamline operations, develop new products, and improve the customer experience. This ability to drive innovation can make a real difference to a company's success in the market.
●       It Can Help in Personal Growth
Learning AI skills is both intellectually stimulating and exciting. It gives professionals the chance to solve complex problems and work with cutting-edge technology, and that personal growth pays off over time.
●       It Improves User Experience
AI-powered products such as virtual assistants and chatbots can greatly improve the user experience, which is another reason companies are looking for skilled AI professionals.
In this new era, companies increasingly look for candidates who have completed at least one AI Certification Course, and students are searching for the best AI courses for beginners to help them succeed.
1 note · View note
lsdunesarchive · 2 years ago
Text
lsdunes: Lost Souls, the music video for Old Wounds will be yours this Friday the 29th at 9am PT / 12pm ET 🦂
🎥: @.iammethisisi
(L.S. Dunes Instagram | September 25, 2023)
11 notes · View notes
nearlearn6 · 2 months ago
Text
Tumblr media
Explore the basics of blockchain technology in a simple and visual way. From how it works to real-world uses, this infographic covers key concepts—and shows how Nearlearn can help you build a career in blockchain.
Check out the Nearlearn website:
https://nearlearn.com/courses/blockchain/blockchain-certification-training
0 notes
anonymousdormhacks · 3 months ago
Text
Tumblr media
Google says alexander "cheated on his wife" hamilton rights ig
1 note · View note
justposting1 · 8 months ago
Text
Top AI Tools to Start Your Training in 2024
Empower Your AI Journey with Beginner-Friendly Platforms Like TensorFlow, PyTorch, and Google Colab The rapid advancements in artificial intelligence (AI) have transformed the way we work, live, and learn. For aspiring AI enthusiasts, diving into this exciting field requires a combination of theoretical understanding and hands-on experience. Fortunately, the right tools can make the learning…
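As a taste of the kind of first exercise these beginner platforms walk you through, here is a minimal sketch using PyTorch, one of the tools named above; it assumes PyTorch is installed, and the specific numbers are arbitrary.

```python
import torch

# A tiny first exercise: define a tensor, compute a function of it,
# and let autograd find the gradient for you.
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x        # y = x^2 + 2x
y.backward()              # autograd fills in dy/dx
print(x.grad)             # tensor(8.) because dy/dx = 2x + 2 = 8 at x = 3
```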
0 notes
yesthatsatumbler · 1 year ago
Text
I tend to think of AI responses as being a lot like those D+ students who get asked something at an exam and aren't actually very sure of the answers, and have to quickly make up something that vaguely sounds like it makes sense and hope it's close enough to count.
And, like, sometimes their association web is good enough that they stumble into the right answer (and sometimes the right answer was something obvious all along so they just happen to guess correctly). But a lot of the time it's just a pile of nonsense that they think sounds vaguely right.
...silly thought: I guess the way AI training works is pretty much sending them through gazillions of simulated exams and grading them on whether their replies are close enough to correct answers to count, and then hoping that by trial and error they build up enough of the right association web to get correct(ish) answers more often than not. But they're still fundamentally making stuff up every single time.
(And it only works at all because they're doing absolutely insane amounts of said trial-and-error.)
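To make that concrete, here's a toy sketch of that grade-and-adjust loop: the "model" is just a table of association weights, and the exam data, scoring rule, and update sizes are all invented for illustration. Real LLM training uses gradient descent over enormous datasets, but the trial-and-error shape is similar.

```python
import random

# Toy "exam" loop: the model is a table of word-association weights (the
# "association web"). Each round it guesses an answer, gets graded against
# the correct one, and nudges its weights. Everything here is made up for
# illustration; real training uses gradients over billions of tokens.
corpus = [("the", "cat"), ("the", "dog"), ("a", "cat")]   # (prompt, correct answer)
vocab = ["cat", "dog", "fish"]
weights = {prompt: {v: 1.0 for v in vocab} for prompt, _ in corpus}

def guess(prompt):
    # Pick an answer in proportion to current association strength.
    options = weights[prompt]
    return random.choices(list(options), list(options.values()))[0]

for _ in range(10_000):                    # "gazillions" of simulated exams, scaled down
    prompt, answer = random.choice(corpus)
    reply = guess(prompt)
    if reply == answer:                    # graded: close enough counts
        weights[prompt][reply] += 0.1      # reinforce the association
    else:
        weights[prompt][reply] = max(0.1, weights[prompt][reply] - 0.05)

print(weights["the"])   # "cat" and "dog" end up strongest; it still never "knows" anything
```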
AI doesn't know things.
AI is playing improv.
This is a key difference and should shape how you think about AI and what it can do.
567 notes · View notes
strategiadvizo · 1 year ago
Text
Transforming Education: Unleash the Potential of Your Students with Strategia Advizo's Vocational Courses
Introduction: In today’s rapidly evolving world, the traditional education system faces the challenge of keeping up with the pace of technological advancements and changing job landscapes. At Strategia Advizo, we believe in empowering the next generation with the skills they need to navigate and succeed in the 21st century. Our suite of vocational courses, designed specifically for CBSE schools…
Tumblr media
View On WordPress
0 notes
mariacallous · 15 days ago
Text
On a blustery spring Thursday, just after midterms, I went out for noodles with Alex and Eugene, two undergraduates at New York University, to talk about how they use artificial intelligence in their schoolwork. When I first met Alex, last year, he was interested in a career in the arts, and he devoted a lot of his free time to photo shoots with his friends. But he had recently decided on a more practical path: he wanted to become a C.P.A. His Thursdays were busy, and he had forty-five minutes until a study session for an accounting class. He stowed his skateboard under a bench in the restaurant and shook his laptop out of his bag, connecting to the internet before we sat down.
Alex has wavy hair and speaks with the chill, singsong cadence of someone who has spent a lot of time in the Bay Area. He and Eugene scanned the menu, and Alex said that they should get clear broth, rather than spicy, “so we can both lock in our skin care.” Weeks earlier, when I’d messaged Alex, he had said that everyone he knew used ChatGPT in some fashion, but that he used it only for organizing his notes. In person, he admitted that this wasn’t remotely accurate. “Any type of writing in life, I use A.I.,” he said. He relied on Claude for research, DeepSeek for reasoning and explanation, and Gemini for image generation. ChatGPT served more general needs. “I need A.I. to text girls,” he joked, imagining an A.I.-enhanced version of Hinge. I asked if he had used A.I. when setting up our meeting. He laughed, and then replied, “Honestly, yeah. I’m not tryin’ to type all that. Could you tell?”
OpenAI released ChatGPT on November 30, 2022. Six days later, Sam Altman, the C.E.O., announced that it had reached a million users. Large language models like ChatGPT don’t “think” in the human sense—when you ask ChatGPT a question, it draws from the data sets it has been trained on and builds an answer based on predictable word patterns. Companies had experimented with A.I.-driven chatbots for years, but most sputtered upon release; Microsoft’s 2016 experiment with a bot named Tay was shut down after sixteen hours because it began spouting racist rhetoric and denying the Holocaust. But ChatGPT seemed different. It could hold a conversation and break complex ideas down into easy-to-follow steps. Within a month, Google’s management, fearful that A.I. would have an impact on its search-engine business, declared a “code red.”
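As a rough illustration of what "predictable word patterns" means in practice, here is a toy sketch of next-word generation. The probability table is invented for the example; an actual model learns its patterns from vast training data and conditions on far more context than the previous word.

```python
import random

# Generation is just repeatedly asking "given what's written so far, which
# word is likely next?" These probabilities are invented for the example.
next_word_probs = {
    "the":     {"student": 0.5, "exam": 0.3, "answer": 0.2},
    "student": {"writes": 0.6, "submits": 0.4},
    "writes":  {"the": 0.7, "an": 0.3},
    "submits": {"the": 1.0},
    "exam":    {"ends": 1.0},
    "an":      {"answer": 1.0},
    "answer":  {"ends": 1.0},
    "ends":    {},
}

def generate(start, max_words=8):
    words = [start]
    while len(words) < max_words and next_word_probs.get(words[-1]):
        options = next_word_probs[words[-1]]
        words.append(random.choices(list(options), list(options.values()))[0])
    return " ".join(words)

print(generate("the"))   # e.g. "the student writes an answer ends"
```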
Among educators, an even greater panic arose. It was too deep into the school term to implement a coherent policy for what seemed like a homework killer: in seconds, ChatGPT could collect and summarize research and draft a full essay. Many large campuses tried to regulate ChatGPT and its eventual competitors, mostly in vain. I asked Alex to show me an example of an A.I.-produced paper. Eugene wanted to see it, too. He used a different A.I. app to help with computations for his business classes, but he had never gotten the hang of using it for writing. “I got you,” Alex told him. (All the students I spoke with are identified by pseudonyms.)
He opened Claude on his laptop. I noticed a chat that mentioned abolition. “We had to read Robert Wedderburn for a class,” he explained, referring to the nineteenth-century Jamaican abolitionist. “But, obviously, I wasn’t tryin’ to read that.” He had prompted Claude for a summary, but it was too long for him to read in the ten minutes he had before class started. He told me, “I said, ‘Turn it into concise bullet points.’ ” He then transcribed Claude’s points in his notebook, since his professor ran a screen-free classroom.
Alex searched until he found a paper for an art-history class, about a museum exhibition. He had gone to the show, taken photographs of the images and the accompanying wall text, and then uploaded them to Claude, asking it to generate a paper according to the professor’s instructions. “I’m trying to do the least work possible, because this is a class I’m not hella fucking with,” he said. After skimming the essay, he felt that the A.I. hadn’t sufficiently addressed the professor’s questions, so he refined the prompt and told it to try again. In the end, Alex’s submission received the equivalent of an A-minus. He said that he had a basic grasp of the paper’s argument, but that if the professor had asked him for specifics he’d have been “so fucked.” I read the paper over Alex’s shoulder; it was a solid imitation of how an undergraduate might describe a set of images. If this had been 2007, I wouldn’t have made much of its generic tone, or of the precise, box-ticking quality of its critical observations.
Eugene, serious and somewhat solemn, had been listening with bemusement. “I would not cut and paste like he did, because I’m a lot more paranoid,” he said. He’s a couple of years younger than Alex and was in high school when ChatGPT was released. At the time, he experimented with A.I. for essays but noticed that it made easily noticed errors. “This passed the A.I. detector?” he asked Alex.
When ChatGPT launched, instructors adopted various measures to insure that students’ work was their own. These included requiring them to share time-stamped version histories of their Google documents, and designing written assignments that had to be completed in person, over multiple sessions. But most detective work occurs after submission. Services like GPTZero, Copyleaks, and Originality.ai analyze the structure and syntax of a piece of writing and assess the likelihood that it was produced by a machine. Alex said that his art-history professor was “hella old,” and therefore probably didn’t know about such programs. We fed the paper into a few different A.I.-detection websites. One said there was a twenty-eight-per-cent chance that the paper was A.I.-generated; another put the odds at sixty-one per cent. “That’s better than I expected,” Eugene said.
I asked if he thought what his friend had done was cheating, and Alex interrupted: “Of course. Are you fucking kidding me?”
As we looked at Alex’s laptop, I noticed that he had recently asked ChatGPT whether it was O.K. to go running in Nike Dunks. He had concluded that ChatGPT made for the best confidant. He consulted it as one might a therapist, asking for tips on dating and on how to stay motivated during dark times. His ChatGPT sidebar was an index of the highs and lows of being a young person. He admitted to me and Eugene that he’d used ChatGPT to draft his application to N.Y.U.—our lunch might never have happened had it not been for A.I. “I guess it’s really dishonest, but, fuck it, I’m here,” he said.
“It’s cheating, but I don’t think it’s, like, cheating,” Eugene said. He saw Alex’s art-history essay as a victimless crime. He was just fulfilling requirements, not training to become a literary scholar.
Alex had to rush off to his study session. I told Eugene that our conversation had made me wonder about my function as a professor. He asked if I taught English, and I nodded.
“Mm, O.K.,” he said, and laughed. “So you’re, like, majorly affected.”
I teach at a small liberal-arts college, and I often joke that a student is more likely to hand in a big paper a year late (as recently happened) than to take a dishonorable shortcut. My classes are small and intimate, driven by processes and pedagogical modes, like letting awkward silences linger, that are difficult to scale. As a result, I have always had a vague sense that my students are learning something, even when it is hard to quantify. In the past, if I was worried that a paper had been plagiarized, I would enter a few phrases from it into a search engine and call it due diligence. But I recently began noticing that some students’ writing seemed out of synch with how they expressed themselves in the classroom. One essay felt stitched together from two minds—half of it was polished and rote, the other intimate and unfiltered. Having never articulated a policy for A.I., I took the easy way out. The student had had enough shame to write half of the essay, and I focussed my feedback on improving that part.
It’s easy to get hung up on stories of academic dishonesty. Late last year, in a survey of college and university leaders, fifty-nine per cent reported an increase in cheating, a figure that feels conservative when you talk to students. A.I. has returned us to the question of what the point of higher education is. Until we’re eighteen, we go to school because we have to, studying the Second World War and reducing fractions while undergoing a process of socialization. We’re essentially learning how to follow rules. College, however, is a choice, and it has always involved the tacit agreement that students will fulfill a set of tasks, sometimes pertaining to subjects they find pointless or impractical, and then receive some kind of credential. But even for the most mercenary of students, the pursuit of a grade or a diploma has come with an ancillary benefit. You’re being taught how to do something difficult, and maybe, along the way, you come to appreciate the process of learning. But the arrival of A.I. means that you can now bypass the process, and the difficulty, altogether.
There are no reliable figures for how many American students use A.I., just stories about how everyone is doing it. A 2024 Pew Research Center survey of students between the ages of thirteen and seventeen suggests that a quarter of teens currently use ChatGPT for schoolwork, double the figure from 2023. OpenAI recently released a report claiming that one in three college students uses its products. There’s good reason to believe that these are low estimates. If you grew up Googling everything or using Grammarly to give your prose a professional gloss, it isn’t far-fetched to regard A.I. as just another productivity tool. “I see it as no different from Google,” Eugene said. “I use it for the same kind of purpose.”
Being a student is about testing boundaries and staying one step ahead of the rules. While administrators and educators have been debating new definitions for cheating and discussing the mechanics of surveillance, students have been embracing the possibilities of A.I. A few months after the release of ChatGPT, a Harvard undergraduate got approval to conduct an experiment in which it wrote papers that had been assigned in seven courses. The A.I. skated by with a 3.57 G.P.A., a little below the school’s average. Upstart companies introduced products that specialized in “humanizing” A.I.-generated writing, and TikTok influencers began coaching their audiences on how to avoid detection.
Unable to keep pace, academic administrations largely stopped trying to control students’ use of artificial intelligence and adopted an attitude of hopeful resignation, encouraging teachers to explore the practical, pedagogical applications of A.I. In certain fields, this wasn’t a huge stretch. Studies show that A.I. is particularly effective in helping non-native speakers acclimate to college-level writing in English. In some STEM classes, using generative A.I. as a tool is acceptable. Alex and Eugene told me that their accounting professor encouraged them to take advantage of free offers on new A.I. products available only to undergraduates, as companies competed for student loyalty throughout the spring. In May, OpenAI announced ChatGPT Edu, a product specifically marketed for educational use, after schools including Oxford University, Arizona State University, and the University of Pennsylvania’s Wharton School of Business experimented with incorporating A.I. into their curricula. This month, the company detailed plans to integrate ChatGPT into every dimension of campus life, with students receiving “personalized” A.I. accounts to accompany them throughout their years in college.
But for English departments, and for college writing in general, the arrival of A.I. has been more vexed. Why bother teaching writing now? The future of the midterm essay may be a quaint worry compared with larger questions about the ramifications of artificial intelligence, such as its effect on the environment, or the automation of jobs. And yet has there ever been a time in human history when writing was so important to the average person? E-mails, texts, social-media posts, angry missives in comments sections, customer-service chats—let alone one’s actual work. The way we write shapes our thinking. We process the world through the composition of text dozens of times a day, in what the literary scholar Deborah Brandt calls our era of “mass writing.” It’s possible that the ability to write original and interesting sentences will become only more important in a future where everyone has access to the same A.I. assistants.
Corey Robin, a writer and a professor of political science at Brooklyn College, read the early stories about ChatGPT with skepticism. Then his daughter, a sophomore in high school at the time, used it to produce an essay that was about as good as those his undergraduates wrote after a semester of work. He decided to stop assigning take-home essays. For the first time in his thirty years of teaching, he administered in-class exams.
Robin told me he finds many of the steps that universities have taken to combat A.I. essays to be “hand-holding that’s not leading people anywhere.” He has become a believer in the passage-identification blue-book exam, in which students name and contextualize excerpts of what they’ve read for class. “Know the text and write about it intelligently,” he said. “That was a way of honoring their autonomy without being a cop.”
His daughter, who is now a senior, complains that her teachers rarely assign full books. And Robin has noticed that college students are more comfortable with excerpts than with entire articles, and prefer short stories to novels. “I don’t get the sense they have the kind of literary or cultural mastery that used to be the assumption upon which we assigned papers,” he said. One study, published last year, found that fifty-eight per cent of students at two Midwestern universities had so much trouble interpreting the opening paragraphs of “Bleak House,” by Charles Dickens, that “they would not be able to read the novel on their own.” And these were English majors.
The return to pen and paper has been a common response to A.I. among professors, with sales of blue books rising significantly at certain universities in the past two years. Siva Vaidhyanathan, a professor of media studies at the University of Virginia, grew dispirited after some students submitted what he suspected was A.I.-generated work for an assignment on how the school’s honor code should view A.I.-generated work. He, too, has decided to return to blue books, and is pondering the logistics of oral exams. “Maybe we go all the way back to 450 B.C.,” he told me.
But other professors have renewed their emphasis on getting students to see the value of process. Dan Melzer, the director of the first-year composition program at the University of California, Davis, recalled that “everyone was in a panic” when ChatGPT first hit. Melzer’s job is to think about how writing functions across the curriculum so that all students, from prospective scientists to future lawyers, get a chance to hone their prose. Consequently, he has an accommodating view of how norms around communication have changed, especially in the internet age. He was sympathetic to kids who viewed some of their assignments as dull and mechanical and turned to ChatGPT to expedite the process. He called the five-paragraph essay—the classic “hamburger” structure, consisting of an introduction, three supporting body paragraphs, and a conclusion—“outdated,” having descended from élitist traditions.
Melzer believes that some students loathe writing because of how it’s been taught, particularly in the past twenty-five years. The No Child Left Behind Act, from 2002, instituted standards-based reforms across all public schools, resulting in generations of students being taught to write according to rigid testing rubrics. As one teacher wrote in the Washington Post in 2013, students excelled when they mastered a form of “bad writing.” Melzer has designed workshops that treat writing as a deliberative, iterative process involving drafting, feedback (from peers and also from ChatGPT), and revision.
“If you assign a generic essay topic and don’t engage in any process, and you just collect it a month later, it’s almost like you’re creating an environment tailored to crime,” he said. “You’re encouraging crime in your community!”
I found Melzer’s pedagogical approach inspiring; I instantly felt bad for routinely breaking my class into small groups so that they could “workshop” their essays, as though the meaning of this verb were intuitively clear. But, as a student, I’d have found Melzer’s focus on process tedious—it requires a measure of faith that all the work will pay off in the end. Writing is hard, regardless of whether it’s a five-paragraph essay or a haiku, and it’s natural, especially when you’re a college student, to want to avoid hard work—this is why classes like Melzer’s are compulsory. “You can imagine that students really want to be there,” he joked.
College is all about opportunity costs. One way of viewing A.I. is as an intervention in how people choose to spend their time. In the early nineteen-sixties, college students spent an estimated twenty-four hours a week on schoolwork. Today, that figure is about fifteen, a sign, to critics of contemporary higher education, that young people are beneficiaries of grade inflation—in a survey conducted by the Harvard Crimson, nearly eighty per cent of the class of 2024 reported a G.P.A. of 3.7 or higher—and lack the diligence of their forebears. I don’t know how many hours I spent on schoolwork in the late nineties, when I was in college, but I recall feeling that there was never enough time. I suspect that, even if today’s students spend less time studying, they don’t feel significantly less stressed. It’s the nature of campus life that everyone assimilates into a culture of busyness, and a lot of that anxiety has been shifted to extracurricular or pre-professional pursuits. A dean at Harvard remarked that students feel compelled to find distinction outside the classroom because they are largely indistinguishable within it.
Eddie, a sociology major at Long Beach State, is older than most of his classmates. He graduated high school in 2010, and worked full time while attending a community college. “I’ve gone through a lot to be at school,” he told me. “I want to learn as much as I can.” ChatGPT, which his therapist recommended to him, was ubiquitous at Long Beach even before the California State University system, which Long Beach is a part of, announced a partnership with OpenAI, giving its four hundred and sixty thousand students access to ChatGPT Edu. “I was a little suspicious of how convenient it was,” Eddie said. “It seemed to know a lot, in a way that seemed so human.”
He told me that he used A.I. “as a brainstorm” but never for writing itself. “I limit myself, for sure.” Eddie works for Los Angeles County, and he was talking to me during a break. He admitted that, when he was pressed for time, he would sometimes use ChatGPT for quizzes. “I don’t know if I’m telling myself a lie,” he said. “I’ve given myself opportunities to do things ethically, but if I’m rushing to work I don’t feel bad about that,” particularly for courses outside his major.
I recognized Eddie’s conflict. I’ve used ChatGPT a handful of times, and on one occasion it accomplished a scheduling task so quickly that I began to understand the intoxication of hyper-efficiency. I’ve felt the need to stop myself from indulging in idle queries. Almost all the students I interviewed in the past few months described the same trajectory: from using A.I. to assist with organizing their thoughts to off-loading their thinking altogether. For some, it became something akin to social media, constantly open in the corner of the screen, a portal for distraction. This wasn’t like paying someone to write a paper for you—there was no social friction, no aura of illicit activity. Nor did it feel like sharing notes, or like passing off what you’d read in CliffsNotes or SparkNotes as your own analysis. There was no real time to reflect on questions of originality or honesty—the student basically became a project manager. And for students who use it the way Eddie did, as a kind of sounding board, there’s no clear threshold where the work ceases to be an original piece of thinking. In April, Anthropic, the company behind Claude, released a report drawn from a million anonymized student conversations with its chatbots. It suggested that more than half of user interactions could be classified as “collaborative,” involving a dialogue between student and A.I. (Presumably, the rest of the interactions were more extractive.)
May, a sophomore at Georgetown, was initially resistant to using ChatGPT. “I don’t know if it was an ethics thing,” she said. “I just thought I could do the assignment better, and it wasn’t worth the time being saved.” But she began using it to proofread her essays, and then to generate cover letters, and now she uses it for “pretty much all” her classes. “I don’t think it’s made me a worse writer,” she said. “It’s perhaps made me a less patient writer. I used to spend hours writing essays, nitpicking over my wording, really thinking about how to phrase things.” College had made her reflect on her experience at an extremely competitive high school, where she had received top grades but retained very little knowledge. As a result, she was the rare student who found college somewhat relaxed. ChatGPT helped her breeze through busywork and deepen her engagement with the courses she felt passionate about. “I was trying to think, Where’s all this time going?” she said. I had never envied a college student until she told me the answer: “I sleep more now.”
Harry Stecopoulos oversees the University of Iowa’s English department, which has more than eight hundred majors. On the first day of his introductory course, he asks students to write by hand a two-hundred-word analysis of the opening paragraph of Ralph Ellison’s “Invisible Man.” There are always a few grumbles, and students have occasionally walked out. “I like the exercise as a tone-setter, because it stresses their writing,” he told me.
The return of blue-book exams might disadvantage students who were encouraged to master typing at a young age. Once you’ve grown accustomed to the smooth rhythms of typing, reverting to a pen and paper can feel stifling. But neuroscientists have found that the “embodied experience” of writing by hand taps into parts of the brain that typing does not. Being able to write one way—even if it’s more efficient—doesn’t make the other way obsolete. There’s something lofty about Stecopoulos’s opening-day exercise. But there’s another reason for it: the handwritten paragraph also begins a paper trail, attesting to voice and style, that a teaching assistant can consult if a suspicious paper is submitted.
Kevin, a third-year student at Syracuse University, recalled that, on the first day of a class, the professor had asked everyone to compose some thoughts by hand. “That brought a smile to my face,” Kevin said. “The other kids are scratching their necks and sweating, and I’m, like, This is kind of nice.”
Kevin had worked as a teaching assistant for a mandatory course that first-year students take to acclimate to campus life. Writing assignments involved basic questions about students’ backgrounds, he told me, but they often used A.I. anyway. “I was very disturbed,” he said. He occasionally uses A.I. to help with translations for his advanced Arabic course, but he’s come to look down on those who rely heavily on it. “They almost forget that they have the ability to think,” he said. Like many former holdouts, Kevin felt that his judicious use of A.I. was more defensible than his peers’ use of it.
As ChatGPT begins to sound more human, will we reconsider what it means to sound like ourselves? Kevin and some of his friends pride themselves on having an ear attuned to A.I.-generated text. The hallmarks, he said, include a preponderance of em dashes and a voice that feels blandly objective. An acquaintance had run an essay that she had written herself through a detector, because she worried that she was starting to phrase things like ChatGPT did. He read her essay: “I realized, like, It does kind of sound like ChatGPT. It was freaking me out a little bit.”
A particularly disarming aspect of ChatGPT is that, if you point out a mistake, it communicates in the backpedalling tone of a contrite student. (“Apologies for the earlier confusion. . . .”) Its mistakes are often referred to as hallucinations, a description that seems to anthropomorphize A.I., conjuring a vision of a sleep-deprived assistant. Some professors told me that they had students fact-check ChatGPT’s work, as a way of discussing the importance of original research and of showing the machine’s fallibility. Hallucination rates have grown worse for most A.I.s, with no single reason for the increase. As a researcher told the Times, “We still don’t know how these models work exactly.”
But many students claim to be unbothered by A.I.’s mistakes. They appear nonchalant about the question of achievement, and even dissociated from their work, since it is only notionally theirs. Joseph, a Division I athlete at a Big Ten school, told me that he saw no issue with using ChatGPT for his classes, but he did make one exception: he wanted to experience his African-literature course “authentically,” because it involved his heritage. Alex, the N.Y.U. student, said that if one of his A.I. papers received a subpar grade his disappointment would be focussed on the fact that he’d spent twenty dollars on his subscription. August, a sophomore at Columbia studying computer science, told me about a class where she was required to compose a short lecture on a topic of her choosing. “It was a class where everyone was guaranteed an A, so I just put it in and I maybe edited like two words and submitted it,” she said. Her professor identified her essay as exemplary work, and she was asked to read from it to a class of two hundred students. “I was a little nervous,” she said. But then she realized, “If they don’t like it, it wasn’t me who wrote it, you know?”
Kevin, by contrast, desired a more general kind of moral distinction. I asked if he would be bothered to receive a lower grade on an essay than a classmate who’d used ChatGPT. “Part of me is able to compartmentalize and not be pissed about it,” he said. “I developed myself as a human. I can have a superiority complex about it. I learned more.” He smiled. But then he continued, “Part of me can also be, like, This is so unfair. I would have loved to hang out with my friends more. What did I gain? I made my life harder for all that time.”
In my conversations, just as college students invariably thought of ChatGPT as merely another tool, people older than forty focussed on its effects, drawing a comparison to G.P.S. and the erosion of our relationship to space. The London cabdrivers rigorously trained in “the knowledge” famously developed abnormally large posterior hippocampi, the part of the brain crucial for long-term memory and spatial awareness. And yet, in the end, most people would probably rather have swifter travel than sharper memories. What is worth preserving, and what do we feel comfortable off-loading in the name of efficiency?
What if we take seriously the idea that A.I. assistance can accelerate learning—that students today are arriving at their destinations faster? In 2023, researchers at Harvard introduced a self-paced A.I. tutor in a popular physics course. Students who used the A.I. tutor reported higher levels of engagement and motivation and did better on a test than those who were learning from a professor. May, the Georgetown student, told me that she often has ChatGPT produce extra practice questions when she’s studying for a test. Could A.I. be here not to destroy education but to revolutionize it? Barry Lam teaches in the philosophy department at the University of California, Riverside, and hosts a popular podcast, Hi-Phi Nation, which applies philosophical modes of inquiry to everyday topics. He began wondering what it would mean for A.I. to actually be a productivity tool. He spoke to me from the podcast studio he built in his shed. “Now students are able to generate in thirty seconds what used to take me a week,” he said. He compared education to carpentry, one of his many hobbies. Could you skip to using power tools without learning how to saw by hand? If students were learning things faster, then it stood to reason that Lam could assign them “something very hard.” He wanted to test this theory, so for final exams he gave his undergraduates a Ph.D.-level question involving denotative language and the German logician Gottlob Frege which was, frankly, beyond me.
“They fucking failed it miserably,” he said. He adjusted his grading curve accordingly.
Lam doesn’t find the use of A.I. morally indefensible. “It’s not plagiarism in the cut-and-paste sense,” he argued, because there’s technically no original version. Rather, he finds it a potential waste of everyone’s time. At the start of the semester, he has told students, “If you’re gonna just turn in a paper that’s ChatGPT-generated, then I will grade all your work by ChatGPT and we can all go to the beach.”
Nobody gets into teaching because he loves grading papers. I talked to one professor who rhapsodized about how much more his students were learning now that he’d replaced essays with short exams. I asked if he missed marking up essays. He laughed and said, “No comment.” An undergraduate at Northeastern University recently accused a professor of using A.I. to create course materials; she filed a formal complaint with the school, requesting a refund for some of her tuition. The dustup laid bare the tension between why many people go to college and why professors teach. Students are raised to understand achievement as something discrete and measurable, but when they arrive at college there are people like me, imploring them to wrestle with difficulty and abstraction. Worse yet, they are told that grades don’t matter as much as they did when they were trying to get into college—only, by this point, students are wired to find the most efficient path possible to good marks.
As the craft of writing is degraded by A.I., original writing has become a valuable resource for training language models. Earlier this year, a company called Catalyst Research Alliance advertised “academic speech data and student papers” from two research studies run in the late nineties and mid-two-thousands at the University of Michigan. The school asked the company to halt its work—the data was available for free to academics anyway—and a university spokesperson said that student data “was not and has never been for sale.” But the situation did lead many people to wonder whether institutions would begin viewing original student work as a potential revenue stream.
According to a recent study from the Organisation for Economic Co-operation and Development, human intellect has declined since 2012. An assessment of tens of thousands of adults in nearly thirty countries showed an over-all decade-long drop in test scores for math and for reading comprehension. Andreas Schleicher, the director for education and skills at the O.E.C.D., hypothesized that the way we consume information today—often through short social-media posts—has something to do with the decline in literacy. (One of Europe’s top performers in the assessment was Estonia, which recently announced that it will bring A.I. to some high-school students in the next few years, sidelining written essays and rote homework exercises in favor of self-directed learning and oral exams.)
Lam, the philosophy professor, used to be a colleague of mine, and for a brief time we were also neighbors. I’d occasionally look out the window and see him building a fence, or gardening. He’s an avid amateur cook, guitarist, and carpenter, and he remains convinced that there is value to learning how to do things the annoying, old-fashioned, and—as he puts it—“artisanal” way. He told me that his wife, Shanna Andrawis, who has been a high-school teacher since 2008, frequently disagreed with his cavalier methods for dealing with large learning models. Andrawis argues that dishonesty has always been an issue. “We are trying to mass educate,” she said, meaning there’s less room to be precious about the pedagogical process. “I don’t have conversations with students about ‘artisanal’ writing. But I have conversations with them about our relationship. Respect me enough to give me your authentic voice, even if you don’t think it’s that great. It’s O.K. I want to meet you where you’re at.”
Ultimately, Andrawis was less fearful of ChatGPT than of the broader conditions of being young these days. Her students have grown increasingly introverted, staring at their phones with little desire to “practice getting over that awkwardness” that defines teen life, as she put it. A.I. might contribute to this deterioration, but it isn’t solely to blame. It’s “a little cherry on top of an already really bad ice-cream sundae,” she said.
When the school year began, my feelings about ChatGPT were somewhere between disappointment and disdain, focussed mainly on students. But, as the weeks went by, my sense of what should be done and who was at fault grew hazier. Eliminating core requirements, rethinking G.P.A., teaching A.I. skepticism—none of the potential fixes could turn back the preconditions of American youth. Professors can reconceive of the classroom, but there is only so much we control. I lacked faith that educational institutions would ever regard new technologies as anything but inevitable. Colleges and universities, many of which had tried to curb A.I. use just a few semesters ago, rushed to partner with companies like OpenAI and Anthropic, deeming a product that didn’t exist four years ago essential to the future of school.
Except for a year spent bumming around my home town, I’ve basically been on a campus for the past thirty years. Students these days view college as consumers, in ways that never would have occurred to me when I was their age. They’ve grown up at a time when society values high-speed takes, not the slow deliberation of critical thinking. Although I’ve empathized with my students’ various mini-dramas, I rarely project myself into their lives. I notice them noticing one another, and I let the mysteries of their lives go. Their pressures are so different from the ones I felt as a student. Although I envy their metabolisms, I would not wish for their sense of horizons.
Education, particularly in the humanities, rests on a belief that, alongside the practical things students might retain, some arcane idea mentioned in passing might take root in their mind, blossoming years in the future. A.I. allows any of us to feel like an expert, but it is risk, doubt, and failure that make us human. I often tell my students that this is the last time in their lives that someone will have to read something they write, so they might as well tell me what they actually think.
Despite all the current hysteria around students cheating, they aren’t the ones to blame. They did not lobby for the introduction of laptops when they were in elementary school, and it’s not their fault that they had to go to school on Zoom during the pandemic. They didn’t create the A.I. tools, nor were they at the forefront of hyping technological innovation. They were just early adopters, trying to outwit the system at a time when doing so has never been so easy. And they have no more control than the rest of us. Perhaps they sense this powerlessness even more acutely than I do. One moment, they are being told to learn to code; the next, it turns out employers are looking for the kind of “soft skills” one might learn as an English or a philosophy major. In February, a labor report from the Federal Reserve Bank of New York reported that computer-science majors had a higher unemployment rate than ethnic-studies majors did—the result, some believed, of A.I. automating entry-level coding jobs.
None of the students I spoke with seemed lazy or passive. Alex and Eugene, the N.Y.U. students, worked hard—but part of their effort went to editing out anything in their college experiences that felt extraneous. They were radically resourceful.
When classes were over and students were moving into their summer housing, I e-mailed with Alex, who was settling in in the East Village. He’d just finished his finals, and estimated that he’d spent between thirty minutes and an hour composing two papers for his humanities classes. Without the assistance of Claude, it might have taken him around eight or nine hours. “I didn’t retain anything,” he wrote. “I couldn’t tell you the thesis for either paper hahhahaha.” He received an A-minus and a B-plus. 
348 notes · View notes
mostlysignssomeportents · 9 months ago
Text
Conspiratorialism as a material phenomenon
Tumblr media
I'll be in TUCSON, AZ from November 8-10: I'm the GUEST OF HONOR at the TUSCON SCIENCE FICTION CONVENTION.
Tumblr media
I think it behooves us to be a little skeptical of stories about AI driving people to believe wrong things and commit ugly actions. Not that I like the AI slop that is filling up our social media, but when we look at the ways that AI is harming us, slop is pretty low on the list.
The real AI harms come from the actual things that AI companies sell AI to do. There's the AI gun-detector gadgets that the credulous Mayor Eric Adams put in NYC subways, which led to 2,749 invasive searches and turned up zero guns:
https://www.cbsnews.com/newyork/news/nycs-subway-weapons-detector-pilot-program-ends/
Any time AI is used to predict crime – predictive policing, bail determinations, Child Protective Services red flags – they magnify the biases already present in these systems, and, even worse, they give this bias the veneer of scientific neutrality. This process is called "empiricism-washing," and you know you're experiencing it when you hear some variation on "it's just math, math can't be racist":
https://pluralistic.net/2020/06/23/cryptocidal-maniacs/#phrenology
When AI is used to replace customer service representatives, it systematically defrauds customers, while providing an "accountability sink" that allows the company to disclaim responsibility for the thefts:
https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs
When AI is used to perform high-velocity "decision support" that is supposed to inform a "human in the loop," it quickly overwhelms its human overseer, who takes on the role of "moral crumple zone," pressing the "OK" button as fast as they can. This is bad enough when the sacrificial victim is a human overseeing, say, proctoring software that accuses remote students of cheating on their tests:
https://pluralistic.net/2022/02/16/unauthorized-paper/#cheating-anticheat
But it's potentially lethal when the AI is a transcription engine that doctors have to use to feed notes to a data-hungry electronic health record system that is optimized to commit health insurance fraud by seeking out pretenses to "upcode" a patient's treatment. Those AIs are prone to inventing things the doctor never said, inserting them into the record that the doctor is supposed to review, but remember, the only reason the AI is there at all is that the doctor is being asked to do so much paperwork that they don't have time to treat their patients:
https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14
My point is that "worrying about AI" is a zero-sum game. When we train our fire on the stuff that isn't important to the AI stock swindlers' business-plans (like creating AI slop), we should remember that the AI companies could halt all of that activity and not lose a dime in revenue. By contrast, when we focus on AI applications that do the most direct harm – policing, health, security, customer service – we also focus on the AI applications that make the most money and drive the most investment.
AI hasn't attracted hundreds of billions in investment capital because investors love AI slop. All the money pouring into the system – from investors, from customers, from easily gulled big-city mayors – is chasing things that AI is objectively very bad at and those things also cause much more harm than AI slop. If you want to be a good AI critic, you should devote the majority of your focus to these applications. Sure, they're not as visually arresting, but discrediting them is financially arresting, and that's what really matters.
All that said: AI slop is real, there is a lot of it, and just because it doesn't warrant priority over the stuff AI companies actually sell, it still has cultural significance and is worth considering.
AI slop has turned Facebook into an anaerobic lagoon of botshit, just the laziest, grossest engagement bait, much of it the product of rise-and-grind spammers who avidly consume get rich quick "courses" and then churn out a torrent of "shrimp Jesus" and fake chainsaw sculptures:
https://www.404media.co/email/1cdf7620-2e2f-4450-9cd9-e041f4f0c27f/
For poor engagement farmers in the global south chasing the fractional pennies that Facebook shells out for successful clickbait, the actual content of the slop is beside the point. These spammers aren't necessarily tuned into the psyche of the wealthy-world Facebook users who represent Meta's top monetization subjects. They're just trying everything and doubling down on anything that moves the needle, A/B splitting their way into weird, hyper-optimized, grotesque crap:
https://www.404media.co/facebook-is-being-overrun-with-stolen-ai-generated-images-that-people-think-are-real/
In other words, Facebook's AI spammers are laying out a banquet of arbitrary possibilities, like the letters on a Ouija board, and the Facebook users' clicks and engagement are a collective ideomotor response, moving the algorithm's planchette to the options that tug hardest at our collective delights (or, more often, disgusts).
So, rather than thinking of AI spammers as creating the ideological and aesthetic trends that drive millions of confused Facebook users into condemning, praising, and arguing about surreal botshit, it's more true to say that spammers are discovering these trends within their subjects' collective yearnings and terrors, and then refining them by exploring endlessly ramified variations in search of unsuspected niches.
(If you know anything about AI, this may remind you of something: a Generative Adversarial Network, in which one bot creates variations on a theme, and another bot ranks how closely the variations approach some ideal. In this case, the spammers are the generators and the Facebook users they elicit reactions from are the discriminators.)
https://en.wikipedia.org/wiki/Generative_adversarial_network
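As a loose sketch of that generate-and-rank dynamic, here's a toy loop in which a "generator" mutates candidate posts and a fixed scoring function stands in for the audience's reactions. The target string and scoring rule are invented for the example, and this is really a hill-climbing caricature of the spammer/audience feedback loop; in an actual GAN, both the generator and the discriminator are neural networks trained against each other.

```python
import random
import string

# "Generator": mutate candidate posts. "Discriminator": a stand-in scoring
# function for whatever the audience happens to engage with.
TARGET = "shrimp jesus"                      # whatever gets clicks this month
ALPHABET = string.ascii_lowercase + " "

def score(candidate):                        # engagement proxy: character matches
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):                       # random variation on the current best
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

best = "".join(random.choice(ALPHABET) for _ in TARGET)
for _ in range(20_000):
    variant = mutate(best)
    if score(variant) >= score(best):        # double down on whatever moves the needle
        best = variant

print(best)                                  # converges toward the engagement bait
```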
I got to thinking about this today while reading User Mag, Taylor Lorenz's superb newsletter, and her reporting on a new AI slop trend, "My neighbor’s ridiculous reason for egging my car":
https://www.usermag.co/p/my-neighbors-ridiculous-reason-for
The "egging my car" slop consists of endless variations on a story in which the poster (generally a figure of sympathy, canonically a single mother of newborn twins) complains that her awful neighbor threw dozens of eggs at her car to punish her for parking in a way that blocked his elaborate Hallowe'en display. The text is accompanied by an AI-generated image showing a modest family car that has been absolutely plastered with broken eggs, dozens upon dozens of them.
According to Lorenz, variations on this slop are topping very large Facebook discussion forums totalling millions of users, like "Movie Character…, USA Story, Volleyball Women, Top Trends, Love Style, and God Bless." These posts link to SEO sites laden with programmatic advertising.
The funnel goes:
i. Create outrage and hence broad reach;
ii. A small percentage of those who see the post will click through to the SEO site;
iii. A small fraction of those users will click a low-quality ad;
iv. The ad will pay homeopathic sub-pennies to the spammer.
The revenue per user on this kind of scam is next to nothing, so it only works if it can get very broad reach, which is why the spam is so designed for engagement maximization. The more discussion a post generates, the more users Facebook recommends it to.
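A back-of-the-envelope version of that funnel shows why only enormous reach makes the scam pay; every number below is an assumption for illustration, not reported data.

```python
# Funnel: reach -> clickthrough -> ad click -> payout. All figures invented.
reach            = 1_000_000   # people who see the outrage-bait post
clickthrough     = 0.002       # fraction who visit the SEO site
ad_click_rate    = 0.01        # fraction of visitors who click a low-quality ad
payout_per_click = 0.02        # dollars per ad click ("homeopathic sub-pennies")

revenue = reach * clickthrough * ad_click_rate * payout_per_click
print(f"${revenue:.2f}")       # $0.40 per million views -- hence the volume game
```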
These are very effective engagement bait. Almost all AI slop gets some free engagement in the form of arguments between users who don't know they're commenting on an AI scam and people hectoring them for falling for the scam. This is like the free square in the middle of a bingo card.
Beyond that, there's multivalent outrage: some users are furious about food wastage; others about the poor, victimized "mother" (some users are furious about both). Not only do users get to voice their fury at both of these imaginary sins, they can also argue with one another about whether, say, food wastage even matters when compared to the petty-minded aggression of the "perpetrator." These discussions also offer lots of opportunity for violent fantasies about the bad guy getting a comeuppance, offers to travel to the imaginary AI-generated suburb to dole out a beating, etc. All in all, the spammers behind this tedious fiction have really figured out how to rope in all kinds of users' attention.
Of course, the spammers don't get much from this. There isn't such a thing as an "attention economy." You can't use attention as a unit of account, a medium of exchange or a store of value. Attention – like everything else that you can't build an economy upon, such as cryptocurrency – must be converted to money before it has economic significance. Hence that tooth-achingly trite high-tech neologism, "monetization."
The monetization of attention is very poor, but AI is heavily subsidized or even free (for now), so the largest venture capital and private equity funds in the world are pouring billions in public pension money and rich people's savings into CO2 plumes, GPUs, and botshit so that a bunch of hustle-culture weirdos in the Pacific Rim can make a few dollars by tricking people into clicking through engagement bait slop – twice.
The slop isn't the point of this, but the slop does have the useful function of making the collective ideomotor response visible and thus providing a peek into our hopes and fears. What does the "egging my car" slop say about the things that we're thinking about?
Lorenz cites Jamie Cohen, a media scholar at CUNY Queens, who points out that subtext of this slop is "fear and distrust in people about their neighbors." Cohen predicts that "the next trend, is going to be stranger and more violent.”
This feels right to me. The corollary of mistrusting your neighbors, of course, is trusting only yourself and your family. Or, as Margaret Thatcher liked to say, "There is no such thing as society. There are individual men and women and there are families."
We are living in the tail end of a 40 year experiment in structuring our world as though "there is no such thing as society." We've gutted our welfare net, shut down or privatized public services, all but abolished solidaristic institutions like unions.
This isn't mere aesthetics: an atomized society is far more hospitable to extreme wealth inequality than one in which we are all in it together. When your power comes from being a "wise consumer" who "votes with your wallet," then all you can do about the climate emergency is buy a different kind of car – you can't build the public transit system that will make cars obsolete.
When you "vote with your wallet" all you can do about animal cruelty and habitat loss is eat less meat. When you "vote with your wallet" all you can do about high drug prices is "shop around for a bargain." When you vote with your wallet, all you can do when your bank forecloses on your home is "choose your next lender more carefully."
Most importantly, when you vote with your wallet, you cast a ballot in an election that the people with the thickest wallets always win. No wonder those people have spent so long teaching us that we can't trust our neighbors, that there is no such thing as society, that we can't have nice things. That there is no alternative.
The commercial surveillance industry really wants you to believe that they're good at convincing people of things, because that's a good way to sell advertising. But claims of mind-control are pretty goddamned improbable – everyone who ever claimed to have managed the trick was lying, from Rasputin to MK-ULTRA:
https://pluralistic.net/HowToDestroySurveillanceCapitalism
Rather than seeing these platforms as convincing people of things, we should understand them as discovering and reinforcing the ideology that people have been driven to by material conditions. Platforms like Facebook show us to one another, let us form groups that can imperfectly fill in for the solidarity we're desperate for after 40 years of "no such thing as society."
The most interesting thing about "egging my car" slop is that it reveals that so many of us are convinced of two contradictory things: first, that everyone else is a monster who will turn on you for the pettiest of reasons; and second, that we're all the kind of people who would stick up for the victims of those monsters.
Tumblr media
Tor Books has just published two new, free LITTLE BROTHER stories: VIGILANT, about creepy surveillance in distance education; and SPILL, about oil pipelines and indigenous landback.
Tumblr media Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/10/29/hobbesian-slop/#cui-bono
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
308 notes · View notes
vixellaivy · 16 days ago
Text
hi i strike again :)
Things i did to make U.A better (part 3)
The U.A Entrance Exam
because the show tells me U.A is notoriously hard to pass but i don't believe it. (why that fucking pervert passed??)
so i created a new one. ;)
The new U.A. entrance exam is a two-phase, high-risk simulation designed to test applicants not only on their abilities, but on their instincts, character, and response to real-world crisis.
At first, candidates participate in what appears to be a traditional entrance trial, often a combat course, obstacle run, or rescue mission. It is structured, scored, and monitored. This is the decoy phase, intentionally designed to make students believe it’s the full exam.
However, once the first phase ends and applicants believe they’ve completed the test, the environment is abruptly disrupted by an unexpected, large-scale emergency. This could be a villain attack, a natural disaster, a sabotage event, or a moral dilemma, all randomized, unscripted, and unique to each batch.
This is the true exam.
Applicants are assessed on:
* Initiative and leadership
* Quick thinking and emotional control
* Moral integrity
* Instinctive heroism
Not strength alone.
so my baby shinso could pass
The simulation is randomized and different for each group.
It utilizes:
* Professional actors (often Pro Heroes or upperclassmen)
* Artificial intelligence systems
* Dynamic, destructible environments
All to simulate a real-life crisis with maximum unpredictability.
Key features:
* No scoreboards, rankings, or explicit instructions.
* Students are observed on decision-making, initiative, morality, and response under pressure, not power level.
* Success is not based on defeating a threat, but demonstrating authentic hero instinct and leadership without external validation.
* Failure to act, reckless behavior, or prioritizing personal gain results in immediate disqualification.
Participants are not informed that the scenario is the exam until it concludes.
Passing is definitely rare now
Criteria
Instinctive Heroism – 20%
Moral Judgment Under Pressure – 15%
Adaptability & Resourcefulness – 15%
Leadership & Teamwork – 15%
Emotional Control – 10%
Situational Awareness – 10%
Resilience – 10%
Ethical Consistency – 5%
U.A Clubs
This was mentioned once in the show. But i liked the idea so here are my proposals
(if you have more ideas tell me)
Hero-Focused Clubs
-Clubs that enhance hero skills outside of class.
1. Hero Costume Fashion Club
Designs and tailors hero suits, studies iconic hero looks, and creates costumes.
> Perfect for Support Course x Hero Course collabs!
2. Villain Psychology Watch
Students analyze villain behavior, motivations, and strategies.
> Often invited to hero analysis discussions or debate ethics.
3. Battle Theater Club
Performs mock combat scenarios with dramatization — half theater, half sparring.
> Improves improvisation, situational thinking, and audience control.
4. Hero Analysis Club
Analyzes hero tactics, combat styles, and quirk matchups using real footage and simulations.
> Sharpens strategy, battlefield awareness, and quick decision-making.
Support + Business+ General Course Linked Clubs
5. Gadgeteers Union
Student inventors tinker with new hero tools and support gear. Occasionally they build tech that makes dorm life better.
> Hosts annual "Gadget Games" to show off tech.
6. Hero Startup Society
Business students create "fake" hero agencies, products, and brand plans for hero students
> Simulates managing an agency, including PR, interns, and finance.
7. Media & Broadcast Club
Runs the U.A. student news, podcasts, and hero festival streams.
> Sometimes interviews heroes or live-reports rescues (with permission).
Physical & Training Clubs
8. Quirk Fitness Club
Specialized gym routines for quirk types - from strength to stamina to regulation
>Recovery Girl drops by sometimes to advise safe limits.
9. Martial Arts Collective
Students share combat styles and train with & without quirks
>Includes Aikido, Judo, Karate, Capoeira, etc.
10. Parkour & Urban Movement Club
Students explore mobility through jumps, flips, and climbing.
>Great for mobility-based quirks or rescue tactics
Creative & Chill Clubs (because U.A. kids need downtime too)
11. Sketch & Comics Society
Students draw their favorite heroes, comics, or even create original hero manga
>Low pressure, cozy atmosphere - fan of Midnight? She's popular here.
12. Music & Dance Club
Relax, jam, dance, or write hero-themed music. They play during the Sports Festival booths.
>Great for stress relief, social bonding, or school events.
Academic / High-Value Clubs
13. Debate & Ethics Club
Hero students debate moral dilemmas, quirk laws, and political changes.
>Also holds mock trials and hero-vs-villain defense debates
14. Tactical Games Club
Chess, shogi, card battles, or video games that simulate strategy and battlefield control
>Helps develop calm thinking and foresight under pressure.
15. Language & Diplomacy Club
Practice different languages, etiquette, and cultural customs for international hero work.
>Also stages model "Hero Summits" like mini-UN meetings
that's all for now ;)
75 notes · View notes
reasonsforhope · 1 year ago
Text
The Surucuá community in the state of Pará is the first to receive an Amazonian Creative Laboratory, a compact mobile biofactory designed to help kick-start the Amazon’s bioeconomy.
Instead of simply harvesting forest-grown crops, traditional communities in the Amazon Rainforest can use the biofactories to process, package and sell bean-to-bar chocolate and similar products at premium prices.
Having a livelihood coming directly from the forest encourages communities to stay there and protect it rather than engaging in harmful economic activities in the Amazon.
The project is in its early stages, but it demonstrates what the Amazon’s bioeconomy could look like: an economic engine that experts estimate could generate at least $8 billion per year.
In a tent in the Surucuá community in the Brazilian Amazonian state of Pará, Jhanne Franco teaches 15 local adults how to make chocolate from scratch using small-scale machines instead of grinding the cacao beans by hand. As a chocolatier from another Amazonian state, Rondônia, Franco isn’t just an expert in cocoa production, but proof that the bean-to-bar concept can work in the Amazon Rainforest.
“[Here] is where we develop students’ ideas,” she says, gesturing to the classroom set up in a clearing in the world’s greatest rainforest. “I’m not here to give them a prescription. I want to teach them why things happen in chocolate making, so they can create their own recipes,” Franco tells Mongabay.
The training program is part of a concept developed by the nonprofit Amazônia 4.0 Institute, designed to protect the Amazon Rainforest. It was conceived in 2017 when two Brazilian scientists, brothers Carlos and Ismael Nobre, started thinking of ways to prevent the Amazon from reaching its impending “tipping point,” when deforestation turns the rainforest into a dry savanna.
Their solution is to build a decentralized bioeconomy rather than seeing the Amazon as a commodity provider for industries elsewhere. Investments would be made in sustainable, forest-grown crops such as cacao, cupuaçu and açaí, rather than cattle and soy, for which vast swaths of the forest have already been cleared. The profits would stay within local communities.
A study by the World Resources Institute (WRI) and the New Climate Economy, published in June 2023, analyzed 13 primary products from the Amazon, including cacao and cupuaçu, and concluded that even this small sample of products could grow the bioeconomy’s GDP by at least $8 billion per year.
To add value to these forest-grown raw materials requires some industrialization, leading to the creation of the Amazonian Creative Laboratories (LCA). These are compact, mobile and sustainable biofactories that incorporate industrial automation and artificial intelligence into the chocolate production process, allowing traditional communities to not only harvest crops, but also process, package and sell the finished products at premium prices.
The logic is simple: without an attractive income, people may be forced to sell or use their land for cattle ranching, soy plantations, or mining. On the other hand, if they can make a living from the forest, they have an incentive to stay there and protect it, becoming the Amazon’s guardians.
“The idea is to translate this biological and cultural wealth into economic activity that’s not exploitative or harmful,” Ismael Nobre tells Mongabay.
-via Mongabay News, January 2, 2024
286 notes · View notes
probablyasocialecologist · 11 months ago
Text
The programmer Simon Willison has described the training for large language models as “money laundering for copyrighted data,” which I find a useful way to think about the appeal of generative-A.I. programs: they let you engage in something like plagiarism, but there’s no guilt associated with it because it’s not clear even to you that you’re copying.

Some have claimed that large language models are not laundering the texts they’re trained on but, rather, learning from them, in the same way that human writers learn from the books they’ve read. But a large language model is not a writer; it’s not even a user of language. Language is, by definition, a system of communication, and it requires an intention to communicate. Your phone’s auto-complete may offer good suggestions or bad ones, but in neither case is it trying to say anything to you or the person you’re texting. The fact that ChatGPT can generate coherent sentences invites us to imagine that it understands language in a way that your phone’s auto-complete does not, but it has no more intention to communicate. It is very easy to get ChatGPT to emit a series of words such as “I am happy to see you.” There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you. A dog can communicate that it is happy to see you, and so can a prelinguistic child, even though both lack the capability to use words. ChatGPT feels nothing and desires nothing, and this lack of intention is why ChatGPT is not actually using language. What makes the words “I’m happy to see you” a linguistic utterance is not that the sequence of text tokens that it is made up of are well formed; what makes it a linguistic utterance is the intention to communicate something.

Because language comes so easily to us, it’s easy to forget that it lies on top of these other experiences of subjective feeling and of wanting to communicate that feeling. We’re tempted to project those experiences onto a large language model when it emits coherent sentences, but to do so is to fall prey to mimicry; it’s the same phenomenon as when butterflies evolve large dark spots on their wings that can fool birds into thinking they’re predators with big eyes. There is a context in which the dark spots are sufficient; birds are less likely to eat a butterfly that has them, and the butterfly doesn’t really care why it’s not being eaten, as long as it gets to live. But there is a big difference between a butterfly and a predator that poses a threat to a bird.

A person using generative A.I. to help them write might claim that they are drawing inspiration from the texts the model was trained on, but I would again argue that this differs from what we usually mean when we say one writer draws inspiration from another. Consider a college student who turns in a paper that consists solely of a five-page quotation from a book, stating that this quotation conveys exactly what she wanted to say, better than she could say it herself. Even if the student is completely candid with the instructor about what she’s done, it’s not accurate to say that she is drawing inspiration from the book she’s citing. The fact that a large language model can reword the quotation enough that the source is unidentifiable doesn’t change the fundamental nature of what’s going on. As the linguist Emily M. Bender has noted, teachers don’t ask students to write essays because the world needs more student essays.
The point of writing essays is to strengthen students’ critical-thinking skills; in the same way that lifting weights is useful no matter what sport an athlete plays, writing essays develops skills necessary for whatever job a college student will eventually get. Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.
31 August 2024
106 notes · View notes
spectralpixelsredone · 18 days ago
Text
How L.O.V outsmarted an entire country of Heroes
The League of Villains (LOV), led by Tomura Shigaraki and including the Vanguard Action Squad, outsmarted an entire country of heroes in My Hero Academia through a combination of strategic planning, exploiting systemic weaknesses, and leveraging their unique quirks and motivations. Below, I’ll break down how they achieved this, blending canon reasons from the manga/anime with some speculative analysis based on their actions and the broader context of the story.
Canon Reasons for the LOV/Vanguard Action Squad’s Success
Exploitation of Hero Society’s Complacency:
Canon Evidence: The LOV capitalized on the overconfidence of hero society, particularly during the Training Camp Arc (Season 3, manga chapters 70–83). Heroes, especially those at U.A. High, underestimated the threat posed by the LOV, assuming their superior numbers and training would prevent any significant villainous activity. The Vanguard Action Squad’s attack on the training camp was a calculated move to disrupt this sense of security.
Details: The heroes were unprepared for a coordinated assault on a remote location, believing it was a low-risk environment. The LOV’s ability to infiltrate and execute a precise strike demonstrated their understanding of hero society’s reliance on predictable systems and schedules.
Strategic Planning and Intelligence Gathering:
Canon Evidence: The LOV, under Shigaraki’s leadership and All For One’s guidance, conducted thorough reconnaissance. They obtained critical information about U.A.’s training camp location and schedule, likely through spies or hacking (manga chapter 72). This allowed them to strike at a moment when the students were vulnerable and separated from professional heroes.
Details: The Vanguard Action Squad was specifically assembled with members whose quirks were suited for chaos and disruption (e.g., Dabi’s fire, Muscular’s strength, Moonfish’s blade-teeth). Their plan to kidnap Bakugo was a targeted strike to destabilize U.A. and exploit his volatile personality, showing a deep understanding of their targets.
Psychological Warfare and Misdirection:
Canon Evidence: Shigaraki’s leadership evolved to focus on sowing fear and division. The attack on the training camp wasn’t just about physical damage but also about undermining public trust in heroes and in hero society (manga chapter 83). By targeting students and kidnapping Bakugo, they aimed to expose U.A.’s vulnerabilities, which would shake public confidence in heroes like All Might.
Details: The LOV’s actions were designed to create a spectacle. The media frenzy following the attack amplified their impact, as seen in news reports discussing the failure of heroes to protect their students (anime Season 3, Episode 14). This psychological blow was as critical as the physical one, aligning with Shigaraki’s goal to dismantle the status quo.
Diverse and Powerful Quirks:
Canon Evidence: The Vanguard Action Squad’s members had quirks that gave them a tactical edge. For example, Kurogiri’s Warp Gate quirk allowed for rapid infiltration and escape (manga chapter 73), bypassing hero defenses. Spinner and Magne’s quirks, combined with Dabi’s destructive flames, created chaos that overwhelmed the heroes and students.
Details: The LOV’s ability to coordinate their quirks effectively (e.g., Toga’s blood-based tracking, Twice’s cloning for distraction) made their small group disproportionately effective against a larger, less cohesive force.
All For One’s Backing:
Canon Evidence: The LOV’s operations were supported by All For One, who provided resources, Nomus (artificial super-powered beings), and strategic oversight (manga chapters 89–90). His influence gave the LOV access to advanced technology and quirks that heroes couldn’t anticipate.
Details: The Nomus deployed during the attack were a significant threat, distracting pro heroes like Vlad King and Aizawa, allowing the Vanguard to focus on their objective (kidnapping Bakugo). All For One’s long-term planning ensured the LOV had the tools to execute complex operations.
Speculative Analysis: How They Outsmarted the Heroes
Exploiting Systemic Weaknesses:
Hero society in My Hero Academia is heavily bureaucratic and reliant on a few top heroes like All Might. The LOV likely recognized that smaller, targeted attacks could expose these structural flaws. By hitting a remote training camp, they avoided direct confrontation with top-tier heroes while still achieving a high-impact outcome. This suggests a level of strategic foresight, possibly informed by All For One’s decades of experience in the underworld.
Shigaraki’s Growing Tactical Acumen:
While Shigaraki starts as an impulsive leader, his growth under All For One’s mentorship (manga chapters 68–70) shows him learning to think several steps ahead. His decision to target Bakugo specifically was a calculated move, possibly based on observing Bakugo’s behavior during the Sports Festival (manga chapter 44), where his aggression made him a potential recruit for the LOV’s ideology. This indicates Shigaraki’s ability to exploit psychological profiles, a skill that likely grew as he led more operations.
Small, Agile Team vs. Large, Bureaucratic System:
The Vanguard Action Squad’s small size allowed for flexibility and speed, contrasting with the heroes’ slower, more bureaucratic response. Heroes were spread thin across the country, and the LOV likely anticipated that mobilizing a large hero force to a remote area would take time, giving them a window to act. This speculative advantage mirrors guerrilla warfare tactics, where a smaller force uses surprise and mobility to outmaneuver a larger one.
Underestimation of Shigaraki’s Leadership:
Heroes initially viewed Shigaraki as a disorganized thug (e.g., All Might’s comments in manga chapter 11). This underestimation allowed the LOV to operate under the radar, building their capabilities without drawing full attention until it was too late. The heroes’ focus on All For One as the primary threat blinded them to Shigaraki’s growing competence, a miscalculation the LOV exploited.
Conclusion
The LOV and Vanguard Action Squad outsmarted hero society by exploiting complacency, conducting meticulous planning, using psychological warfare, leveraging powerful quirks, and benefiting from All For One’s resources. Their success stemmed from targeting vulnerabilities in hero society’s structure, using a small but effective team, and capitalizing on the element of surprise. Shigaraki’s evolving leadership and the LOV’s willingness to take bold risks allowed them to achieve outsized impact against a numerically superior but overconfident opponent.
20 notes · View notes
pupmkincake2000 · 29 days ago
Text
I've been thinking about possible hankcon AUs I've never actually seen... Just imagine
Noir Detective AU
Connor as a young, brilliant but uptight federal agent. Hank as a grizzled, burned-out detective who doesn’t want a partner.
They’re forced to work together on a case neither of them wants. Their clashing worldviews make everything harder, until the case breaks open and so does their emotional distance. Classic “enemies to reluctant partners to… maybe something else.”
Fantasy AU
Connor as a scholar, mage apprentice, or monastery scribe with rigid discipline. Hank as a fallen knight, exiled warrior, or mercenary with a drinking problem.
They’re sent on a quest together, maybe to defeat an ancient curse, guard a village, or find a missing artifact. Connor wants structure. Hank wants everyone to leave him alone. Cue arguments, slow trust, and long nights by firelight.
Dark Medieval AU
Connor as a monster hunter, emotionally detached, trained for efficiency. Hank as an herbalist, woodsman, or ex-soldier who understands people, not magic.
They travel together to stop dark creatures, cults, and political rot. Connor is quiet. Hank is blunt. Their growing bond becomes a question of who protects whom, and what makes someone human.
(it could be vice versa as well. Hank as a monster hunter, Connor as a herbalist etc. which fits them even more)
Mafia/Narcotics AU
Connor as an undercover cop working inside a crime syndicate. Hank as a disgraced former detective who knows how the system really works.
Connor is told to find dirt on Hank. But the more they interact, the more he realizes Hank isn’t corrupt, just broken. Dangerous secrets, moral gray areas, and blurred lines follow.
Academic AU
Hank as a university professor (literature, philosophy, criminology etc.) Connor as a grad student or research assistant with a spotless academic record and zero social ease.
They work on a project together, forced into the same orbit. Their connection isn’t some cliché “hot for teacher” thing, it’s two people, both isolated in different ways, learning how to connect.
Corporate Thriller AU
Connor as a cybersecurity analyst or forensic accountant. Hank as an internal investigator or ex-hacker turned security consultant (or something that just fits him more).
Together, they uncover corporate espionage or AI-related corruption. Cold efficiency meets jaded intuition. Mutual frustration turns into mutual respect.
Post-Apocalyptic AU
Connor as a medic, mechanic, or survivalist — calm and precise. Hank as a former cop, now one of the few who can keep people together.
They survive together, in a train convoy, forest outpost, or walled city. It’s not about dramatic declarations. It’s about sharing food, fixing things, and staying human in a broken world.
Historical AU (1940s–1970s)
Hank as a hardboiled PI. Connor as a government lawyer, war veteran, or journalist.
They team up on a case that exposes systemic injustice and in the process, confront their own alienation. Queer-coded, quiet, full of unsaid things and cigarette smoke.
Sci-Fi AU (but no androids)
Connor as a genetically engineered human, not artificial, but grown for perfection. Hank as military or intelligence assigned to monitor him.
Connor struggles with identity, not programming. Hank sees a person when others see an asset. Their bond builds not on rebellion, but choice and learning what “being human” really means.
And of course, they fall in love with each other in each of these AUs and live happily ever after! Or... at least quite happily.
20 notes · View notes
beardedmrbean · 3 months ago
Text
Nearly two months after hundreds of prospective California lawyers complained that their bar exams were plagued with technical problems and irregularities, the state's legal licensing body has caused fresh outrage by admitting that some multiple-choice questions were developed with the aid of artificial intelligence.
The State Bar of California said in a news release Monday that it will ask the California Supreme Court to adjust test scores for those who took its February bar exam.
But it declined to acknowledge significant problems with its multiple-choice questions — even as it revealed that a subset of questions were recycled from a first-year law student exam, while others were developed with the assistance of AI by ACS Ventures, the State Bar’s independent psychometrician.
"The debacle that was the February 2025 bar exam is worse than we imagined," said Mary Basick, assistant dean of academic skills at UC Irvine Law School. "I'm almost speechless. Having the questions drafted by non-lawyers using artificial intelligence is just unbelievable."
After completing the exam, Basick said, some test takers complained that some of the questions felt as if they were written by AI.
"I defended the bar,” Basick said. “'No way! They wouldn't do that!’"
Using AI-developed questions written by non-legally-trained psychometricians represented "an obvious conflict of interest," Basick argued, because "these are the same psychometricians tasked with establishing that the questions are valid and reliable."
"It's a staggering admission," agreed Katie Moran, an associate professor at the University of San Francisco School of Law who specializes in bar exam preparation.
"The State Bar has admitted they employed a company to have a non-lawyer use AI to draft questions that were given on the actual bar exam," she said. "They then paid that same company to assess and ultimately approve of the questions on the exam, including the questions the company authored."
The State Bar, which is an administrative arm of the California Supreme Court, said Monday that the majority of multiple-choice questions were developed by Kaplan Exam Services, a company it contracted with last year as it sought to save money.
According to a recent presentation by the State Bar, 100 of the 171 scored multiple-choice questions were made by Kaplan and 48 were drawn from a first-year law student exam. A smaller subset of 23 scored questions were made by ACS Ventures, the State Bar’s psychometrician, and developed with artificial intelligence.
"We have confidence in the validity of the [multiple-choice questions] to accurately and fairly assess the legal competence of test-takers," Leah Wilson, the State Bar’s executive director, said in a statement.
On Tuesday, a spokesperson for the State Bar told The Times that all questions — including the 29 scored and unscored questions from the agency's independent psychometrician that were developed with the assistance of AI — were reviewed by content validation panels and subject matter experts ahead of the exam for factors including legal accuracy, minimum competence and potential bias.
When measured for reliability, the State Bar told The Times, the combined scored multiple-choice questions from all sources — including AI — performed "above the psychometric target of 0.80."
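(The article doesn't say which reliability statistic the psychometrician used. For dichotomously scored multiple-choice items, one common choice is the Kuder-Richardson Formula 20, or KR-20, for which 0.80 is a conventional acceptability threshold. The sketch below is illustrative only, with an invented item-response matrix, but it shows how such a coefficient is computed from test takers' right/wrong answers.)

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """Kuder-Richardson 20 reliability for a 0/1 item-response matrix
    (rows = test takers, columns = items). Illustrative assumption only;
    the State Bar has not said which statistic was actually used."""
    k = responses.shape[1]                         # number of items
    p = responses.mean(axis=0)                     # proportion correct per item
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Invented data: 6 hypothetical test takers answering 5 items,
# ordered from strongest to weakest performer.
scores = np.array([
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
])
print(round(kr20(scores), 2))  # ~0.9, i.e. above the 0.80 target
```

A coefficient like this measures the internal consistency of scores across items; it is a separate question from whether individual questions are legally accurate, which is what the content validation panels described above were reviewing.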
The State Bar also dismissed the idea of a conflict of interest.
"The process to validate questions and test for reliability is not a subjective one," the State Bar said, "and the statistical parameters used by the psychometrician remain the same regardless of the source of the question."
Alex Chan, an attorney who serves as chair of the State Bar's Committee of Bar Examiners, told The Times that only a small subset of questions used AI — and not necessarily to create the questions.
"The professors are suggesting that we used AI to draft all of the multiple choice questions, as opposed to using AI to vet them," Chan said. "That is not my understanding."
Chan noted that the California Supreme Court urged the State Bar in October to review "the availability of any new technologies, such as artificial intelligence, that might innovate and improve upon the reliability and cost-effectiveness of such testing."
"The court has given its guidance to consider the use of AI, and that's exactly what we're going to do," Chan said.
But a spokesperson for California's highest court said Tuesday that justices found out only this week that the State Bar had utilized AI in developing exam questions.
"Until yesterday’s State Bar press release, the court was unaware that AI had been used to draft any of the multiple-choice questions," a spokesperson said in a statement.
Last year, as the State Bar faced a $22-million deficit in its general fund, it decided to cut costs by ditching the National Conference of Bar Examiners’ Multistate Bar Examination, a system used by most states, and move to a new hybrid model of in-person and remote testing. It cut an $8.25-million deal with test prep company Kaplan Exam Services to create test questions and hired Meazure Learning to administer the exam.
There were multiple problems with the State Bar’s rollout of the new exams. Some test takers reported they were kicked off the online testing platforms or experienced screens that lagged and displayed error messages. Others complained the multiple-choice test questions had typos, consisted of nonsense questions and left out important facts.
The botched exams prompted some students to file a federal lawsuit against Meazure Learning. Meanwhile, California Senate Judiciary Chair Thomas J. Umberg (D-Santa Ana) called for an audit of the State Bar and the California Supreme Court directed the agency to revert to traditional in-person administering of July bar exams.
But the State Bar is pressing forward with its new system of multiple-choice questions — even though some academic experts have repeatedly flagged problems with the quality of the February exam questions.
"Many have expressed concern about the speed with which the Kaplan questions were drafted and the resulting quality of those questions," Basick and Moran wrote April 16 in a public comment to the Committee of Bar Examiners. "The 50 released practice questions — which were heavily edited and re-released just weeks before the exam — still contain numerous errors. This has further eroded our confidence in the quality of the questions."
Historically, Moran said, exam questions written by the National Conference of Bar Examiners have taken years to develop.
Reusing some of the questions from the first-year law exam raised red flags, Basick said. An exam to figure out if a person had learned enough in their first year of law school is different from one that determines whether a test taker is minimally competent to practice law, she argued.
"It's a much different standard," she said. "It's not just, 'Hey, do you know this rule?' It is 'Do you know how to apply it in a situation where there's ambiguity, and determine the correct course of action?'"
Also, using AI and recycling questions from a first-year law exam represented a major change to bar exam preparation, Basick said. She argued such a change required a two-year notice under California's Business and Professions Code.
But the State Bar told The Times that the sources of the questions had not triggered that two-year notice.
"The fact there were multiple sources for the development of questions did not impact exam preparation," the State Bar said.
Basick said she grew concerned in early March when, she said, the State Bar kicked her and other academic experts off their question-vetting panels.
She said the State Bar argued that those law professors had worked with questions drafted by the National Conference of Bar Examiners in the last six months, which could raise issues of potential copyright infringement.
"Ironically, what they did instead is have non-lawyers draft questions using artificial intelligence," she said. "The place the artificial intelligence would have gotten their information from has to be the NCBE questions, because there's nothing else available. What else would artificial intelligence use?"
Ever since the February exam debacle, the State Bar has underplayed the idea that there were substantial problems with the multiple-choice questions. Instead, it has focused on the problems with Meazure Learning.
“We are scrutinizing the vendor’s performance in meeting their contractual obligations,” the State Bar said in a document that listed the problems test takers experienced and highlighted the relevant performance expectations laid out in the contract.
But critics have accused the State Bar of shifting blame — and argued it has failed to acknowledge the seriousness of the problems with multiple-choice questions.
Moran called on the State Bar to release all 200 questions that were on the test for transparency and to allow future test takers a chance to get used to the different questions. She also called on the State Bar to return to the multi-state bar exam for the July exams.
"They have just shown that they cannot make a fair test," she said.
Chan said the Committee of Bar Examiners will meet on May 5 to discuss non-scoring adjustments and remedies. But he doubted that the State Bar would release all 200 questions or revert to the National Conference of Bar Examiners exams in July.
The NCBE's exam security would not allow any form of remote testing, he said, and the State Bar's recent surveys showed almost 50% of California bar applicants want to keep the remote option.
"We're not going back to NCBE — at least in the near term," Chan said.
22 notes · View notes