lise-thethinker
Know Your Enemy
68K posts
He/Him
Still down bad for Shadowheart but now also for Lune (Expedition 33)
Last active 60 minutes ago
lise-thethinker · 12 minutes ago
Text
Tumblr media
Via stavvers.bsky.social: "ea nasir is back baby awooo"
3K notes · View notes
lise-thethinker · 13 minutes ago
Text
There’s a new (unreviewed draft of a) scientific article out, examining the relationship between Large Language Model (LLM) use and brain functionality, which many reporters are incorrectly claiming shows proof that ChatGPT is damaging people’s brains.
As an educator and writer, I am concerned by the growing popularity of so-called AI writing programs like ChatGPT, Claude, and Google Gemini, which when used injudiciously can take all of the struggle and reward out of writing, and lead to carefully written work becoming undervalued. But as a psychologist and lifelong skeptic, I am forever dismayed by sloppy, sensationalistic reporting on neuroscience, and how eager the public is to believe any claim that sounds scary or comes paired with a grainy image of a brain scan.
So I wanted to take a moment today to unpack exactly what the study authors did, what they actually found, and what the results of their work might mean for anyone concerned about the rise of AI — or the ongoing problem of irresponsible science reporting.
If you don’t have time for 4,000 lovingly crafted words, here’s the tl;dr.
The major caveats with this study are:
This paper has not been peer-reviewed, which is generally seen as an essential part of ensuring research quality in academia.
The researchers chose to get this paper into the public eye as quickly as possible because they are concerned about the use of LLMs, so their biases & professional motivations ought to be taken into account.
Its subject pool is incredibly small (N=54 total).
Subjects had no reason to care about the quality of the essays they wrote, so it’s hardly surprising the ones who were allowed to use AI tools didn’t try.
EEG scans monitored brain function only while subjects were writing the essays, not their overall cognitive abilities or their effort at tasks they actually cared about.
Google users were also found to utilize fewer cognitive resources and engage in less memory retrieval while writing their essays in this study, but nobody seems to hand-wring about search engines being used to augment writing anymore.
Cognitive ability & motivation were not measured in this study.
Changes in cognitive ability & motivation over time were not measured.
This was a laboratory study that cannot tell us how individuals actually use LLMs in their daily life, what the long-term effects of LLM use are, and if there are any differences in those who choose to use LLMs frequently and those who do not.
The researchers themselves used an AI model to analyze their data, so staunch anti-AI users don’t have support for their views here.
Brain-imaging research is seductive and authoritative-seeming to the public, making it more likely to get picked up (and misrepresented) by reporters.
Educators have multiple reasons to feel professionally and emotionally threatened by widespread LLM use, which influences the studies we design and the conclusions that we draw on the subject.
Students have very little reason to care about writing well right now, given the state of higher ed; if we want that to change, we have to reward slow, painstaking effort.
The stories we tell about our abilities matter. When individuals falsely believe they are “brain damaged” by using a technological tool, they will expect less of themselves and find it harder to adapt.
Lead author Nataliya Kosmyna and her colleagues at the MIT Media Lab set out to study how the use of large language models (LLMs) like ChatGPT affects students’ critical engagement with writing tasks, using electroencephalogram scans to monitor their brains’ electrical activity as they were writing. They also evaluated the quality of participants’ papers on several dimensions, and questioned them after the fact about what they remembered of their essays.
Each of the study’s 54 research subjects was brought in for four separate writing sessions over a period of four months. It was only during these writing tasks that students’ brain activity was monitored.
Prior research has shown that when individuals rely upon an LLM to complete a cognitively demanding task, they devote fewer of their own cognitive resources to that task, and use less critical thinking in their approach to that task. Researchers call this process of handing over the burden of intellectually demanding activities to a large language model cognitive offloading, and there is a concern voiced frequently in the literature that repeated cognitive offloading could diminish a person’s actual cognitive abilities over time or create AI dependence.
Now, there is a big difference between deciding not to work very hard on an activity because technology has streamlined it, and actually losing the ability to engage in deeper thought, particularly since the tasks that people tend to offload to LLMs are repetitive, tedious, or unfulfilling ones that they’re required to complete for work and school and don’t otherwise value for themselves. It would be foolhardy to assume that simply because a person uses ChatGPT to summarize an assigned reading for a class that they have lost the ability to read, just as it would be wrong to assume that a person can’t add or subtract because they have used a calculator.
However, it’s unquestionable that LLM use has exploded across college campuses in recent years and rendered a great many introductory writing assignments irrelevant, and that educators are feeling the dread that their profession is no longer seen as important. I have written about this dread before — though I trace it back to government disinvestment in higher education and commodification of university degrees that dates back to Reagan, not to ChatGPT.
College educators have been treated like underpaid quiz-graders and degrees have been sold with very low barriers to completion for decades now, I have argued, and the rise of students submitting ChatGPT-written essays to be graded using ChatGPT-generated rubrics is really just a logical consequence of the profit motive that has already ravaged higher education. But I can’t say any of these longstanding economic developments have been positive for the quality of the education that we professors give out (or that it’s helped students remain motivated in their own learning process), so I do think it is fair that so many academics are concerned that widespread LLM use could lead to some kind of mental atrophy over time.
This study, however, is not evidence that any lasting cognitive atrophy has happened. Testing that would take a far more robust, long-term study design tracking subjects’ cognitive engagement across a variety of tasks that they actually care about.
Rather, Kosmyna and colleagues brought their 54 study participants into the lab four separate times, and assigned them SAT-style essays to write, in exchange for a $100 stipend. The study participants did not earn any grade, and having a high-quality essay did not earn them any additional compensation. There was, therefore, very little personal incentive to try very hard at the essay-writing task, beyond whatever the participant already found gratifying about it.
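To put the sample-size caveat in rough numbers, here is a minimal, illustrative power-calculation sketch (not from the paper; it assumes roughly 18 participants per condition out of the 54 total, and uses conventional Cohen’s d benchmarks rather than any effect sizes the authors report).

```python
# Illustrative sketch only: approximate statistical power for a between-groups
# comparison with ~18 participants per condition (54 split across three groups).
# The effect sizes are conventional benchmarks, not values from the study.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    power = analysis.solve_power(effect_size=d, nobs1=18, alpha=0.05)
    print(f"{label} effect (d={d}): power ~ {power:.2f}")

# Groups this small give a reasonable chance of detecting only large effects;
# subtler differences between conditions would mostly go undetected.
```

In other words, a sample this size is only equipped to catch big, obvious differences between the writing conditions, which is one more reason to be cautious about sweeping conclusions.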
I wrote all about the viral study supposedly linking AI use to cognitive decline, and the problem of irresponsible, fear-mongering science reporting. You can read the full piece for free on my Substack.
792 notes · View notes
lise-thethinker · 14 minutes ago
Text
babe wake up, full canon accurate and up-to-date map of the star wars galaxy just dropped
4K notes · View notes
lise-thethinker · 15 minutes ago
Text
Also preserved in our archive
Vaccination remains safe. Surprise, surprise.
By Stephanie Soucheray, MA
A new research letter published yesterday in JAMA Network Open shows no new safety concerns or reactogenicity signals among babies and toddlers who received their first COVID-19 vaccines by the age of 2.
The research comes from 5,644 US children who received at least 1 dose of mRNA COVID-19 vaccine between the ages of 6 and 24 months. All of the children had mothers who received a COVID-19 vaccination within 30 days before their last menstrual period or during pregnancy.
The mean age at first dose was 12.4 months. Twenty percent of the children included in the study received their COVID vaccine with at least one other vaccine administered at the same time.
Postvaccination reactions were reported for 46.7% of children, with 21.1% and 38.8% experiencing local and systemic reactions, respectively, the authors said. Only 18 children in the study (0.3%) experienced reactions described by participants as severe.
Fussiness most common reaction
The most common reactions were fussiness (30%), local reaction (21.1%), and fever (13.8%). Six participants reported a seizure or febrile seizure after receiving COVID-19 vaccination. No deaths were reported.
Reactions were more common after the first dose of COVID vaccine than the second.
"This difference may reflect maternal vaccination; the first COVID-19 vaccination in our cohort may be some children’s second immunological encounter with a COVID-19 vaccine," the authors said. "These findings add evidence indicating that mild or moderate local and systemic reactions may be experienced, but severe reactions and serious adverse events are rare."
Study link: jamanetwork.com/journals/jamanetworkopen/fullarticle/2827028
106 notes · View notes
lise-thethinker · 17 minutes ago
Text
Tumblr media
18K notes · View notes
lise-thethinker · 2 hours ago
Text
Tumblr media Tumblr media
like mother like son passed on her love for tall big tiddy goths
20K notes · View notes
lise-thethinker · 2 hours ago
Text
i love you black trans people. i love you asian trans people from all over asia (not just east asia). i love you hispanic trans people. i love you indigenous trans people. i love you poc trans people. you're doing great, i promise you, and i'm so fucking sorry the community erases you as much as it does.
15K notes · View notes
lise-thethinker · 3 hours ago
Text
Tumblr media
28K notes · View notes
lise-thethinker · 3 hours ago
Text
doctor he wants to talk about his ocs but has nothing concrete to say about them. and yes it's fatal.
24K notes · View notes
lise-thethinker · 6 hours ago
Text
The irony of going to a Frida Kahlo exhibit as a disabled person with chronic pain
and there just being not remotely enough places to sit through the exhibition
There were no spots to rest in the room which showed off her prosthetic leg and crutches and medical corsets
There were no spots to rest in the room that displayed her various medications
4K notes · View notes
lise-thethinker · 6 hours ago
Text
THE FUCKING ICE REPORT AAAAAAAAAAA
11K notes · View notes
lise-thethinker · 6 hours ago
Text
Tumblr media
10K notes · View notes
lise-thethinker · 6 hours ago
Text
disabled people are worth the extra effort it takes to accommodate them. it's worth going the longer route together that's wheelchair accessible. it's worth the time and effort to research places to eat that work with specific allergies or food intolerances. and it's worth the price if those places or that food is more expensive. it's worth going further to buy something gluten-free or nut free or dairy free for your party or get together. it's worth adding extra travel time, or planning an overnight stay instead of a day trip. it's worth learning how to administer medication for people who need it. it's worth learning how to call ahead to check if somewhere is accessible. it's worth the whole friend group going to the film screening with captions even if only 1 person needs it. disability often means having to do more work, more planning, take up more space, more money, more time. it's false to say it's as easy as having an abled companion, but that effort is worth taking. it's not wasted because it's all spent in favour of a disabled person, and that's always worthwhile
9K notes · View notes
lise-thethinker · 6 hours ago
Text
Just saw a video of a white American girl "Explaining to her white American boyfriend how the man-or-bear thing works" where she asks him if he would rather fight a bear, or shower in a prison shower, and he goes grey and goes "oh no I get a horrible feeling in my tummy thinking of being trapped in a prison shower with those thugs!" And she says that's good that he's feeling now how women feel all the time and all I can think is... Mate, that is just your racism. Your idea that the "thugs" in the imaginary prison would immediately want to (implied: sexually) violate you, your mental image of Black and racialised men as being harder to reason with and get along with than a literal bear, isn't a sign of you being a good and sweet and empathetic boyfriend. And your girl immediately picking imagined sexually violent Black men as the Scary Thing isn't either; it's a sign that both of you are racist.
779 notes · View notes
lise-thethinker · 6 hours ago
Text
The reading comprehension and overall common sense on this website is piss poor.
1M notes · View notes
lise-thethinker · 6 hours ago
Video
177K notes · View notes
lise-thethinker · 8 hours ago
Text
broooo did you seriously disturb my eternal rest & bring me back to this mortal coil just because my ancient enemy the eternal night has returned? after i sealed it away and everything? which one of you tampered with my binding runes 🙄
37K notes · View notes