Hello there! I go by many names, but when in doubt, call me Coelpts! 20+ times around the sun. Mirrorpronomial- Please use your pronouns for me! More About Me and My Links
Text
Taylor Swift is literally Elon Musk for liberal women
11K notes
Text
Transcript: "Born in the darkness to live in the darkness, that's the ninja's creed!"
Please do not spoil who the character or voice actor is in the replies or reblogs. Thank you
I am on vacation! Answers may not be posted right away--please remain patient.
#of course the one quote youd never hear because its mixed in with like a half dozen other sound effects#but i know who you are. i can never forget
79 notes
Text
Transcript: "Eureka!"
Please do not spoil who the character or voice actor is in the replies or reblogs. Thank you
I am on vacation! Answers may not be posted right away--please remain patient.
288 notes
Text
Transcript: "[sigh] Do I gotta do everything around here?"
Please do not spoil who the character or voice actor is in the replies or reblogs. Thank you
I am on vacation! Answers may not be posted right away--please remain patient.
79 notes
Text
The origin of "chef" as an English word meaning "one who cooks" comes from the French term chef de cuisine, a title still used in restaurants to this day to denote a cook in a managerial role. What makes this an interesting bit of etymology is that while in English "chef" is only ever used in the context of cooking and restaurants, in French chef just means "chief", "head", or "leader", and there are many common titles in French that include this word. A Head of State is the chef d'État, a musical conductor is the chef d'orchestre, a business owner is the chef d'entreprise, and so on and so forth. So with this in mind, one could make the argument that as a gender-neutral term denoting authority, "chef" could potentially have utility in BDSM as a title.
8K notes
Text
Transcript: "I can't get over how brave you were to destroy all those Securitrons at the fort, you know? It's just gonna make everything so much more, uhhhhhh…challenging! Yeah! …Challenging!"
Please do not spoil who the character or voice actor is in the replies or reblogs. Thank you
I am on vacation! Answers may not be posted right away--please remain patient.
111 notes
Text
Bought my uncle a burger and milkshake in exchange for letting me disrupt the holiest day of the week, NFL Sunday Football, so I could install a Pi-hole and free the household of ads... The thing about the specific boomers I live with is that they told me not to trust people on the Internet, but they do not understand the algorithm or online advertising, and they think that Facebook has their best interests at heart. And every time I have tried to explain to them that no, blorbo from my dashboard is not selling my kidneys on the dark web, but Google from your capitalism is definitely selling your web searches to every advertising company on the planet, they think I am paranoid. How could their personal friend Mark Zuckerberg want anything bad to happen to them, etc. I am fighting battles I did not know existed!!!
34K notes
Text
Black people in the U.S. are seven times more likely to be falsely convicted of a serious crime like murder than white people, according to a new report published Tuesday by the National Registry of Exonerations. The finding is based on an analysis of exonerations for serious crimes in the U.S. over the last four decades, which found that Black people make up less than 14% of the U.S. population but account for 53% of exonerations in the country.
3K notes
Text
There’s a new (unreviewed draft of a) scientific article out, examining the relationship between Large Language Model (LLM) use and brain functionality, which many reporters are incorrectly claiming shows proof that ChatGPT is damaging people’s brains.
As an educator and writer, I am concerned by the growing popularity of so-called AI writing programs like ChatGPT, Claude, and Google Gemini, which when used injudiciously can take all of the struggle and reward out of writing, and lead to carefully written work becoming undervalued. But as a psychologist and lifelong skeptic, I am forever dismayed by sloppy, sensationalistic reporting on neuroscience, and how eager the public is to believe any claim that sounds scary or comes paired with a grainy image of a brain scan.
So I wanted to take a moment today to unpack exactly what the study authors did, what they actually found, and what the results of their work might mean for anyone concerned about the rise of AI — or the ongoing problem of irresponsible science reporting.
If you don’t have time for 4,000 lovingly crafted words, here’s the tl;dr.
The major caveats with this study are:
This paper has not been peer-reviewed, which is generally seen as an essential part of ensuring research quality in academia.
The researchers chose to get this paper into the public eye as quickly as possible because they are concerned about the use of LLMs, so their biases & professional motivations ought to be taken into account.
Its subject pool is incredibly small (N=54 total).
Subjects had no reason to care about the quality of the essays they wrote, so it’s hardly surprising the ones who were allowed to use AI tools didn’t try.
EEG scans only monitored brain function while writing the essays, not subjects’ overall cognitive abilities, or effort at tasks they actually cared about.
Google users were also found to utilize fewer cognitive resources and engage in less memory retrieval while writing their essays in this study, but nobody seems to hand-wring about search engines being used to augment writing anymore.
Cognitive ability & motivation were not measured in this study.
Changes in cognitive ability & motivation over time were not measured.
This was a laboratory study that cannot tell us how individuals actually use LLMs in their daily life, what the long-term effects of LLM use are, and if there are any differences in those who choose to use LLMs frequently and those who do not.
The researchers themselves used an AI model to analyze their data, so staunch anti-AI readers won't find support for their views here.
Brain-imaging research is seductive and authoritative-seeming to the public, making it more likely to get picked up (and misrepresented) by reporters.
Educators have multiple reasons to feel professionally and emotionally threatened by widespread LLM use, which influences the studies we design and the conclusions that we draw on the subject.
Students have very little reason to care about writing well right now, given the state of higher ed; if we want that to change, we have to reward slow, painstaking effort.
The stories we tell about our abilities matter. When individuals falsely believe they are “brain damaged” by using a technological tool, they will expect less of themselves and find it harder to adapt.
Head author Nataliya Kosmyna and her colleagues at the MIT Media Lab set out to study how the use of large language models (LLMs) like ChatGPT affects students’ critical engagement with writing tasks, using electroencephalogram scans to monitor their brains’ electrical activity as they were writing. They also evaluated the quality of participants’ papers on several dimensions, and questioned them after the fact about what they remembered of their essays.
Each of the study’s 54 research subjects was brought in for four separate writing sessions over a period of four months. It was only during these writing tasks that students’ brain activity was monitored.
Prior research has shown that when individuals rely upon an LLM to complete a cognitively demanding task, they devote fewer of their own cognitive resources to that task, and use less critical thinking in their approach to that task. Researchers call this process of handing over the burden of intellectually demanding activities to a large language model cognitive offloading, and there is a concern voiced frequently in the literature that repeated cognitive offloading could diminish a person’s actual cognitive abilities over time or create AI dependence.
Now, there is a big difference between deciding not to work very hard on an activity because technology has streamlined it, and actually losing the ability to engage in deeper thought, particularly since the tasks that people tend to offload to LLMs are repetitive, tedious, or unfulfilling ones that they’re required to complete for work and school and don’t otherwise value for themselves. It would be foolhardy to assume that, simply because a person uses ChatGPT to summarize an assigned reading for a class, they have lost the ability to read, just as it would be wrong to assume that a person can’t add or subtract because they have used a calculator.
However, it’s unquestionable that LLM use has exploded across college campuses in recent years and rendered a great many introductory writing assignments irrelevant, and that educators are feeling the dread that their profession is no longer seen as important. I have written about this dread before — though I trace it back to government disinvestment in higher education and commodification of university degrees that dates back to Reagan, not to ChatGPT.
College educators have been treated like underpaid quiz-graders and degrees have been sold with very low barriers to completion for decades now, I have argued, and the rise of students submitting ChatGPT-written essays to be graded using ChatGPT-generated rubrics is really just a logical consequence of the profit motive that has already ravaged higher education. But I can’t say any of these longstanding economic developments have been positive for the quality of the education that we professors give out (or that it’s helped students remain motivated in their own learning process), so I do think it is fair that so many academics are concerned that widespread LLM use could lead to some kind of mental atrophy over time.
This study, however, is not evidence that any lasting cognitive atrophy has happened. It would take a far more robust, long-term study design tracking subjects’ cognitive engagement against a variety of tasks that they actually care about in order to test that.
Rather, Kosmyna and colleagues brought their 54 study participants into the lab four separate times, and assigned them SAT-style essays to write, in exchange for a $100 stipend. The study participants did not earn any grade, and having a high-quality essay did not earn them any additional compensation. There was, therefore, very little personal incentive to try very hard at the essay-writing task, beyond whatever the participant already found gratifying about it.
I wrote all about the viral study supposedly linking AI use to cognitive decline, and the problem of irresponsible, fear-mongering science reporting. You can read the full piece for free on my Substack.
1K notes
Text
My take is that if you’re going to break up a happy healthy committed straight ship to make room for a gay one (or vice versa, but lbr the former happens more often in fandom) then you HAVE to keep the characters’ insanity for each other intact. Like ok yes sure we can handwave “it was a different time and we were younger and now that things are settled it’s for the best that we split” sure whatever BUT! I will not accept these guys becoming ambivalent to each other. You MUST maintain the fact that they would absolutely kill someone if the other asks. if you take away the romantic attraction then the years of history and emotion remain and now instead of kissing each other the blorbos just have to be really intense about their ex’s health and safety instead.
2K notes
Text
i think one of the most important things you learn about making connections with others is that a significant portion of the time people just do not know theyre doing what theyre doing
189K notes
Text
if you're in Europe PLEASE consider signing the Stop Destroying Games initiative. the deadline is July 31st 2025. i've posted about it before; it aims to create legislation for publishers to stop killing the games you pay for and to provide an end-of-life plan for live-service products. thank you!!!
6K notes
Text
I'm keeping an eye out for heat stroke in my area and I can't figure out what a full body flush would look like on dark skin since all the pictures are just fake training pictures. Anyone have video/pics of a heat stroke flush on black skin?
18K notes
Text
It's very surprising that I never drew a good hint of his fangs in my fanart until now, considering I held this thing in my folder unfinished for like 5 days
8K notes