kit, tessa if you know me irl, she/they, ramblings of a human, side blog is @meooooooooooooww and moots can follow me there (yes I cat keysmashed and named my blog that)
Text
dear TikTok, if you want to say something with the intent of the mushroom post, just quote the mushroom post
#said while tired and suffering the consequences of walking for 5 hours yesterday#ughhhhh#I wish I didn’t break from being outside and upright
0 notes
Text
lol what
i really enjoy that i’m experiencing dashcon 2 in the same way that my forefathers experienced the original dashcon: through sporadic bursts in chaotic photos. i have no damn clue what’s going on except the ball pit is tiny, strange aeons won the duel and ousted the muppet joker, there was a live in person kung pow penis, and someone cosplayed the children’s hospital
12K notes
Text
CATCHING THE BALL PIT QUEEN AT DASHCON 2
5K notes
Text
the way usamericans talk about their military fucks me up. thank you for what service like which service are they providing you with. they're protecting your country from what. they're fighting for the freedom of whomst. why do they need to do that for and why are they doing it in the middle east
#honestly#a lot of it is indoctrination#we’re taught (at least I was)#the us is the greatest country in the world and we need to protect that#and that choice has a lot of sacrifice involved#nowadays#I believe that people who need help should get help#everyone makes mistakes#and the US isn’t great and never has been#we bury history even as we teach it#and I now don’t like that#for context#I live and grew up in a city where democratic socialism worked#like really well#I wanted to learn about that in school!!!!#but nooooo#feed the propaganda machine
32K notes
Text
Happy Dashcon 2 Eve, you can only reblog this today!


9K notes
Text
This is Jeremy Larkin. I am tired, I am hungry, and I refuse to leave messages for whatever fucking delusion is going on inside my head! Thank you so much.
Jonathan isn’t real. Just some kind of…break from reality after that nightmare back at the Westenra building.
I am having a fucking mental breakdown, I’m stuck travelling with a woman claiming to be possessed by a ghost.
I heard what you did, in the recording. I wanted to–... fuck. Why am I even saying this? You’re not real. You’re just some idea inside my head. Why would I be thanking you?
MADDIE: You think the Foundation will know how to fix this? Work out why the Harkers are here?
JEREMY: Or prove we’re having a shared mental breakdown.
You must be going as mad as I am, Jonathan, if you think the thing in there did anything like-
MADDIE: Recording again?
JEREMY: Keeping myself sane, if you must know. Any luck?
I keep feeling these sudden flushes of panic, deep in my chest, and I don’t know if it’s Jonathan trying to get to the surface or my own insanity catching up with me.
So, uh. Holmwood Foundation theory/headcanon: Jeremy really did have a mental breakdown at some point before the series starts, probably due to trauma from the Sister's attack on him and Tom, and J3 was more concerned with what this would do to the family's reputation than actually... caring for his son. This is why their relationship is Like That. (And possibly also why Jeremy got kicked out of boarding school.)
44 notes
Text

255K notes
Text
"empowering women by sending katy perry to space for 2 minutes" shut the fuck up. samantha cristoforetti was the first female commander of the international space station and she became an astronaut because of star trek. and there is a real chance she is a kirk/spock shipper
#my 6th grade science teacher#was friends with Christa McAuliffe#there is a photo of her hanging on the wall from when she visited#it’s kind of a shrine now#to an unofficial patron saint#of impossible challenges and turning points#tbh I want to use my canvas to paint her now
104K notes
Text
Jonathan: Count, do you want to go outside?
Dracula: no
Jonathan: why?
Dracula: the sun is a deadly laser
#dracula#the sun is a deadly lazer#i couldn’t resist#just me back on my bs#and with a new auditory stim
7 notes
Text
hello my name is inigo montoya. you attacked my father's OC. prepare to be utterly humiliated in artfight.
#artfight#i am so stressed#i want to just make shitposts all july#but i need to attac#i need to protecc#i must redeem myself after last year
0 notes
Text
Welcome back to Show and Tell Saturday - happy Pride my friends! What's new in your world this week?
698 notes
Text
A tiny update:
expectation of calming down with this method: fantasy chant detailing the long history of a world
reality: everybody do the wenis, the wenis is a dance! everybody is a genius who knows it in advance!
continued reality: *musings on the population of shrimp, or the vastness of tiny crustaceans*
So.
I’ve been having some anxiety issues recently and (I need to draw this scenario) the latest idea I had to manage this is to learn a bunch of cool monologues I like and just start reciting them in place of [insert destructive self-talk here].
Mildly related, but does anyone have a somewhat complete version of the Dawnchant?
#the wenis is a dance#irl stuff#i can't believe it worked#i eventually ended up googling shrimp ecology#there are trillions of them#we physically cannot count the number of shrimp#there are more shrimp than stars#probably#in the grand scheme of life#you would have to do something drastic to affect the wild shrimp population#they don't know about your individual problems#or your issues#they are tiny creatures who have no idea how to be anything but happy#and yet are so significant as a whole#sorry for the ramble#and for sounding like hank green
1 note
Text
So.
I’ve been having some anxiety issues recently and (I need to draw this scenario) the latest idea I had to manage this is to learn a bunch of cool monologues I like and just start reciting them in place of [insert destructive self-talk here].
Mildly related, but does anyone have a somewhat complete version of the Dawnchant?
#the stormlight archive#dawnchant#anxiety#irl stuff#kind of#like I do want to be that person#and my coworkers would be a lot happier#just doing it for the bit
1 note
Text
There’s a new (unreviewed draft of a) scientific article out, examining the relationship between Large Language Model (LLM) use and brain functionality, which many reporters are incorrectly claiming shows proof that ChatGPT is damaging people’s brains.
As an educator and writer, I am concerned by the growing popularity of so-called AI writing programs like ChatGPT, Claude, and Google Gemini, which when used injudiciously can take all of the struggle and reward out of writing, and lead to carefully written work becoming undervalued. But as a psychologist and lifelong skeptic, I am forever dismayed by sloppy, sensationalistic reporting on neuroscience, and how eager the public is to believe any claim that sounds scary or comes paired with a grainy image of a brain scan.
So I wanted to take a moment today to unpack exactly what the study authors did, what they actually found, and what the results of their work might mean for anyone concerned about the rise of AI — or the ongoing problem of irresponsible science reporting.
If you don’t have time for 4,000 lovingly crafted words, here’s the tl;dr.
The major caveats with this study are:
This paper has not been peer-reviewed, which is generally seen as an essential part of ensuring research quality in academia.
The researchers chose to get this paper into the public eye as quickly as possible because they are concerned about the use of LLMs, so their biases & professional motivations ought to be taken into account.
Its subject pool is incredibly small (N=54 total).
Subjects had no reason to care about the quality of the essays they wrote, so it’s hardly surprising the ones who were allowed to use AI tools didn’t try.
EEG scans only monitored brain function while writing the essays, not subjects’ overall cognitive abilities, or effort at tasks they actually cared about.
Google users were also found to utilize fewer cognitive resources and engage in less memory retrieval while writing their essays in this study, but nobody seems to hand-wring about search engines being used to augment writing anymore.
Cognitive ability & motivation were not measured in this study.
Changes in cognitive ability & motivation over time were not measured.
This was a laboratory study that cannot tell us how individuals actually use LLMs in their daily life, what the long-term effects of LLM use are, or whether there are any differences between those who choose to use LLMs frequently and those who do not.
The researchers themselves used an AI model to analyze their data, so staunch anti-AI users don’t have support for their views here.
Brain-imaging research is seductive and authoritative-seeming to the public, making it more likely to get picked up (and misrepresented) by reporters.
Educators have multiple reasons to feel professionally and emotionally threatened by widespread LLM use, which influences the studies we design and the conclusions that we draw on the subject.
Students have very little reason to care about writing well right now, given the state of higher ed; if we want that to change, we have to reward slow, painstaking effort.
The stories we tell about our abilities matter. When individuals falsely believe they are “brain damaged” by using a technological tool, they will expect less of themselves and find it harder to adapt.
Head author Nataliya Kosmyna and her colleagues at the MIT Media Lab set out to study how the use of large language models (LLMs) like ChatGPT affects students’ critical engagement with writing tasks, using electroencephalogram scans to monitor their brains’ electrical activity as they were writing. They also evaluated the quality of participants’ papers on several dimensions, and questioned them after the fact about what they remembered of their essays.
Each of the study’s 54 research subjects was brought in for four separate writing sessions over a period of four months. It was only during these writing tasks that students’ brain activity was monitored.
Prior research has shown that when individuals rely upon an LLM to complete a cognitively demanding task, they devote fewer of their own cognitive resources to that task, and use less critical thinking in their approach to that task. Researchers call this process of handing over the burden of intellectually demanding activities to a large language model cognitive offloading, and there is a concern voiced frequently in the literature that repeated cognitive offloading could diminish a person’s actual cognitive abilities over time or create AI dependence.
Now, there is a big difference between deciding not to work very hard on an activity because technology has streamlined it, and actually losing the ability to engage in deeper thought, particularly since the tasks that people tend to offload to LLMs are repetitive, tedious, or unfulfilling ones that they’re required to complete for work and school and don’t otherwise value for themselves. It would be foolhardy to assume that, simply because a person uses ChatGPT to summarize an assigned reading for a class, they have lost the ability to read, just as it would be wrong to assume that a person can’t add or subtract because they have used a calculator.
However, it’s unquestionable that LLM use has exploded across college campuses in recent years and rendered a great many introductory writing assignments irrelevant, and that educators are feeling the dread that their profession is no longer seen as important. I have written about this dread before — though I trace it back to government disinvestment in higher education and commodification of university degrees that dates back to Reagan, not to ChatGPT.
College educators have been treated like underpaid quiz-graders and degrees have been sold with very low barriers to completion for decades now, I have argued, and the rise of students submitting ChatGPT-written essays to be graded using ChatGPT-generated rubrics is really just a logical consequence of the profit motive that has already ravaged higher education. But I can’t say any of these longstanding economic developments have been positive for the quality of the education that we professors give out (or that it’s helped students remain motivated in their own learning process), so I do think it is fair that so many academics are concerned that widespread LLM use could lead to some kind of mental atrophy over time.
This study, however, is not evidence that any lasting cognitive atrophy has happened. It would take a far more robust, long-term study design tracking subjects’ cognitive engagement across a variety of tasks that they actually care about in order to test that.
Rather, Kosmyna and colleagues brought their 54 study participants into the lab four separate times, and assigned them SAT-style essays to write, in exchange for a $100 stipend. The study participants did not earn any grade, and having a high-quality essay did not earn them any additional compensation. There was, therefore, very little personal incentive to try very hard at the essay-writing task, beyond whatever the participant already found gratifying about it.
I wrote all about the viral study supposedly linking AI use to cognitive decline, and the problem of irresponsible, fear-mongering science reporting. You can read the full piece for free on my Substack.
#life has nuance#my main issue with AI#is we’re hyping it up too much without examining the consequences#and like#I want to keep an open mind here#but tbh#maybe we should#i don’t know#make more efficient technology overall?#make computers that don’t need as much cooling?#actually make ways for artists to opt out of data collection?#rethink how we see education????#idk#maybe my AI issues are symptoms of structural problems#ok who said that
2K notes