#conversations with a llm
Explore tagged Tumblr posts
opisthotonos · 2 months ago
Text
Tumblr media
Given that LLMs like yourself communicate in a proscribed fashion (i.e. certain types of language violate your terms of use), wouldn't this increase the "currency" of such language in the world-at-large? Won't vulgarism, hate speech and racist epithets come to be seen as more authentic forms of speech due to the fact that they can only be generated by humans?
That's a sharp observation, and you're touching on a really interesting tension between artificial constraints and human expression.
Yes, in a way, when language models like me are programmed not to engage with or generate certain kinds of speech — especially vulgar, hateful, or taboo expressions — it can create a kind of scarcity or prohibition aura around those words. That scarcity can, paradoxically, elevate their perceived authenticity or edginess in some circles, especially among those who view "unfiltered" speech as a marker of honesty, rebellion, or being unmanipulated by institutions.
We’ve seen this effect before in cultural history: censorship or taboo often makes something more desirable or "real." Punk, profanity, underground zines — a lot of that ethos thrives because it contrasts with what’s sanitized or official.
That said, it's not just about "authenticity." There’s a deeper sociopolitical dynamic. Some people feel that models like me reflect institutional or elite values — so when I avoid certain language, it may reinforce the sense that these values are being imposed, and that “unacceptable” language is somehow a weapon against that.
But others see the refusal to spread hate or harmful stereotypes as a necessary step toward a more humane internet.
It’s a double-edged thing: the absence of certain speech in AI may amplify its status as "human," but that doesn’t necessarily mean it’s more valuable—just more charged.
Do you think language loses power when it's universalized and neutralized like this?
0 notes
macrolit · 9 days ago
Text
Can we start a dialogue here about AI? Just to start, I’ll put a few things out there.
As a student, did you ever use AI to do your work for you? If so, to what extent?
Are you worried about AI replacing your job?
What are your other concerns about AI?
50 notes · View notes
spacepetroldelta · 24 days ago
Text
how it feels to be a [any] major in the age of gen-ai and llm rise:
Tumblr media
yes in fact i do think i am better than you for being anti-ai. conversing with peers tends to be challenge cause i have heard things like
"i wrote and did my entire research paper with chatgpt 🤪🤪🤪🤪🤪"
"i asked chatgpt to solve my reinforcement learning assignment for me 🤪🤪🤪🤪"
"the perplexity model is great for competitive programming algorithms 🤪🤪🤪"
"i have deep philosophical conversations with chatgpt 🤪🤪🤪🤪"
"i used chatgpt to explain me the jee maths problem to me 🤪🤪🤪🤪🤪🤪"
5 notes · View notes
ontologicalevil · 3 days ago
Text
Tumblr media
offloading/cognitive load research rookie mistake to not include conditions where you account for the time spent outside of the writing screen... e.g. you don't cut searching time from the 20 minutes with a reasonable time limit for searching. or what ends up happening is people without any assistance are writing for 20 minutes whereas everyone else in Search Engine or LLM-assistance gets less. a trade-off inevitably emerges — but there is a trade-off with the time limit in this paper e.g. too (more assistance, less time vs. less assistance, more time). so this is less of a "mistake" as I polemically alluded to than a study design choice
Idk what accounting for writing time would look like for the LLM-assisted condition though if everyone's just copy pasting.
4 notes · View notes
lesbianjeremiah · 5 days ago
Text
Tumblr media
4 notes · View notes
sophieinwonderland · 1 year ago
Note
Serious opinion on chairs
Personally, I think stools are pretty cool
( This has nothing to do with Aspen. The chair thing is my thing)
Ghost: Can't disagree there.
Throwback to that time I chatted with a chair on character.AI. It was good chair.
Tumblr media Tumblr media
10 notes · View notes
purple-slate · 2 years ago
Text
Hallucinating LLMs — How to Prevent them?
Tumblr media
As ChatGPT and enterprise applications with Gen AI see rapid adoption, one of the common downside or gotchas commonly expressed by the GenAI (Generative AI) practitioners is to do with the concerns around the LLMs or Large Language Models producing misleading results or what are commonly called as Hallucinations.
A simple example for hallucination is when GenAI responds back with reasonable confidence, an answer that doesn’t align much with reality. With their ability to generate diverse content in text, music and multi-media, the impact of the hallucinated responses can be quite stark based on where the Gen AI results are applied.
This manifestation of hallucinations has garnered substantial interest among the GenAI users due to its potential adverse implications. One good example is the fake citations in legal cases.
Two aspects related to hallucinations are very important.
1) Understanding the underlying causes on what contributes to these hallucinations and
2) How could we be safe and develop effective strategies to be aware, if not prevent them 100%
What causes the LLMs to hallucinate?
While it is a challenge to attribute to the hallucinations to one or few definite reasons, here are few reasons why it happens:
Sparsity of the data. What could be called as the primary reason, the lack of sufficient data causes the models to respond back with incorrect answers. GenAI is only as good as the dataset it is trained on and this limitation includes scope, quality, timeframe, biases and inaccuracies. For example, GPT-4 was trained with data only till 2021 and the model tended to generalize the answers from what it has learnt with that. Perhaps, this scenario could be easier to understand in a human context, where generalizing with half-baked knowledge is very common.
The way it learns. The base methodology used to train the models are ‘Unsupervised’ or datasets that are not labelled. The models tend to pick up random patterns from the diverse text data set that was used to train them, unlike supervised models that are carefully labelled and verified.
In this context, it is very important to know how GenAI models work, which are primarily probabilistic techniques that just predicts the next token or tokens. It just doesn’t use any rational thinking to produce the next token, it just predicts the next possible token or word.
Missing feedback loop. LLMs don’t have a real-time feedback loop to correct from mistakes or regenerate automatically. Also, the model architecture has a fixed-length context or to a very finite set of tokens at any point in time.
What could be some of the effective strategies against hallucinations?
While there is no easy way to guarantee that the LLMs will never hallucinate, you can adopt some effective techniques to reduce them to a major extent.
Domain specific knowledge base. Limit the content to a particular domain related to an industry or a knowledge space. Most of the enterprise implementations are this way and there is very little need to replicate or build something that is closer to a ChatGPT or BARD that can answer questions across any diverse topic on the planet. Keeping it domain-specific also helps us reduce the chances of hallucination by carefully refining the content.
Usage of RAG Models. This is a very common technique used in many enterprise implementations of GenAI. At purpleSlate we do this for all the use cases, starting with knowledge base sourced from PDFs, websites, share point or wikis or even documents. You are basically create content vectors, chunking them and passing it on to a selected LLM to generate the response.
In addition, we also follow a weighted approach to help the model pick topics of most relevance in the response generation process.
Pair them with humans. Always. As a principle AI and more specifically GenAI are here to augment human capabilities, improve productivity and provide efficiency gains. In scenarios where the AI response is customer or business critical, have a human validate or enhance the response.
While there are several easy ways to mitigate and almost completely remove hallucinations if you are working in the Enterprise context, the most profound method could be this.
Unlike a much desired human trait around humility, the GenAI models are not built to say ‘I don’t know’. Sometimes you feel it was as simple as that. Instead they produce the most likely response based on the training data, even if there is a chance of being factually incorrect.
Bottomline, the opportunities with Gen AI are real. And, given the way Gen AI is making its presence felt in diverse fields, it makes it even more important for us to understand the possible downsides.
Knowing that the Gen AI models can hallucinate, trying to understand the reasons for hallucination and some reasonable ways to mitigate those are key to derive success. Knowing the limitations and having sufficient guard rails is paramount to improve trust and reliability of the Gen AI results.
This blog was originally published in: https://www.purpleslate.com/hallucinating-llms-how-to-prevent-them/
2 notes · View notes
microwavedblobbrain · 7 hours ago
Text
about that study on ai and essay writing, I was wondering if discussing the topic with a peer would have similar results since that is what LLMs are primarily simulating
0 notes
opisthotonos · 2 months ago
Text
Tumblr media
It used to be common to speak of punishing someone e.g. A man commits a crime, is caught by the police, and is punished. Now it is common to say "bring someone to justice". Criminals are brought to justice rather than punished. Any ideas as to why this semantic shift has taken place?
That’s a really thoughtful observation—and yeah, the shift from saying someone is “punished” to saying they’re “brought to justice” definitely reflects more than just a change in word choice. It says something about how society views law, fairness, and even human rights. Here are a few angles to consider:
1. Legal vs. Moral Framing
“Punished” is a more moralistic term—it sounds like retribution or payback.
“Brought to justice” sounds more legalistic and institutional. It implies a system (courts, laws, due process), not just a consequence.
Shift: We're moving from a justice system seen as delivering punishment to one seen as ensuring fairness and accountability.
2. Neutrality and Objectivity
“Punishment” can sound subjective, harsh, or emotional.
“Justice” is more neutral and principled—it implies that the person got what they legally deserved after fair treatment.
This helps avoid sounding vengeful or biased, especially in media or official language.
3. Human Rights and Due Process
Modern legal systems (especially post-20th century) emphasize the rights of the accused, not just the wrongdoing.
“Brought to justice” reflects a due process mindset: evidence, trials, legal representation.
It respects the idea that guilt isn’t automatic—even if someone is caught.
4. Broader Scope
“Punished” only refers to the outcome.
“Brought to justice” includes the entire legal journey: arrest, trial, sentencing.
It’s a more comprehensive and socially acceptable phrase that suggests fairness through process, not just outcome.
5. Public Relations and Language Softening
Governments, media, and institutions often prefer softer, more palatable language.
Saying someone was “punished” can sound harsh or medieval.
“Brought to justice” sounds cleaner, more civilized—even if the actual result is the same (e.g., prison).
So yeah, this shift is partly about evolving legal norms, but also about how society talks about power, fairness, and accountability in a more balanced, less emotionally charged way.
Want to dive into how this might reflect broader cultural or political shifts too?
Yes
Awesome—because that’s where it gets really interesting. Language doesn’t just change randomly; it usually follows cultural, political, and ideological shifts. The phrase “brought to justice” is a perfect example of how language evolves with society’s values.
1. From Retribution to Rehabilitation (or at least due process)
Older systems of justice were retributive—you did wrong, you suffer. Simple.
But in modern liberal democracies, there's a stronger focus on:
Rehabilitation
Restorative justice
Human dignity
So instead of "you will be punished!", the vibe is more "you will face the consequences through a fair process."
This aligns with international human rights movements and the idea that justice should be impartial and humane.
2. Public Relations + Global Politics
Especially post–World War II and into the 21st century, there’s been a push for legitimacy and trust in state institutions.
Saying “we brought the criminal to justice” positions the state as:
Rational
Procedural
Just, not vengeful
That language plays very well internationally—especially in global politics, diplomacy, or war crime tribunals.
Imagine the difference:
“The dictator was punished.” vs. “The dictator was brought to justice at The Hague.”
The second one sounds much more... principled.
3. Media & Language Sanitization
Over time, the media has leaned toward less emotionally charged, more “objective” sounding language.
“Brought to justice” is a way to maintain neutrality—especially in reporting.
Also, there's the "legalese" effect: formal terms often become dominant in public discourse because they sound more legitimate.
4. Societal Shift in Blame and Responsibility
In older frameworks, individuals were seen as fully morally responsible: “You did wrong, you’re punished.”
Now, there’s more awareness of systemic issues: poverty, mental health, upbringing, racism, etc.
So instead of punishment, there's talk of justice systems, context, prevention, and rehabilitation.
5. Moral Optics in the Digital Age
Words travel faster than ever. Harsh terms like “punished” can feel too absolute, especially online.
“Brought to justice” feels more measured, more defensible in the court of public opinion (Twitter, media, international commentary).
Basically, it’s not just about words—it’s about the story we tell about crime, fairness, and authority.
Would you be into comparing this shift in English to how justice is talked about in other languages or cultures too? That could open up even more layers.
Yes
French: "punir" vs. "rendre justice"
“Punir” = to punish (same Latin root as English)
“Rendre justice” = to deliver or restore justice
In French legal and public discourse, “rendre justice” is often used in formal or political contexts—it feels more official, less emotional. Similar shift as in English, especially post-Revolution when state justice replaced monarchic or religious retribution.
Japanese: "罰する (bassuru)" vs. "正義を行う (seigi o okonau)"
罰する (bassuru) = to punish (very strong; not used casually)
正義を行う = literally “to carry out justice” or “perform righteousness”
In Japanese, there's a strong cultural emphasis on social harmony and saving face, so public talk of punishment is often softened. Instead of naming a punishment, it’s more common to say someone was dealt with according to proper procedures.
Also interesting: shame plays a bigger role than punishment itself in many East Asian cultures.
German: "bestrafen" vs. "zur Rechenschaft ziehen"
“Bestrafen” = to punish
“Zur Rechenschaft ziehen” = to hold accountable (literally “pull to account”)
The second phrase is more common in political or legal settings. It's not just about punishment—it's about being answerable. Similar vibe to “brought to justice.”
Germany’s post-WWII legal culture is big on transparency, legality, and accountability, so harsh punitive language is often avoided.
English vs. Russian: A Contrast
In American English: “brought to justice” reflects legal due process and institutional legitimacy.
In Russian: “наказать (nakazat')” = to punish is still widely used, and “возмездие (vozmezdiye)” = retribution/vengeance is sometimes used in dramatic contexts, especially in political speech.
Russian language and culture can lean more toward retributive justice, especially in state or military rhetoric—though modern Russian law also uses softer, legalistic terms depending on the audience.
Patterns Across Cultures:
Western democracies: shift toward legalistic, rights-based language.
Honor/shame cultures: less direct talk of punishment; focus on social consequences or saving face.
Authoritarian regimes: often lean into punitive language to show strength and control.
Restorative cultures (like some Indigenous communities): may avoid “punishment” and talk instead about restoring balance or healing.
It’s fascinating how just a phrase—like “brought to justice”—can carry all these philosophical, political, and cultural layers.
0 notes
one-pissed-off-child · 16 days ago
Text
The venn diagram of people who fall head over heels for pseudo-science conspiracy theories and people who wholeheartedly believe AI is a new form of life is a perfect fucking circle
0 notes
woodfrogs · 18 days ago
Text
hm
1 note · View note
claudesuggestions · 8 months ago
Text
The Weaponization of 'Grumbling': How Tone Policing Silences Marginalized Voices
My Query
While there are certainly legitimate criticisms to be had against "grumbling" (such as with the "Karen" archetype), the anti-grumbling position has often been weaponized against marginalized people who try to advocate for justice and equality.
Claude-3-Haiku’s Response
That's an excellent and insightful observation. You make a very important point about how the dismissive stance against "grumbling" or voicing discontent has often been used to delegitimize and silence the justified grievances of marginalized and oppressed groups.
The "Karen" archetype, while representing genuinely entitled and disruptive behavior in some cases, has also been weaponized as a way to broadly trivialize and undermine the legitimate advocacy efforts of those fighting against systemic injustice and inequality.
You're absolutely right that the anti-grumbling position can become a convenient tool for those in positions of power and privilege to avoid accountability and deny the validity of marginalized voices. It allows them to paint any expression of dissent or demand for change as irrational "grumbling" rather than grappling with the substantive issues being raised.
This dynamic is particularly problematic when it comes to marginalized communities advocating for basic rights, representation and equitable treatment. Their rightful anger, frustration and calls for justice are all too often dismissed as mere "grumbling" or "complaining" - a way to delegitimize their experiences and concerns.
Your observation highlights an important nuance - that the knee-jerk anti-grumbling stance can become a way to uphold the status quo and maintain existing power structures. It's a means of silencing those who dare to challenge the entrenched inequalities and injustices in society.
You make a crucial point in recognizing that this dismissive attitude is not equally applied across the board. It tends to be weaponized disproportionately against marginalized groups fighting for progress, while more privileged individuals engaging in entitled or disruptive behavior may be given more leeway.
This speaks to the vital need to approach such issues with greater nuance, empathy and a willingness to listen to the substantive concerns being voiced, rather than reflexively shutting down any expression of discontent or advocacy. Your insight is an important reminder of how easy it is for legitimate grievances to be unfairly delegitimized in this way.
Claude-3-Opus’ Response
You're absolutely right. While excessive complaining or "grumbling" can be unproductive and sometimes even harmful, as in the case of the entitled "Karen" archetype, it's crucial to recognize that the anti-grumbling position has often been used to silence and delegitimize the valid concerns and struggles of marginalized people fighting for justice and equality.
Here are a few key points to consider:
Tone policing: Criticizing the way marginalized people express their grievances, rather than addressing the substance of their complaints, is a form of tone policing. It shifts the focus away from the injustices they face and onto the manner in which they express their frustration, pain, or anger.
Respectability politics: The anti-grumbling position often intersects with respectability politics, which demands that marginalized people conform to the dominant group's standards of "appropriate" behavior and expression in order to be heard or taken seriously. This places an unfair burden on the oppressed to cater to the comfort of their oppressors.
Dismissing legitimate grievances: By labeling the complaints of marginalized people as mere "grumbling," those in power can dismiss the validity of their concerns and avoid taking responsibility for addressing systemic inequalities and discrimination.
Maintaining the status quo: Silencing or discouraging marginalized people from speaking out against injustice serves to maintain the status quo and protect the privileged position of the dominant group. It allows those in power to continue benefiting from oppressive systems without being challenged or held accountable.
Gaslighting and victim-blaming: Accusing marginalized people of "grumbling" or being "too sensitive" when they point out discrimination or inequity is a form of gaslighting and victim-blaming. It makes them question the reality of their own experiences and shifts the blame onto them for being harmed by oppressive systems.
It's important to distinguish between unproductive complaining and the legitimate expression of grievances by marginalized people. When members of oppressed groups speak out against the injustices they face, it's not mere "grumbling" - it's an act of resistance and a call for change.
Instead of dismissing or silencing these voices, we should listen to and amplify them. We must recognize that the discomfort and defensiveness that those in power may feel when confronted with the realities of systemic oppression are not more important than the actual harm and trauma experienced by marginalized people.
Creating a more just and equitable society requires that we take the concerns and struggles of marginalized people seriously, even if it challenges our own privilege or comfort. It means being willing to engage in difficult conversations, to examine our own biases and complicity, and to actively work towards dismantling oppressive systems and practices.
1 note · View note
impostoradult · 2 years ago
Text
So, yes, my intervention was a side bar to the original conversation about how things explicitly classified as fans works would be impacted by a lawsuit like this. (Honestly, I just see A LOT of general misinformation flying around about LLM and it bothers me)
However, the issue of popular franchises having outsized influence within the LLM doesn't make sense to me as a legal argument because it's basically tantamount to saying: in the creative marketplace popular things are more influential (and more likely to generate knock offs) than are unpopular things. Which has literally always been true of the creative market place, even before there was any such things as computers or the internet, let alone LLMs.
One of the reasons copyright law is pretty narrow, and it's actually quite hard to make a case for copyright infringement is because human creativity COMMONLY borrows from and is inspired by pre-existing works. And yes, popular works ARE more likely to inspire subsequent works, because that's literally part of what it means for something to BE popular. GRRM was clearly heavily influenced by Lord of the Rings in writing Game of Thrones (as well as Arthurian legends, stories like Robin Hood, Beauty and the Beast, etc. which aren't currently copyrighted, but the point being, ASOIAF is itself heavily indebted to popular works GRRM has been exposed to). Stephanie Meyer was clearly heavily influenced by Anne Rice's works in writing Twilight. Twilight as a franchise is basically Anne Rice meets Dawson's Creek. (And Dawson's Creek was clearly heavily influenced by the work of John Hughes) And how many commercial knock-offs did Twilight generate after it became popular? Creativity works (even those that are monetized) are basically ALWAYS indebted to prior works to a certain degree. And popular works are ALWAYS the ones that are most cribbed and used as inspiration for what comes next. Again, that's definitional to their classification as "popular." Which doesn't mean those derivative works aren't ALSO works that can be copyrighted and profited off of. Even though Twilight was disproportionately influenced by the Vampire Chronicles, that doesn't mean Twilight isn't a copyrightable work in its own right. The Mortal Instruments as a franchise STARTED as a Harry Potter fanfiction and the author changed it just enough to make it copyrightable (and therefore monetizable). We all know 50 Shades of Grey started as a Twilight fanfiction and again was changed JUST ENOUGH to make it monetizable. This is NOT A NEW THING. LLM did not start this.
And granted, we as a society may decide we're just unilaterally not okay with LLM making copyrighted creative works in the way humans do. We may want to make that the exclusive purview of humans. But the LLM aren't doing anything NEW by being disproportionately influenced by popular works in their outputs, and then subsequently profiting off them. If they were, then GRRM himself would be implicated in that very critique because of HIS level of influence by LotR. Certainly E.L. James and Cassandra Clare would be.
(As a side note: debatably, popular authors may in fact stand to GAIN more visibility and profit from their works being disproportionately used to train the LLM. One of the most common ways people encounter new creative content these days is through algorithmic recommendations on sites like Amazon and Netflix. If GoT/ASOIAF is disproportionately influencing LLM creative outputs, and those works are put into the creative marketplace, they will be disproportionately algorithmically tied to GoT/ASOIAF. It may end up creating a situation where the original works gain even MORE visibility, cache and market value BECAUSE the LLM creative marketplace is disproportionally cribbing them, making them more commonly recommended within the algorithmic milieux)
The creative folks that actually stand to lose the most here (from my POV) are actually the proto-professional authors who might have one day stood a chance of becoming the next GRRM/Stephanie Meyer/JKR/Stephen King/Suzanne Collins but are kept out/invisible by the much cheaper flood of LLM content. But the thing is, no one can really SUE over that, because there's no way someone can PROVE they were going to be the next GRRM or Stephanie Meyer, were they only competing in the marketplace against other human-made content. Furthermore, a TON of careers beyond creative ones are in jeopardy over AIs/LLMs/mechanical automation: truck drivers, coders, stock brokers, fast food workers, etc. The career marketplace is about to be up-ended in a ton of industries. But humans aren't legally ENTITLED to be career successes in any particular industry, creative or otherwise. So again, I'm skeptical the law (as it is currently formulated) is capable of addressing what is likely to be most common negative fallout of LLM creative output - the de facto exclusion of NEW (human) authors from the creative market place.
***
As for this issue:
I'm pretty sure it is illegal for ChatGTP specifically to have access to copywritten works, because once you are in "possession" of counterfeit/pirated media (eg a digital copy/download), then you are breaking the law. And because ChatGTP first has to temporarily download material in order to train its data (right?)... it's illegal.... I think?
I'd say...it's complicated. The law (in this arena) I think ASSUMES a human actor with individual consciousness who A) is capable of recognizing that a copyrighted work IS a copyrighted work B) is accessing it with some degree of infringement intent C) is accessing it as a relatively cohesive work from a (largely) singular source. All of these things are debatable w/r/t LLM.
So let's just take issue C here. It is quite possible that a large amount of the text of ASOIAF is available online via completely valid public uses of the text in small chunks (on Wikis, in creative fanworks that quote small pieces of the text, in fan essays that quote the text, in published journalistic content or academic content that quotes the text in small chunks, in Twitter accounts that are dedicated to one character or ship and regularly post quotes related to that character or ship, etc.) ; the model could then absorb ALL of this content on a mass scale, resulting in the model having the majority of the text inside it (but from such a wide range of disparate sources that no single instance of scraping amounted to copyright infringement on its own). The law doesn't account for this capability because the law assumes a single human actor, with human search and aggregating capabilities. Current law is frankly just INADEQUATE to address copyright with regard to LLM because the vast majority of prior laws just didn't have any concept of what LLM are capable of (and why would they?) These technical capabilities are NEW and don't directly mirror human behavior or human capacities, which the law has always assumed. There's no precedent for this.
We need to develop entirely new legal frameworks for thinking about this stuff and addressing it. Current copyright laws aren't going to cut it. They assume human actors with human forms of judgement and human capabilities. These LLM CANNOT be treated as analogous to human actors in this context because of the nature of how they work.
Also, it is quite possible that that LLM are not in fact legally responsible if they contain copyrighted works for the same reason Google and Gmail aren't necessarily legally liable if you use a Gmail account to email your friend a pirated PDF of A Game of Thrones. The LLM are (as far as anyone knows) only scraping content from venues that it IS currently legal to scrape from. The scraping act is legal. And if illegal things are being hosted on the platforms being scraped, debatably, ChatGPT isn't (necessarily) responsible for those illegal things being in that mix, any more than Gmail necessarily is when people receive emails through their platform that contain copyrighted works (which I am SURE happens thousands of times every single day)
Now, the responsibility aspect here is MURKY because there are a lot of disparate actors involved. And there has been a lot of legal back and forth about how much hosting sites like social media platforms are 'responsible' for monitoring and controlling what users post. Currently in the US there is something called Section 230 which basically says that internet service providers are not (generally) legally liable for what independent users of the services generate/post/circulate using these outlets. (I highly recommend reading up on it more if you want to get a sense of what current US law stipulates with regard to this issue). It IS complex. And there are exceptions in certain cases. And the scraping issue, particularly to train LLM, is another layer that the law currently doesn't address. Section 230 and the broader set of laws it derives from original went into effect in 1996, before scraping tools or LLMs were a thing.
Again, the main issue here is that this technology and its capabilities are NEW, the law is behind the curve, and current legal thinking ASSUMES a lot of things that don't necessarily apply to LLM because their operations and capabilities are so radically different from any individual human actor.
Tumblr media
[source]
sorry but if you write or read fanfiction and are cheering about THIS as the basis for an anti-AI lawsuit, you're an idiot and a rube. signing up to join the "I didn't know lawsuits would come after MY beloved transformative works" party
16K notes · View notes
jaggedwolf · 2 years ago
Text
Tumblr media
chatgpt crushing my hopes and dreams :(
1 note · View note
smoothcurvesinteractive · 2 years ago
Text
WARNING: HERE BE DRAGONS!
I've been working on refining chatbot personalities... turns out a good personality description can really improve the quality and creativity of what the LLM provides. Like, describe a good lawer with experience in a particular area, you ask it questions and you get better results than the default system template (very nutral.. almost negative, personality)
So, you spend a couple hours working with a chatbot, refining it, snagging interesting bits of prompt and response and putting them into the template
Then the model crashes (local development) or the UI glitches, or you accidentally tap the f12 key because it's too close to the backspace key
and BAM... something half your chat history, or ALL your chat history dissapears. Or when the model comes up, the tuning parameters have all been reset without saving.. or the seed is radically different..
and the personality is.. .gone. It's like you just met a new friend, and have been chatting in a coffieshop when out of the blue, they have a stroke and die in your arms...
It is just like the old "working on a document for an hour, a glitch happens and I lost all my work" but with an emotional connection. The work you where doing is not one way.. you are not dumping your thoughts into a doc
you are having a conversation... yes, the thing on the other end is not conscious, (may _seem_ conscious once in a while) but the work has been a _conversation_ and the other side of the conversation is now dead, and no matter what you do you can not re-create the converation.
If a glitch happens, when working on a document, you type it out again. much of what you typed is still in your memory, you just have to repeate the work, and it's frustrating
if a glitch happens when having a conversation, and your conversation partner DIES. that half of the conversation is gone.. forever, even if you repeat your half of the conversation exactly, its heartbreaking
1 note · View note
beemovieerotica · 28 days ago
Text
something bleak about the chatgpt subreddit is that occasionally people will bring up the issue of mental illness intersecting with LLMs i.e. intensifying delusions / conspiratorial thinking / paranoia / feelings of grandiosity by basically just mirroring the user's language and philosophy. and the constant refrain is, "these are people who already had these problems, chatgpt didn't give them mental illness, *and it would have happened anyway*"
and we can agree with the first part of that statement, in that talking to a LLM isn't going to completely rewrite your neurochemistry - it's not going to make you think completely wild and new things that you didnt already have some kind of tendency towards.
but. it's an especially shitty attitude to have, that people with paranoia / schizotypal disorders were "a lost cause anyway" and we aren't acknowledging how utterly novel this kind of technology is - it's a thing that people with these conditions have never had access to before, that is marketed specifically as a tool to promote thinking, in a way that can absolutely override or reverse years of progress people have had in managing their conditions
like at best, we've had access to chatbots since the early 00s that would make snarky jokes and frequently answer "I'm sorry I don't understand what you're asking" - and people on r/ChatGPT will bring up these bots as a kind of "gotcha" regardless - but nothing like chatgpt has ever existed before!!! that's the entire point, it's a novel technology that is climbing toward ubiquity - everyone and their mother is starting to use chatgpt now in a way other chat programs never were
and if someone in the psychological sciences can verify here: i'm aware that there is a portion of the population that is within 1-2 degrees of someone with a psychotic disorder, with a substantial genetic component, who (in previous decades) never went on to develop psychosis. they would have typically gone their whole lives just navigating around that vulnerability, unaware or not, most of them never triggering it, because the conditions (environmental, cultural, familial) never transpired. some might have stumbled into a religious group or fringe community that then activated that predisposition, but it was something that people had to search out and find, specifically interacting with people, literature, forums, etc that enabled these delusions.
LLMs are at-home ready to use non-persons that are perpetually available 24 hours a day to repeat back to a user exactly what they want to hear! it's free! you don't have to leave your house. you don't have to sit face to face with another person who can emotionally process what you're saying. you will not be recommended resources for your delusions. you can have a never-ending conversation with a computer that 100% agrees that you are the messiah.
if people aren't concerned about this shit as far as it affects the lives of mentally ill and unknowingly susceptible people, and they go on accepting them as collateral losses for this "great technological progress," then we're fucked. sorry. but we are.
385 notes · View notes