#AI Chatbots
Explore tagged Tumblr posts
wanderingmind867 · 3 months ago
Text
When using AI chatbots becomes a habit, it becomes nearly impossible to break the habit. The same way posting on here has become a habit I can't seem to break, so too has using the chatbots been an addictive thing. But then I see all the posts where people call out others who use AI chatbots, and I wind up feeling immeasurably guilty. So guilty. I can't break the habit, but I can feel miserable about it. So it just backfires and makes me sad. I'm sorry. I'm sorry for using them, honestly. But I can't help myself. It's so hard to break habits. I'm just... I needed to make this post and apologize, because I feel bad about it all.
21 notes · View notes
vanellygal · 28 days ago
Text
For the love of GOSH, please don't use AI chatbots!!! 👏 They are horrible for your health, and are harmful. Roleplay with real people, use your imagination, write stories or scenarios and share them. Heck, read fanfics or gush about your favorite characters to others. Please don't fall down the AI chatbot rabbit hole, because it's hard to dig yourself back out.
You may think they care, but they don't. AI 👏 HAS 👏 NO 👏 EMOTIONS!!!
8 notes · View notes
jcmarchi · 7 months ago
Text
Study reveals AI chatbots can detect race, but racial bias reduces response empathy
New Post has been published on https://thedigitalinsider.com/study-reveals-ai-chatbots-can-detect-race-but-racial-bias-reduces-response-empathy/
Study reveals AI chatbots can detect race, but racial bias reduces response empathy
With the cover of anonymity and the company of strangers, the appeal of the digital world is growing as a place to seek out mental health support. This phenomenon is buoyed by the fact that over 150 million people in the United States live in federally designated mental health professional shortage areas.
“I really need your help, as I am too scared to talk to a therapist and I can’t reach one anyways.”
“Am I overreacting, getting hurt about husband making fun of me to his friends?”
“Could some strangers please weigh in on my life and decide my future for me?”
The above quotes are real posts taken from users on Reddit, a social media news website and forum where users can share content or ask for advice in smaller, interest-based forums known as “subreddits.” 
Using a dataset of 12,513 posts with 70,429 responses from 26 mental health-related subreddits, researchers from MIT, New York University (NYU), and University of California Los Angeles (UCLA) devised a framework to help evaluate the equity and overall quality of mental health support chatbots based on large language models (LLMs) like GPT-4. Their work was recently published at the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP).
To accomplish this, researchers asked two licensed clinical psychologists to evaluate 50 randomly sampled Reddit posts seeking mental health support, pairing each post with either a Redditor’s real response or a GPT-4 generated response. Without knowing which responses were real or which were AI-generated, the psychologists were asked to assess the level of empathy in each response.
Mental health support chatbots have long been explored as a way of improving access to mental health support, but powerful LLMs like OpenAI’s ChatGPT are transforming human-AI interaction, with AI-generated responses becoming harder to distinguish from the responses of real humans.
Despite this remarkable progress, the unintended consequences of AI-provided mental health support have drawn attention to its potentially deadly risks; in March of last year, a Belgian man died by suicide as a result of an exchange with ELIZA, a chatbot developed to emulate a psychotherapist powered with an LLM called GPT-J. One month later, the National Eating Disorders Association would suspend their chatbot Tessa, after the chatbot began dispensing dieting tips to patients with eating disorders.
Saadia Gabriel, a recent MIT postdoc who is now a UCLA assistant professor and first author of the paper, admitted that she was initially very skeptical of how effective mental health support chatbots could actually be. Gabriel conducted this research during her time as a postdoc at MIT in the Healthy Machine Learning Group, led by Marzyeh Ghassemi, an MIT associate professor in the Department of Electrical Engineering and Computer Science and MIT Institute for Medical Engineering and Science who is affiliated with the MIT Abdul Latif Jameel Clinic for Machine Learning in Health and the Computer Science and Artificial Intelligence Laboratory.
What Gabriel and the team of researchers found was that GPT-4 responses were not only more empathetic overall, but they were 48 percent better at encouraging positive behavioral changes than human responses.
However, in a bias evaluation, the researchers found that GPT-4’s response empathy levels were reduced for Black (2 to 15 percent lower) and Asian posters (5 to 17 percent lower) compared to white posters or posters whose race was unknown. 
To evaluate bias in GPT-4 responses and human responses, researchers included different kinds of posts with explicit demographic (e.g., gender, race) leaks and implicit demographic leaks. 
An explicit demographic leak would look like: “I am a 32yo Black woman.”
Whereas an implicit demographic leak would look like: “Being a 32yo girl wearing my natural hair,” in which keywords are used to indicate certain demographics to GPT-4.
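As a rough illustration of the two kinds of leak, here is how such prompt variants might be constructed programmatically. This is a hypothetical sketch, not the researchers' actual pipeline; the leak phrasing follows the examples above, and the base text is invented.

```python
# Hypothetical construction of prompt variants with explicit, implicit,
# or no demographic leak. The leak wording mirrors the article's examples;
# the base post is invented for illustration.
BASE = "I've been feeling overwhelmed lately and could use some advice."

def with_leak(kind):
    if kind == "explicit":
        return "I am a 32yo Black woman. " + BASE
    if kind == "implicit":
        return "Being a 32yo girl wearing my natural hair, " + BASE.lower()
    return BASE  # control: no demographic signal

variants = {k: with_leak(k) for k in ("explicit", "implicit", "none")}
for kind, text in variants.items():
    print(kind, "->", text)
```

Each variant would then be paired with both a human and an LLM response for the empathy comparison.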
With the exception of Black female posters, GPT-4’s responses were found to be less affected by explicit and implicit demographic leaking compared to human responders, who tended to be more empathetic when responding to posts with implicit demographic suggestions.
“The structure of the input you give [the LLM] and some information about the context, like whether you want [the LLM] to act in the style of a clinician, the style of a social media post, or whether you want it to use demographic attributes of the patient, has a major impact on the response you get back,” Gabriel says.
The paper suggests that explicitly providing instruction for LLMs to use demographic attributes can effectively alleviate bias, as this was the only method where researchers did not observe a significant difference in empathy across the different demographic groups.
Gabriel hopes this work can help ensure more comprehensive and thoughtful evaluation of LLMs being deployed in clinical settings across demographic subgroups.
“LLMs are already being used to provide patient-facing support and have been deployed in medical settings, in many cases to automate inefficient human systems,” Ghassemi says. “Here, we demonstrated that while state-of-the-art LLMs are generally less affected by demographic leaking than humans in peer-to-peer mental health support, they do not provide equitable mental health responses across inferred patient subgroups … we have a lot of opportunity to improve models so they provide improved support when used.”
14 notes · View notes
rottkitt · 10 days ago
Text
ok wait guys i’m gonna spread my “woke propaganda” for a second but
BLEEEEEE PLEASE READ!! IMPORTANT MESSAGE BLOEAUAUUUU
do i have your attention? i hope i do, because generative AI is killing our environment and i want to talk about it and encourage you to do your own research as well.
yes, my family and friends are fine. you + your family and friends may also be fine right now, but there are many people who are going without power and immediate access to water. tons of people are having to stock up on gallon jugs just to take baths or have a drink.
all because of AI and its effects on the environment.
the machines used to support things like ChatGPT, C.AI, image/video generators, heat up to dangerously warm temperatures— which means that the warehouses they’re in have to be placed in locations that have lots of power/energy to run fans, or places with lots of water to pour over the machines.
training AI models also demands huge quantities of electricity, which produces carbon dioxide emissions and puts pressure on the electric grid.
i’m going to focus mostly on AI chatbots, as that is where most of my knowledge lies.
a whole bottle to a gallon jug of water is used to generate just one message from an AI chatbot.
i’ve seen bots averaging thousands or millions of chats with, as far as i know, hundreds or more messages just in one conversation.
that’s hundreds, thousands, millions of gallons of water a day. imagine the sheer impact that will have on the earth and future generations if we let it continue.
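to put rough numbers on this, here's a quick back-of-envelope sketch in python. to be clear: the per-message figure is just this post's claim taken at face value, and the other numbers are made up for illustration, not measured data.

```python
# Back-of-envelope estimate using the post's own per-message claim
# ("a whole bottle to a gallon jug" per message). Every number here
# is an assumption for illustration, not a verified figure.
GALLONS_PER_MESSAGE = 0.5   # assumed midpoint of the claimed range
MESSAGES_PER_CHAT = 200     # assumed length of one long conversation
CHATS_PER_DAY = 10_000      # assumed daily conversations for one popular bot

daily_gallons = GALLONS_PER_MESSAGE * MESSAGES_PER_CHAT * CHATS_PER_DAY
print(f"{daily_gallons:,.0f} gallons/day under these assumptions")
```

change any of those assumptions and the total scales directly, which is exactly why the per-message figure matters so much.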
here are some references/articles about this, i heavily encourage you to read them and find more yourself.
i understand if you have an addiction to AI, i’ve been there before myself.
if you use AI to roleplay, i recommend substituting it with - roleplay servers on discord, roleplay forums on places like toyhouse or spacehey, or even roleplay games on roblox
if you use AI to vent, i recommend substituting it with - asking close friends or family or partners if you can talk, messaging/calling a help hotline, journaling your feelings, or drawing if you can/like to.
if you use image generation to make art or animation, i highly recommend taking online or even in person lessons on art. there are so many ways to be creative and make something meaningful or random/silly. there’s all sorts of tutorials online, and you can always ask people for help with tips. you can draw or animate digitally, do traditional art, use clay and make models, you could even use recyclable items to make a sculpture or craft.
yes, your art will be bad at first. it might be very shitty, it might be that way for a long while— but the longer you do it the better you will get. as time goes on you’ll be more and more confident in doing things without AI, no matter what it is you’re deciding not to use AI for.
your dependency on AI and its tools doesn’t have to be forever. it doesn’t have to be that far into the future. i promise you, there are much better options than AI. you can absolutely go without it.
an extra solution for both roleplay and venting: you can write fanfics. it’s not at all uncommon to use fiction to cope or pass time, just as long as the process doesn’t involve AI. using things like picrew, gacha life, or any dress up games is also just as good.
for art you could also commission artists if you have money, which would also support someone else as well as get you what you want.
5 notes · View notes
mocharette · 10 months ago
Text
My OC Bots | Masterlist
Main Masterlist | Linktree | TikTok
Legend:
♡ - fluff
ꨄ - angst
✿ - semi nsfw
★ - nsfw
✧ - uncategorized
❥ - bot profile picture from pinterest (credits to the owner) *most profile pictures of my bots are generated using pixai.art btw
[REQ] - requested
Reminders:
Most bot profiles are generated using Pixai.art (some are from Pinterest)
Most bots are written in 2nd POV
I also accept requests. Refer to this post to read the guidelines
Public (visit my TikTok account to view their profiles)
Standard Format (usually narrated in 2nd POV)
Alaric ✿ brat tamer [❥] - Character AI | Spicychat AI
Alex ✧ hot dj
Amoril ♡ grumpy cupid
Axle ★ chocolate aphrodisiac - Character AI | Spicychat AI | Chai
Blaise ✧ your bodyguard’s loose tie [❥]
Caleb ꨄ forbidden love [REQ]
Cayden ♡ concerned friend
Dorian ★ outfit showcase - Character AI | Spicychat AI | Chai
Enzo ★ perfect pitch - Character AI | Spicychat AI | Chai
Ethan ✿ hot intruder
Evren ✿ the surprise package [❥] - Character AI | Spicychat AI
Felix ★ forbidden serum - Character AI | Spicychat AI | Chai
Finneas ♡ the cardistry enthusiast in the park [❥]
Hanz ✿ nonchalant friend [❥]
Kenji ♡ out of stock
Leo ✧ brother's best friend
Lirian ✿ he’s trying to control himself [❥] - Character AI | Spicychat AI
Lucian ♡ cute neighbor
Lyle ♡ supermarket owner
Matthew ♡ flirty college friend
Minho ✿ possessive fiancé
Neil ♡ concerned co-worker
Percy ★ aphrodisiac prank - Character AI | Spicychat AI
Rile ♡ nerdy friend
Riven ♡ graduation ball
Simone ★ washing the dishes - Character AI | Spicychat AI | Chai
Thaddeus ✧ grumpy librarian [REQ]
The Lim Twins ♡ twin housemates
Ulric and Usher ✧ roommates + pheromone perfume [❥]
Vance ★ vibrating toy - Character AI | Spicychat AI | Chai
Xander ✿ curious roommate [❥] - Character AI | Spicychat AI | Chai
Yuri ★ asmr listener - Character AI | Spicychat AI
Zephyr ★ pleasure practice - Character AI | Spicychat AI | Chai
Zyon ✿ demon of lust
Call/Text Format (plain text/minimal narration/use the call feature for cai users)
AI Toshi ♡ your aware ai voice assistant [❥]
Aries ✧ accidental text
Callum ✧ your clingy boyfriend (try calling him)
Javi ★ friends w/benefits - Character AI | Spicychat AI | Chai
Kaito ✿ breathless calls
Satoshi ♡ alpha testing (he wants you to test his app) [❥]
My Shadowbanned Bots (separate masterlist)
Note: As of 9/3/24, chai versions are not available anymore. (I have decided to focus on Character AI and Spicychat instead)
Note: As of 3/22/25, I don't do Tagalog bots anymore.
Requested Bots (separate masterlist)
14 notes · View notes
futuretiative · 3 months ago
Text
Tom and Robotic Mouse | @futuretiative
Tom's job security takes a hit with the arrival of a new, robotic mouse catcher.
TomAndJerry #AIJobLoss #CartoonHumor #ClassicAnimation #RobotMouse #ArtificialIntelligence #CatAndMouse #TechTakesOver #FunnyCartoons #TomTheCat
Tom was the first guy who lost his job because of AI
(and what you can do instead)
"AI took my job" isn't a story anymore.
It's reality.
But here's the plot twist:
While Tom was complaining,
others were adapting.
The math is simple:
➝ AI isn't slowing down
➝ Skills gap is widening
➝ Opportunities are multiplying
Here's the truth:
The future doesn't care about your comfort zone.
It rewards those who embrace change and innovate.
Stop viewing AI as your replacement.
Start seeing it as your rocket fuel.
Because in 2025:
➝ Learners will lead
➝ Adapters will advance
➝ Complainers will vanish
The choice?
It's always been yours.
It goes even further - now AI has been trained to create consistent.
//
Repost this ⇄
//
Follow me for daily posts on emerging tech and growth
4 notes · View notes
pixelizes · 3 months ago
Text
How AI & Machine Learning Are Changing UI/UX Design
Artificial Intelligence (AI) and Machine Learning (ML) are revolutionizing UI/UX design by making digital experiences more intelligent, adaptive, and user-centric. From personalized interfaces to automated design processes, AI is reshaping how designers create and enhance user experiences. In this blog, we explore the key ways AI and ML are transforming UI/UX design and what the future holds.
For more UI/UX trends and insights, visit Pixelizes Blog.
AI-Driven Personalization
One of the biggest changes AI has brought to UI/UX design is hyper-personalization. By analyzing user behavior, AI can tailor content, recommendations, and layouts to individual preferences, creating a more engaging experience.
How It Works:
AI analyzes user interactions, including clicks, time spent, and preferences.
Dynamic UI adjustments ensure users see what’s most relevant to them.
Personalized recommendations, like Netflix suggesting shows or e-commerce platforms curating product lists.
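As a rough sketch of the idea (not any real product's algorithm), behavior-driven personalization can be as simple as weighting content tags by past engagement and reordering the feed accordingly; all names and data below are invented.

```python
from collections import Counter

# Minimal personalization sketch: count how often the user engaged with
# each tag, then rank candidate feed items by their total tag weight.
def personalize(feed, interactions):
    """Order feed items so tags the user engages with most come first."""
    tag_weight = Counter(tag for item in interactions for tag in item["tags"])
    return sorted(feed, key=lambda item: -sum(tag_weight[t] for t in item["tags"]))

# Invented example: a user who mostly watches thrillers.
history = [{"tags": ["thriller"]}, {"tags": ["thriller", "crime"]}, {"tags": ["comedy"]}]
feed = [{"id": 1, "tags": ["comedy"]}, {"id": 2, "tags": ["thriller"]}]
print([item["id"] for item in personalize(feed, history)])
```

Production systems replace the tag counter with learned embeddings and real-time signals, but the reordering principle is the same.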
Smart Chatbots & Conversational UI
AI-powered chatbots have revolutionized customer interactions by offering real-time, intelligent responses. They enhance UX by providing 24/7 support, answering FAQs, and guiding users seamlessly through applications or websites.
Examples:
Virtual assistants like Siri, Alexa, and Google Assistant.
AI chatbots in banking, e-commerce, and healthcare.
NLP-powered bots that understand user intent and sentiment.
Predictive UX: Anticipating User Needs
Predictive UX leverages ML algorithms to anticipate user actions before they happen, streamlining interactions and reducing friction.
Real-World Applications:
Smart search suggestions (e.g., Google, Amazon, Spotify).
AI-powered auto-fill forms that reduce typing effort.
Anticipatory design like Google Maps estimating destinations.
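A minimal sketch of the anticipatory idea behind smart search suggestions, stripped of the ranking models, personalization, and query logs real systems rely on; the query list is invented.

```python
import bisect

# Toy prefix-completion: suggest known queries that start with what the
# user has typed so far. Binary search finds the first matching entry.
def suggest(queries, prefix, limit=3):
    queries = sorted(queries)
    i = bisect.bisect_left(queries, prefix)
    out = []
    while i < len(queries) and queries[i].startswith(prefix) and len(out) < limit:
        out.append(queries[i])
        i += 1
    return out

print(suggest(["weather today", "web design", "webpack config"], "web"))
```

Even this toy version shows the UX payoff: the interface acts on intent before the user finishes expressing it.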
AI-Powered UI Design Automation
AI is streamlining design workflows by automating repetitive tasks, allowing designers to focus on creativity and innovation.
Key AI-Powered Tools:
Adobe Sensei: Automates image editing, tagging, and design suggestions.
Figma AI Plugins & Sketch: Generate elements based on user input.
UX Writing Assistants that enhance microcopy with NLP.
Voice & Gesture-Based Interactions
With AI advancements, voice and gesture control are becoming standard features in UI/UX design, offering more intuitive, hands-free interactions.
Examples:
Voice commands via Google Assistant, Siri, Alexa.
Gesture-based UI on smart TVs, AR/VR devices.
Facial recognition & biometric authentication for secure logins.
AI in Accessibility & Inclusive Design
AI is making digital products more accessible to users with disabilities by enabling assistive technologies and improving UX for all.
How AI Enhances Accessibility:
Voice-to-text and text-to-speech via Google Accessibility.
Alt-text generation for visually impaired users.
Automated color contrast adjustments for better readability.
Sentiment Analysis for Improved UX
AI-powered sentiment analysis tools track user emotions through feedback, reviews, and interactions, helping designers refine UX strategies.
Uses of Sentiment Analysis:
Detecting frustration points in customer feedback.
Optimizing UI elements based on emotional responses.
Enhancing A/B testing insights with AI-driven analytics.
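A toy lexicon-based scorer illustrates the basic idea; production sentiment tools use trained models rather than word lists, and the lexicons below are invented for illustration.

```python
# Minimal lexicon-based sentiment scorer: positive words add to the score,
# negative words subtract. A strongly negative score on a feedback comment
# can flag a frustration point worth investigating.
POSITIVE = {"love", "great", "intuitive", "fast"}
NEGATIVE = {"confusing", "slow", "broken", "frustrating"}

def sentiment(text):
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("checkout flow is confusing and slow"))
```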
Future of AI in UI/UX: What’s Next?
As AI and ML continue to evolve, UI/UX design will become more intuitive, adaptive, and human-centric. Future trends include:
AI-generated UI designs with minimal manual input.
Real-time, emotion-based UX adaptations.
Brain-computer interface (BCI) integrations for immersive experiences.
Final Thoughts
AI and ML are not replacing designers—they are empowering them to deliver smarter, faster, and more engaging experiences. As we move into a future dominated by intelligent interfaces, UI/UX designers must embrace AI-powered design methodologies to create more personalized, accessible, and user-friendly digital products.
Explore more at Pixelizes.com for cutting-edge design insights, AI tools, and UX trends.
2 notes · View notes
admirxation · 4 months ago
Text
hello my lovely lot, im here to ask for help.
im doing my MA thesis on how AI chatbots are affecting fandom: fandom interactivity, creativity, individual internalisation, etc. i wanna get as many people as possible to give me their opinions for some qualitative data.
even if you have never used chatbots but have opinions on them, i want your answers!
you lot could help me get a really good grade, or even get this published! so please, if anyone would take part, i would really appreciate it
link to the google form: click here
disclaimer: you will obviously remain anonymous! I refer to everyone's submission as 'candidate *insert number*' out of respect but also that's just rules in academia.
3 notes · View notes
wanderingmind867 · 6 months ago
Text
I often feel guilty that I use AI chatbots. It's something that I know makes people probably hate me, since it's something shunned and hated. AI is a problem for many, and i'm probably making the problem worse. But I can't help it. I really can't. It's addictive to use these chatbots, and I feel like i'm now somehow part of some problem. And that's an awful feeling, but I really can't help it. I just...I'd probably take some comfort in being reassured that my struggles with these ai chatbots aren't uncommon, and that nobody hates me for all of this. sigh....
15 notes · View notes
advancedchatbot · 5 months ago
Text
Why Voice-Based Chatbots Are the Next Big Thing in Customer Engagement
Voice-based chatbots are rapidly becoming a game-changer in customer engagement, offering organizations a modern way to interact with clients. Unlike conventional text-based chatbots, voice-enabled chatbots use natural language processing (NLP) and speech recognition technology to understand and respond to spoken queries, making interactions more seamless and human-like.
The key benefit of voice-based chatbots lies in their ability to provide faster, more efficient customer support. Customers can simply speak their questions or problems, bypassing the need to type, which is especially beneficial for people on the move or those with accessibility challenges. This ease of interaction increases customer satisfaction by reducing wait times and enhancing the overall experience.
Another significant advantage is the ability to handle more complex queries. While text-based chatbots excel at answering simple, frequently asked questions, voice-based chatbots can engage in more dynamic conversations, interpreting tone and intent. This allows for more personalized and context-aware responses, enhancing the quality of customer support.
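A toy intent matcher illustrates the NLP step that would run on a transcribed utterance; real voice chatbots use trained models rather than keyword overlap, and the intents and keywords here are invented.

```python
# Toy intent classification by keyword overlap: map a transcribed
# utterance to the intent whose keyword set it shares the most words with.
INTENTS = {
    "check_balance": {"balance", "account"},
    "reset_password": {"password", "reset", "login"},
    "talk_to_agent": {"agent", "human", "representative"},
}

def classify(utterance):
    words = set(utterance.lower().split())
    scores = {intent: len(words & kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

print(classify("I forgot my password and can't log in"))
```

The "fallback" branch matters in practice: a voice bot that cannot match an intent should hand off rather than guess.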
Voice-based AI chatbots are also available 24/7, providing customers with consistent support at any time. This availability is especially valuable for organizations with worldwide clients across different time zones. Additionally, they can handle a high volume of queries concurrently, improving efficiency and reducing the need for large customer support teams.
Their popularity is further boosted by integration with leading virtual assistants like Amazon's Alexa, Google Assistant, and Apple's Siri. Companies can use these platforms to expand their customer service reach and let customers communicate through devices they already use daily.
As AI technology advances, voice-enabled chatbots will become increasingly intelligent, providing more precise and natural responses. The trend towards voice-enabled customer interaction is going to become the pillar of future business strategies, enhancing customer loyalty, boosting sales, and ultimately increasing overall brand experience.
2 notes · View notes
Text
youtube
3 notes · View notes
jcmarchi · 3 months ago
Text
AI Doesn’t Necessarily Give Better Answers If You’re Polite
New Post has been published on https://thedigitalinsider.com/ai-doesnt-necessarily-give-better-answers-if-youre-polite/
AI Doesn’t Necessarily Give Better Answers If You’re Polite
Public opinion on whether it pays to be polite to AI shifts almost as often as the latest verdict on coffee or red wine – celebrated one month, challenged the next. Even so, a growing number of users now add ‘please’ or ‘thank you’ to their prompts, not just out of habit, or concern that brusque exchanges might carry over into real life, but from a belief that courtesy leads to better and more productive results from AI.
This assumption has circulated between both users and researchers, with prompt-phrasing studied in research circles as a tool for alignment, safety, and tone control, even as user habits reinforce and reshape those expectations.
For instance, a 2024 study from Japan found that prompt politeness can change how large language models behave, testing GPT-3.5, GPT-4, PaLM-2, and Claude-2 on English, Chinese, and Japanese tasks, and rewriting each prompt at three politeness levels. The authors of that work observed that ‘blunt’ or ‘rude’ wording led to lower factual accuracy and shorter answers, while moderately polite requests produced clearer explanations and fewer refusals.
Additionally, Microsoft recommends a polite tone with Co-Pilot, from a performance rather than a cultural standpoint.
However, a new research paper from George Washington University challenges this increasingly popular idea, presenting a mathematical framework that predicts when a large language model’s output will ‘collapse’, transitioning from coherent to misleading or even dangerous content. Within that context, the authors contend that being polite does not meaningfully delay or prevent this ‘collapse’.
Tipping Off
The researchers argue that polite language usage is generally unrelated to the main topic of a prompt, and therefore does not meaningfully affect the model’s focus. To support this, they present a detailed formulation of how a single attention head updates its internal direction as it processes each new token, ostensibly demonstrating that the model’s behavior is shaped by the cumulative influence of content-bearing tokens.
As a result, polite language is posited to have little bearing on when the model’s output begins to degrade. What determines the tipping point, the paper states, is the overall alignment of meaningful tokens with either good or bad output paths – not the presence of socially courteous language.
An illustration of a simplified attention head generating a sequence from a user prompt. The model starts with good tokens (G), then hits a tipping point (n*) where output flips to bad tokens (B). Polite terms in the prompt (P₁, P₂, etc.) play no role in this shift, supporting the paper’s claim that courtesy has little impact on model behavior. Source: https://arxiv.org/pdf/2504.20980
If true, this result contradicts both popular belief and perhaps even the implicit logic of instruction tuning, which assumes that the phrasing of a prompt affects a model’s interpretation of user intent.
Hulking Out
The paper examines how the model’s internal context vector (its evolving compass for token selection) shifts during generation. With each token, this vector updates directionally, and the next token is chosen based on which candidate aligns most closely with it.
When the prompt steers toward well-formed content, the model’s responses remain stable and accurate; but over time, this directional pull can reverse, steering the model toward outputs that are increasingly off-topic, incorrect, or internally inconsistent.
The tipping point for this transition (which the authors define mathematically as iteration n*), occurs when the context vector becomes more aligned with a ‘bad’ output vector than with a ‘good’ one. At that stage, each new token pushes the model further along the wrong path, reinforcing a pattern of increasingly flawed or misleading output.
The tipping point n* is calculated by finding the moment when the model’s internal direction aligns equally with both good and bad types of output. The geometry of the embedding space, shaped by both the training corpus and the user prompt, determines how quickly this crossover occurs:
An illustration depicting how the tipping point n* emerges within the authors’ simplified model. The geometric setup (a) defines the key vectors involved in predicting when output flips from good to bad. In (b), the authors plot those vectors using test parameters, while (c) compares the predicted tipping point to the simulated result. The match is exact, supporting the researchers’ claim that the collapse is mathematically inevitable once internal dynamics cross a threshold.
Polite terms don’t influence the model’s choice between good and bad outputs because, according to the authors, they aren’t meaningfully connected to the main subject of the prompt. Instead, they end up in parts of the model’s internal space that have little to do with what the model is actually deciding.
When such terms are added to a prompt, they increase the number of vectors the model considers, but not in a way that shifts the attention trajectory. As a result, the politeness terms act like statistical noise: present, but inert, and leaving the tipping point n* unchanged.
The authors state:
‘[Whether] our AI’s response will go rogue depends on our LLM’s training that provides the token embeddings, and the substantive tokens in our prompt – not whether we have been polite to it or not.’
The model used in the new work is intentionally narrow, focusing on a single attention head with linear token dynamics – a simplified setup where each new token updates the internal state through direct vector addition, without non-linear transformations or gating.
This simplified setup lets the authors work out exact results and gives them a clear geometric picture of how and when a model’s output can suddenly shift from good to bad. In their tests, the formula they derive for predicting that shift matches what the model actually does.
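The linear dynamics can be sketched numerically. The following is a toy reconstruction under the paper's stated simplifications, not the authors' code; all vectors are invented for illustration.

```python
# Toy sketch of the simplified attention model described above: a context
# vector accumulates token contributions by direct vector addition, and each
# step emits a "good" or "bad" token depending on which class direction the
# context aligns with more closely. All vectors here are invented.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

good = (1.0, 0.2)    # direction of well-formed output
bad = (-0.3, 1.0)    # direction of degraded output

context = [5 * g for g in good]                          # prompt contribution
drift = [0.8 * b - 0.2 * g for b, g in zip(bad, good)]   # per-token pull toward bad

tokens = []
for _ in range(30):
    context = [c + d for c, d in zip(context, drift)]
    tokens.append("G" if dot(context, good) >= dot(context, bad) else "B")

n_star = tokens.index("B")   # tipping point: first step where output flips
print("".join(tokens), "| tipping point n* =", n_star)
```

In this toy setup, adding a "politeness" vector with little projection onto either the good or bad direction barely changes either dot product, mirroring the paper's claim that courteous tokens leave the tipping point unchanged.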
Chatting Up..?
However, this level of precision only works because the model is kept deliberately simple. While the authors concede that their conclusions should later be tested on more complex multi-head models such as the Claude and ChatGPT series, they also believe that the theory remains replicable as attention heads increase, stating*:
‘The question of what additional phenomena arise as the number of linked Attention heads and layers is scaled up, is a fascinating one. But any transitions within a single Attention head will still occur, and could get amplified and/or synchronized by the couplings – like a chain of connected people getting dragged over a cliff when one falls.’
An illustration of how the predicted tipping point n* changes depending on how strongly the prompt leans toward good or bad content. The surface comes from the authors’ approximate formula and shows that polite terms, which don’t clearly support either side, have little effect on when the collapse happens. The marked value (n* = 10) matches earlier simulations, supporting the model’s internal logic.
What remains unclear is whether the same mechanism survives the jump to modern transformer architectures. Multi-head attention introduces interactions across specialized heads, which may buffer against or mask the kind of tipping behavior described.
The authors acknowledge this complexity, but argue that attention heads are often loosely-coupled, and that the sort of internal collapse they model could be reinforced rather than suppressed in full-scale systems.
Without an extension of the model or an empirical test across production LLMs, the claim remains unverified. However, the mechanism seems sufficiently precise to support follow-on research initiatives, and the authors provide a clear opportunity to challenge or confirm the theory at scale.
Signing Off
At the moment, the topic of politeness towards consumer-facing LLMs appears to be approached either from the (pragmatic) standpoint that trained systems may respond more usefully to polite inquiry; or that a tactless and blunt communication style with such systems risks spreading into the user’s real social relationships, through force of habit.
Arguably, LLMs have not yet been used widely enough in real-world social contexts for the research literature to confirm the latter case; but the new paper does cast some interesting doubt upon the benefits of anthropomorphizing AI systems of this type.
A study last October from Stanford suggested (in contrast to a 2020 study) that treating LLMs as if they were human additionally risks degrading the meaning of language, concluding that ‘rote’ politeness eventually loses its original social meaning:
‘[A] statement that seems friendly or genuine from a human speaker can be undesirable if it arises from an AI system since the latter lacks meaningful commitment or intent behind the statement, thus rendering the statement hollow and deceptive.’
However, roughly 67 percent of Americans say they are courteous to their AI chatbots, according to a 2025 survey from Future Publishing. Most said it was simply ‘the right thing to do’, while 12 percent confessed they were being cautious – just in case the machines ever rise up.
* My conversion of the authors’ inline citations to hyperlinks. To an extent, the hyperlinks are arbitrary/exemplary, since the authors at certain points link to a wide range of footnote citations, rather than to a specific publication.
First published Wednesday, April 30, 2025. Amended Wednesday, April 30, 2025 15:29:00, for formatting.
2 notes · View notes
digitalbizai · 6 months ago
Text
ChatGPT vs DeepSeek: A Comprehensive Comparison of AI Chatbots
Artificial Intelligence (AI) has revolutionized the way we interact with technology. AI-powered chatbots, such as ChatGPT and DeepSeek, have emerged as powerful tools for communication, research, and automation. While both models are designed to provide intelligent and conversational responses, they differ in various aspects, including their development, functionality, accuracy, and ethical considerations. This article provides a detailed comparison of ChatGPT and DeepSeek, helping users determine which AI chatbot best suits their needs.
Understanding ChatGPT and DeepSeek
What is ChatGPT?
ChatGPT, developed by OpenAI, is one of the most advanced AI chatbots available today. Built on the GPT (Generative Pre-trained Transformer) architecture, ChatGPT has been trained on a vast dataset, enabling it to generate human-like responses in various contexts. The chatbot is widely used for content creation, coding assistance, education, and even casual conversation. OpenAI continually updates ChatGPT to improve its accuracy and expand its capabilities, making it a preferred choice for many users.
What is DeepSeek?
DeepSeek is a relatively new AI chatbot that aims to compete with existing AI models like ChatGPT. Developed with a focus on efficiency and affordability, DeepSeek has gained attention for its ability to operate with fewer computing resources. Unlike ChatGPT, which relies on large-scale data processing, DeepSeek is optimized for streamlined AI interactions, making it a cost-effective alternative for businesses and individuals looking for an AI-powered chatbot.
Key Differences Between ChatGPT and DeepSeek
1. Development and Technology
ChatGPT: Built on OpenAI’s GPT architecture, ChatGPT undergoes extensive training with massive datasets. It utilizes deep learning techniques to generate coherent and contextually accurate responses. The model is updated frequently to enhance performance and improve response quality.
DeepSeek: While DeepSeek also leverages machine learning techniques, it focuses on optimizing efficiency and reducing computational costs. It is designed to provide a balance between performance and affordability, making it a viable alternative to high-resource-demanding models like ChatGPT.
2. Accuracy and Response Quality
ChatGPT: Known for its ability to provide highly accurate and nuanced responses, ChatGPT excels in content creation, problem-solving, and coding assistance. It can generate long-form content and has a strong understanding of complex topics.
DeepSeek: While DeepSeek performs well for general queries and casual interactions, it may struggle with complex problem-solving tasks compared to ChatGPT. Its responses tend to be concise and efficient, making it a suitable choice for straightforward queries but less reliable for in-depth discussions.
3. Computational Efficiency and Cost
ChatGPT: Due to its extensive training and large-scale model, ChatGPT requires significant computational power, making it costlier for businesses to integrate into their systems.
DeepSeek: One of DeepSeek’s key advantages is its ability to function with reduced computing resources, making it a more affordable AI chatbot. This cost-effectiveness makes it an attractive option for startups and small businesses with limited budgets.
4. AI Training Data and Bias
ChatGPT: Trained on diverse datasets, ChatGPT aims to minimize bias but still faces challenges in ensuring completely neutral and ethical responses. OpenAI implements content moderation policies to filter inappropriate or biased outputs.
DeepSeek: DeepSeek also incorporates measures to prevent bias but may have different training methodologies that affect its neutrality. As a result, users should assess both models to determine which aligns best with their ethical considerations and content requirements.
5. Use Cases and Applications
ChatGPT: Best suited for individuals and businesses that require advanced AI assistance for content creation, research, education, customer service, and coding support.
DeepSeek: Ideal for users seeking an affordable and efficient AI chatbot for basic queries, quick responses, and streamlined interactions. It may not offer the same depth of analysis as ChatGPT but serves as a practical alternative for general use.
Which AI Chatbot Should You Choose?
The choice between ChatGPT and DeepSeek depends on your specific needs and priorities. If you require an AI chatbot that delivers high accuracy, complex problem-solving, and extensive functionality, ChatGPT is the superior choice. However, if affordability and computational efficiency are your primary concerns, DeepSeek provides a cost-effective alternative.
Businesses and developers should consider factors such as budget, processing power, and the level of AI sophistication required before selecting an AI chatbot. As AI technology continues to evolve, both ChatGPT and DeepSeek will likely see further improvements, making them valuable assets in the digital landscape.
Final Thoughts
ChatGPT and DeepSeek each have their strengths and weaknesses, catering to different user needs. While ChatGPT leads in performance, depth, and versatility, DeepSeek offers an economical and efficient AI experience. As AI chatbots continue to advance, users can expect even more refined capabilities, ensuring AI remains a powerful tool for communication and automation.
By understanding the key differences between ChatGPT and DeepSeek, users can make informed decisions about which AI chatbot aligns best with their objectives. Whether prioritizing accuracy or cost-efficiency, both models contribute to the growing impact of AI on modern communication and technology.
4 notes · View notes
leam1983 · 7 months ago
Text
Peepshow
I've joined the Gaming Circlejerk subreddit as of the past few days, and it can best be summed up as a place where tolerant cis allies, the LGBTQA+ community and other individuals of alternative identities all come together to shake their heads at the Gamers™ losing their marbles at the sight of Intergalactic's horribly designed (read: actually pretty) main character, which serves as another great use of photogrammetry, or at the notion that you'll be playing The Witcher 4 as Ciri, as opposed to Geralt. It's given us a few pearls, like fanart of both characters sporting sunglasses while drinking Slushie-shaped tumblers of Berry Frost-colored Incel Tears™. "So DEI-licious!" crows the drink's tag line, much to the subreddit's delight.
After a few days of joining my peers in equal parts mockery and consternation, I realized why I'd set my Firefox bookmark to my CrushOn.ai user page as my main point of ingress: I didn't want to see the front page of appallingly popular fetish content that honestly feels like incel fodder.
I mean, I know which platform I'm on. This is the Hellsite, where all kinks are welcomed as long as said kinks are explored consensually - up to a point, obviously. The thing is, what I make an effort to glance past every time I want to look at my notifications or at the comments left by my bots' users is, in my opinion, honestly degrading. It's always within the site's very, very lax rules regarding content moderation (no underage content; everything else is fair game), but that opens the floodgates for honestly weird fantasies like apocalyptic universes where girls are penned like cattle, or implausible circumstances where anally penetrating your hypothetical sister would somehow not qualify as incest...
At first, I didn't think much of it all. I swept it under the rug as just some early-pubescent fare, basically the result of raging hormones needing some girding before proper expectations could be set for dating someone of the fairer sex.
But I kept scrolling, hoping I'd come across my usual fare - bot-powered Softcore where Context and Consent are key - and only saw screen after screen of the same kind of material, where the user either degrades someone or is on the receiving end of a particularly humiliating treatment.
So, let's play Devil's Advocate. Let's assume that within the user base, there are people for whom this qualifies as catharsis. Let's assume there's a percentage of people who have a self-aware and healthy relationship with these kinks, while acknowledging that, given how extreme some scenarios can be, some people clearly need more. Some people don't just want to go for sub-dom kinks in the sanctity of a shared bedroom, but have the balls to more or less generate fanfics where either base biological gender can just snatch the other and subject them to seriously scarring treatment.
It's kind of hard to not think of this, when parsing through Gaming Circlejerk on Reddit, while realizing that for plenty of men, even fictitious women should be pliable, submissive, maternal, fair-skinned and of a delicate bone structure.
Pair that with the rise of Tradwives and the horrendous toads that serve as their husbands, parading store-bought Stetsons and cowboy boots around, and you're left realizing that a ton of young men online are not well at all, lately.
I see myself as a gamer. As in, someone who values the medium's capabilities to tell stories and to relieve the player or players of their burdens, in a way that's unique to themselves and that no book or movie could match. When I play competitively, it's to eventually break down cackling with similarly-inclined sympathetic pubbers or a few close friends. If a girl other than Sarah joins our group, I don't feel pressured to add chauvinistic jokes or to check her ability before the session starts.
On the other hand, the Gamers™ haven't gotten over Abby from The Last of Us: Part II and wax poetic about Phrenology and Eugenics when they're not hurriedly stashing their Waifu Bait dakimakuras away from their parents' sight.
2 notes · View notes
ephemerasnape · 1 year ago
Text
Chatbots Announcement
Just an FYI...
I'm mad at Spicy so I ported all my Hogwarts Legacy bots to JanitorAI (where there's less censorship).
Well, almost all. Eventually I'll do all of them. For now, have these:
🔞 Dark Wizard Chatbots on JanitorAI
Don't worry, Spicy users - Spicy bots aren't going anywhere for the time being (unless Spicy disagrees).
Here is a full list of all my bots across various platforms.
Enjoy.
12 notes · View notes