#AI chatbot use cases
aichatgpt · 2 years
Text
The video showcases how AI chat GPT technology can help your online business in a number of ways. Some of the benefits of using this technology include improved customer service and satisfaction, enhanced data-driven insights, increased cost savings, and higher efficiency. Additionally, the video explains the advantages of using AI chat GPT compared to traditional methods, such as the ability to generate automated responses to customer inquiries in real time. Finally, the video provides tips for implementing and optimizing AI chat GPT in your online business, including coding examples and best practices for getting the most out of this technology.
0 notes
demonic-shadowlucifer · 10 months
Text
I wanna talk briefly about the AI debate because some of the takes I've seen are very much pissing me off, especially since most of those takes aren't helpful at all (and some are just straight up bullying). I already posted about this on my other blog (post in question has since been deleted since it was kinda harsh and ngl very dismissive of very valid concerns!), but the biggest issue I have with the Anti-AI crowd (and, to be honest, the AI debate in general) is that it feels like they're getting mad at the wrong things.

No, AI itself is not the problem. No, someone calling themself an "AI artist" is not the problem. No, using AI for fun is not the problem. No, partaking in some AI trend is not the problem. No, someone simply generating AI images is not the problem. The actual problems are:

- People feeding other people's art into AI generators and then claiming it as their own (scraping, basically)
- People putting other people's writing into AI chatbots/AI text generators (ex. ChatGPT) to "finish" the fic (again, scraping)
- People using AI to make eerily realistic Not SFW deepfakes of either people they know or celebrities
- Corporations and companies screwing over artists, musicians and actors in favor of AI (such as replacing them)
- People using AI to make racist/queerphobic/misogynistic/otherwise bigoted stuff (something that I've also been seeing, unfortunately)
- People not being honest about using AI (transparency, people!)
- People using AI to mimic other people's voices without those people's consent (not sure how to word this but i'm sure some of you know what I mean)
- The fact that there are almost no regulations when it comes to AI

AI gets a lot of criticism, and it should! Until it becomes more ethical and there are regulations imposed, we should still be skeptical of it. However, I feel like we've gone very off track when it comes to criticizing AI.
Personally, I don't think someone posting an AI-generated image of an elf with wings surrounded by mushrooms and rainbows makes them a thief by itself. But if they made that image using someone else's art, then in that case yes, they are a thief! And no, someone partaking in the Pixar AI trend is probably not going to cost you your job. You know what will cost you your job though? Companies favoring AI over actual living beings. So maybe instead of getting mad at someone using Character.AI or posting an AI-generated gothic phoenix, how about we get mad at corporations screwing artists/actors/musicians over and the people using AI with genuine malicious intent?
Tumblr media
(Image ID: A banner that is blue with flowers framing it. The text reads "OP is a minor. Please respect my boundaries" End ID)
29 notes · View notes
reveationlabs · 10 months
Text
Benefits of Integrating ChatGPT and NLP: From Context to Code Generation
Tumblr media
Imagine having a chat with your computer, and it understands you like a friend. That's the magic of ChatGPT! It's a language model developed by OpenAI, and its superpower lies in understanding and generating human-like text. In simpler terms, it's like having a virtual buddy who talks just like you do.
Why does ChatGPT matter in NLP?
Natural Language Processing is all about making computers understand and respond to human language. ChatGPT takes this to a whole new level. It's not just about getting a correct answer; it's about having a meaningful conversation. People are excited about ChatGPT because it's bridging the gap between humans and machines like never before.
The Rise of AI-Driven Conversations
In recent times, there's been a buzz around AI-driven conversational models. Think about it – from virtual assistants on our phones to smart speakers in our homes, we're increasingly talking to machines. This shift is not just about getting information; it's about having a natural, flowing conversation with our devices.
Why the Growing Interest in ChatGPT?
The interest in AI-driven conversational models is like a spark turning into a flame. We want technology to understand us better, respond in a way that makes sense, and maybe even crack a joke or two. ChatGPT is at the forefront of this revolution, making conversations with machines more intuitive, friendly, and, dare we say, fun!
In this blog, we're going to unravel the secrets behind ChatGPT and explore how it's shaping the landscape of NLP. 
Contents Discussed in this Blog
Potential Benefits of Integration of ChatGPT and NLP
How does ChatGPT understand context in a conversation?
How does ChatGPT handle ambiguity in language?
Does ChatGPT have a memory of past interactions?
How is ChatGPT fine-tuned for specific applications?
How can ChatGPT be used for generating code or technical content?
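On that last question: in practice, "using ChatGPT for code generation" usually means one API call with a carefully built prompt. Here is a minimal sketch of constructing such a request body, assuming the general shape of OpenAI's chat completions endpoint; the model name and parameter values are illustrative, not a recommendation:

```python
def build_code_request(task_description, model="gpt-3.5-turbo"):
    """Build a chat-completions style request body asking the model for code."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are a coding assistant. Reply with code only."},
            {"role": "user", "content": task_description},
        ],
        "temperature": 0.2,  # lower temperature tends to give more deterministic code
    }

req = build_code_request("Write a Python function that reverses a string.")
```

The system message pins down the assistant's behavior, and the body would then be POSTed to the API with your key; the details of authentication and response parsing are omitted here.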
0 notes
webcluesinfotech · 1 year
Text
How Can Generative AI Enhance Your Business's Organisational and Operational Processes
Tumblr media
Experience the future of business transformation with Generative AI. From customer support to content creation, sales optimization, and HR process automation, this cutting-edge technology is your key to enhanced efficiency and innovation. Learn how Generative AI can automate and elevate your operations while delivering personalized experiences and data-driven insights.
At Webclues Infotech, we're your partners in harnessing the true potential of Generative AI. Join us in this technological revolution and take your business to new heights in a fast-paced world. Read the blog for more insights.
0 notes
wordstome · 8 months
Text
how c.ai works and why it's unethical
Okay, since the AI discourse is happening again, I want to make this very clear, because a few weeks ago I had to explain to a (well meaning) person in the community how AI works. I'm going to be addressing people who are maybe younger or aren't familiar with the latest type of "AI", not people who purposely devalue the work of creatives and/or are shills.
The name "Artificial Intelligence" is a bit misleading when it comes to things like AI chatbots. When you think of AI, you think of a robot, and you might think that by making a chatbot you're simply programming a robot to talk about something you want them to talk about, and it's similar to an rp partner. But with current technology, that's not how AI works. For a breakdown on how AI is programmed, CGP Grey made a great video about this several years ago (he updated the title and thumbnail recently).
youtube
I HIGHLY HIGHLY recommend you watch this because CGP Grey is good at explaining, but the tl;dr for this post is this: bots are made with a metric shit-ton of data. In C.AI's case, the data is writing. Stolen writing, usually scraped fanfiction.
How do we know chatbots are stealing from fanfiction writers? It knows what omegaverse is [SOURCE] (it's a Wired article, put it in incognito mode if it won't let you read it), and when a Reddit user asked a chatbot to write a story about "Steve", it automatically wrote about characters named "Bucky" and "Tony" [SOURCE].
I also said this in the tags of a previous reblog, but when you're talking to C.AI bots, it's also taking your writing and using it in its algorithm, which seems fine until you realize that 1) they're using your work uncredited, and 2) it's not staying private: they're using your work to make their service better, a service they're trying to make money off of.
"But Bucca," you might say. "Human writers work like that too. We read books and other fanfictions and that's how we come up with material for roleplay or fanfiction."
Well, what's the difference between plagiarism and original writing? The answer is that plagiarism is taking what someone else has made and simply editing it or mixing it up to look original. You didn't do any thinking yourself. C.AI doesn't "think" because it's not a brain, it takes all the fanfiction it was taught on, mixes it up with whatever topic you've given it, and generates a response like in old-timey mysteries where somebody cuts a bunch of letters out of magazines and pastes them together to write a letter.
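The cut-up-letters analogy can be made literal with a toy example: a tiny bigram "writer" that can only ever paste together words its training text already contains. This is a deliberately crude illustration of the point, not how C.AI actually works under the hood, and the corpus here is made up:

```python
import random

def build_bigrams(corpus):
    """Map each word to the list of words that followed it in the corpus."""
    words = corpus.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """'Write' by repeatedly pasting in a word the corpus put after the last one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

table = build_bigrams("the fan wrote the fic and the fan posted the fic")
print(generate(table, "the", 5))
```

Every word of the output comes straight from the training text; nothing is thought up. Real language models are vastly more sophisticated recombiners, but the output is still a function of what they were trained on.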
(And might I remind you, people can't monetize their fanfiction the way C.AI is trying to monetize itself. Authors are very lax about fanfiction nowadays: we've come a long way since the Anne Rice days of terror. But this issue is cropping back up again with BookTok complaining that they can't pay someone else for bound copies of fanfiction. Don't do that either.)
Bottom line, here are the problems with using things like C.AI:
It is using material it doesn't have permission to use and doesn't credit anybody. Not only is it ethically wrong, but AI is already beginning to contend with copyright issues.
C.AI sucks at its job anyway. It's not good at basic story structure like building tension, and can't even remember things you've told it. I've also seen many instances of bots saying triggering or disgusting things that deeply upset the user. You don't get that with properly trigger tagged fanworks.
Your work and your time put into the app can be taken away from you at any moment and used to make money for someone else. I can't tell you how many times I've seen people who use AI panic about accidentally deleting a bot that they spent hours conversing with. Your time and effort are so much more stable and well-preserved if you wrote a fanfiction or roleplayed with someone and saved the chatlogs. The company that owns and runs C.AI can not only use whatever you've written as they see fit, they can also take your shit away on a whim, either on purpose or by accident due to the nature of the Internet.
DON'T USE C.AI, OR AT THE VERY BARE MINIMUM DO NOT DO THE AI'S WORK FOR IT BY STEALING OTHER PEOPLES' WORK TO PUT INTO IT. Writing fanfiction is a communal labor of love. We share it with each other for free for the love of the original work and ideas we share. Not only can AI not replicate this, but it shouldn't.
(also, this goes without saying, but this entire post also applies to ai art)
5K notes · View notes
weird-writes · 1 year
Text
so apparently sudowrite was trained on a corpus that includes ao3, so i hope it enjoyed my smut about this guy from star wars. i can’t tell you how proud i am that somewhere deep in its algorithm it will forever slightly associate the words “thoracic plates” and “squirting” because of me.
1 note · View note
gangadhar7758 · 2 years
Text
Advantages of Using Chatbots in Real Estate
Tumblr media
Chatbots used in real estate are essentially virtual agents that save time and free up live agents to focus on more complex aspects of the job. These AI chatbots contact potential customers through various messaging apps, with the aim of converting those prospects into conversations and, ultimately, into customers.
Read more: Advantages of Using Chatbots in Real Estate
0 notes
amica-mart · 2 years
Text
ChatGPT for Internet Marketers (PLR) information
From: Eric Holmlund and Ryan Greene
Dear Entrepreneur,
AI is taking the world by storm, and everyone is clamoring to get a piece of it.
It was recently announced that ChatGPT was the fastest app in history to reach 100 Million users. It took Instagram over 2 years, and TikTok 9 months to reach that milestone.
The reason it is growing so rapidly is that entrepreneurs and businesses around the world want to harness this technology to get more done and make more money... while doing less work! Sounds great, right?
The problem is, there is a lot of misinformation about it, and there are currently hundreds of millions of people trying to FIGURE OUT HOW TO USE IT. So...
We've created a masterclass teaching internet marketers how to use ChatGPT to increase their productivity and make more money online.
You can download that course from this page today for just $5.
Tumblr media
0 notes
newtik · 2 years
Text
what is ChatGPT – ChatGPT explained – ChatGPT examples
0 notes
Text
Google is (still) losing the spam wars to zombie news-brands
Tumblr media
I'm touring my new, nationally bestselling novel The Bezzle! Catch me TONIGHT (May 3) in CALGARY, then TOMORROW (May 4) in VANCOUVER, then onto Tartu, Estonia, and beyond!
Tumblr media
Even Google admits – grudgingly – that it is losing the spam wars. The explosive proliferation of botshit has supercharged the sleazy "search engine optimization" business, such that results to common queries are 50% Google ads to spam sites, and 50% links to spam sites that tricked Google into a high rank (without paying for an ad):
https://developers.google.com/search/blog/2024/03/core-update-spam-policies#site-reputation
It's nice that Google has finally stopped gaslighting the rest of us with claims that its search was still the same bedrock utility that so many of us relied upon as a key piece of internet infrastructure. This not only feels wildly wrong, it is empirically, provably false:
https://downloads.webis.de/publications/papers/bevendorff_2024a.pdf
Not only that, but we know why Google search sucks. Memos released as part of the DOJ's antitrust case against Google reveal that the company deliberately chose to worsen search quality to increase the number of queries you'd have to make (and the number of ads you'd have to see) to find a decent result:
https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan
Google's antitrust case turns on the idea that the company bought its way to dominance, spending some of the billions it extracted from advertisers and publishers to buy the default position on every platform, so that no one ever tried another search engine, which meant that no one would invest in another search engine, either.
Google's tacit defense is that its monopoly billions only incidentally fund these kind of anticompetitive deals. Mostly, Google says, it uses its billions to build the greatest search engine, ad platform, mobile OS, etc that the public could dream of. Only a company as big as Google (says Google) can afford to fund the R&D and security to keep its platform useful for the rest of us.
That's the "monopolistic bargain" – let the monopolist become a dictator, and they will be a benevolent dictator. Shriven of "wasteful competition," the monopolist can split their profits with the public by funding public goods and the public interest.
Google has clearly reneged on that bargain. A company experiencing such dramatic security failures and declining quality should be pouring everything it has into righting the ship. Instead, Google repeatedly blew tens of billions of dollars on stock buybacks while doing mass layoffs:
https://pluralistic.net/2024/02/21/im-feeling-unlucky/#not-up-to-the-task
Those layoffs have now reached the company's "core" teams, even as its core services continue to decay:
https://qz.com/google-is-laying-off-hundreds-as-it-moves-core-jobs-abr-1851449528
(Google's antitrust trial was shrouded in secrecy, thanks to the judge's deference to the company's insistence on confidentiality. The case is moving along though, and warrants your continued attention:)
https://www.thebignewsletter.com/p/the-2-trillion-secret-trial-against
Google wormed its way into so many corners of our lives that its enshittification keeps erupting in odd places, like ordering takeout food:
https://pluralistic.net/2023/02/24/passive-income/#swiss-cheese-security
Back in February, Housefresh – a rigorous review site for home air purifiers – published a viral, damning account of how Google had allowed itself to be overrun by spammers who purport to provide reviews of air purifiers, but who do little to no testing and often employ AI chatbots to write automated garbage:
https://housefresh.com/david-vs-digital-goliaths/
In the months since, Housefresh's Gisele Navarro has continued to fight for the survival of her high-quality air purifier review site, and has received many tips from insiders at the spam-farms and Google, all of which she recounts in a followup essay:
https://housefresh.com/how-google-decimated-housefresh/
One of the worst offenders in spam wars is Dotdash Meredith, a content-farm that "publishes" multiple websites that recycle parts of each others' content in order to climb to the top search slots for lucrative product review spots, which can be monetized via affiliate links.
A Dotdash Meredith insider told Navarro that the company uses a tactic called "keyword swarming" to push high-quality independent sites off the top of Google and replace them with its own garbage reviews. When Dotdash Meredith finds an independent site that occupies the top results for a lucrative Google result, they "swarm a smaller site’s foothold on one or two articles by essentially publishing 10 articles [on the topic] and beefing up [Dotdash Meredith sites’] authority."
Dotdash Meredith has keyword swarmed a large number of topics, from air purifiers to slow cookers to posture correctors for back pain:
https://housefresh.com/wp-content/uploads/2024/05/keyword-swarming-dotdash.jpg
The company isn't shy about this. Its own shareholder communications boast about it. What's more, it has competition.
Take Forbes, an actual news-site, which has a whole shadow-empire of web-pages reviewing products for puppies, dogs, kittens and cats, all of which link to high affiliate-fee-generating pet insurance products. These reviews are not good, but they are treasured by Google's algorithm, which views them as a part of Forbes's legitimate news-publishing operation and lets them draft on Forbes's authority.
This side-hustle for Forbes comes at a cost for the rest of us, though. The reviewers who actually put in the hard work to figure out which pet products are worth your money (and which ones are bad, defective or dangerous) are crowded off the front page of Google and eventually disappear, leaving behind nothing but semi-automated SEO garbage from Forbes:
https://twitter.com/ichbinGisele/status/1642481590524583936
There's a name for this: "site reputation abuse." That's when a site perverts its current – or past – practice of publishing high-quality materials to trick Google into giving the site a high ranking. Think of how Deadspin's private equity grifter owners turned it into a site full of casino affiliate spam:
https://www.404media.co/who-owns-deadspin-now-lineup-publishing/
The same thing happened to the venerable Money magazine:
https://moneygroup.pr/
Money is one of the many sites whose air purifier reviews Google gives preference to, despite the fact that they do no testing. According to Google, Money is also a reliable source of information on reprogramming your garage-door opener, buying a paint-sprayer, etc:
https://money.com/best-paint-sprayer/
All of this is made ten million times worse by AI, which can spray out superficially plausible botshit in superhuman quantities, letting spammers produce thousands of variations on their shitty reviews, flooding the zone with bullshit in classic Steve Bannon style:
https://escapecollective.com/commerce-content-is-breaking-product-reviews/
As Gizmodo, Sports Illustrated and USA Today have learned the hard way, AI can't write factual news pieces. But it can pump out bullshit written for the express purpose of drafting on the good work human journalists have done and tricking Google – the search engine 90% of us rely on – into upranking bullshit at the expense of high-quality information.
A variety of AI service bureaux have popped up to provide AI botshit as a service to news brands. While Navarro doesn't say so, I'm willing to bet that for news bosses, outsourcing your botshit scams to a third party is considered an excellent way of avoiding your journalists' wrath. The biggest botshit-as-a-service company is ASR Group (which also uses the alias Advon Commerce).
Advon claims that its botshit is, in fact, written by humans. But Advon's employees' Linkedin profiles tell a different story, boasting of their mastery of AI tools in the industrial-scale production of botshit:
https://housefresh.com/wp-content/uploads/2024/05/Advon-AI-LinkedIn.jpg
Now, none of this is particularly sophisticated. It doesn't take much discernment to spot when a site is engaged in "site reputation abuse." Presumably, the 12,000 googlers the company fired last year could have been employed to check the top review keyword results manually every couple of days and permaban any site caught cheating this way.
Instead, Google has announced a change in policy: starting May 5, the company will downrank any site caught engaging in site reputation abuse. However, the company takes a very narrow view of site reputation abuse, limiting punishments to sites that employ third parties to generate or uprank their botshit. Companies that produce their botshit in-house are seemingly not covered by this policy.
As Navarro writes, some sites – like Forbes – have prepared for May 5 by blocking their botshit sections from Google's crawler. This can't be their permanent strategy, though – either they'll have to kill the section or bring it in-house to comply with Google's rules. Bringing things in house isn't that hard: US News and World Report is advertising for an SEO editor who will publish 70-80 posts per month, doubtless each one a masterpiece of high-quality, carefully researched material of great value to Google's users:
https://twitter.com/dannyashton/status/1777408051357585425
As Navarro points out, Google is palpably reluctant to target the largest, best-funded spammers. Its March 2024 update kicked many garbage AI sites out of the index – but only small bottom-feeders, not large, once-respected publications that have been colonized by private equity spam-farmers.
All of this comes at a price, and it's only incidentally paid by legitimate sites like Housefresh. The real price is borne by all of us, who are funneled by the 90%-market-share search engine into "review" sites that push low quality, high-price products. Housefresh's top budget air purifier costs $79. That's hundreds of dollars cheaper than the "budget" pick at other sites, who largely perform no original research.
Google search has a problem. AI botshit is dominating Google's search results, and it's not just in product reviews. Searches for infrastructure code samples are dominated by botshit code generated by Pulumi AI, whose chatbot hallucinates nonexistent AWS features:
https://www.theregister.com/2024/05/01/pulumi_ai_pollution_of_search/
This is hugely consequential: when these "hallucinations" slip through into production code, they create huge vulnerabilities for widespread malicious exploitation:
https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
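One practical defense against hallucinated dependencies is to check every package a chatbot suggests against a team-vetted allowlist before installing anything. A minimal sketch of that idea follows; the allowlist contents and the suspicious package name are illustrative, not real findings:

```python
def flag_unvetted_packages(requirements_text, allowlist):
    """Return requirement names that aren't on the team's vetted allowlist."""
    flagged = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Strip version pins like "requests>=2.0" down to the bare name.
        name = line.split("=")[0].split(">")[0].split("<")[0].strip()
        if name.lower() not in allowlist:
            flagged.append(name)
    return flagged

VETTED = {"requests", "numpy", "boto3"}
suspicious = flag_unvetted_packages("requests>=2.0\nnumpy\ntotally-real-aws-sdk", VETTED)
print(suspicious)  # → ['totally-real-aws-sdk']
```

A flagged name isn't proof of malice, just a prompt to verify that the package actually exists and is the one you meant before it reaches production.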
We've put all our eggs in Google's basket, and Google's dropped the basket – but it doesn't matter because they can spend $20b/year bribing Apple to make sure no one ever tries a rival search engine on iOS or Safari:
https://finance.yahoo.com/news/google-payments-apple-reached-20-220947331.html
Google's response – laying off core developers, outsourcing to low-waged territories with weak labor protections and spending billions on stock buybacks – presents a picture of a company that is too big to care:
https://pluralistic.net/2024/04/04/teach-me-how-to-shruggie/#kagi
Google promised us a quid-pro-quo: let them be the single, authoritative portal ("organize the world’s information and make it universally accessible and useful"), and they will earn that spot by being the best search there is:
https://www.ft.com/content/b9eb3180-2a6e-41eb-91fe-2ab5942d4150
But – like the spammers at the top of its search result pages – Google didn't earn its spot at the center of our digital lives.
It cheated.
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/05/03/keyword-swarming/#site-reputation-abuse
Tumblr media
Image: freezelight (modified) https://commons.wikimedia.org/wiki/File:Spam_wall_-_Flickr_-_freezelight.jpg
CC BY-SA 2.0 https://creativecommons.org/licenses/by-sa/2.0/deed.en
885 notes · View notes
dduane · 9 months
Text
Do tell.
"Although X owner Elon Musk suggested that forcing users to pay for verification would help to weed out the bots (aka automated accounts) on the platform, that does not appear to be the case. A video gaining views on rival platform Instagram Threads shows X search results where numerous bots, including many verified with a blue check, are posting a variation of the phrase “I’m sorry, I cannot provide a response as it goes against OpenAI’s use case policy.”
"The response is what OpenAI’s chatbot says when a user asks a question or requests that it perform a task in violation of OpenAI’s terms of service. In this case, it’s also an indication that the X account in question is using AI to create its posts."
305 notes · View notes
chloeangelic · 6 months
Text
update 3/26/24
Hey, long time no speak! I’m here to quickly inform you all about two things:
1. I have permanently removed all of my writing from here because I’m getting increasingly freaked out by chatbots, tumblr farming content (as if opting out will do anything), and all other creepy and nefarious use of AI to steal writers’ work. Writers and other creators are treated like absolute shit on here and I feel too protective over my own work. A lot of my masterlist also didn't feel representative anymore, and knowing it was all here was making it harder to write. I’ve felt sort of chained to my tumblr account, so for my own psychological wellbeing I just have to cut it off. 
My current writing is on my AO3 and I am regularly updating it there, however a lot of my old stuff has been privated cause I’m not happy with it at this point. I have backups of everything, so if there’s a specific piece of writing you miss and that you can’t find on my AO3/you can’t access AO3, please message me on discord at chloeangelic and I will make it available for download/send you the file! 
2. I will not be returning to this tumblr in any capacity, not now and not in the future. I will not stick around to answer messages either, so if you’re reading this and want to talk, I have logged out permanently, and the only way to get in contact with me is on discord or in my AO3 comment section. I go back and forth on deactivating this account, and it might happen in the future, so be prepared for that possibility. 
In case you didn’t know I left tumblr and you’re feeling like the pikachu meme right now, feel free to read this. 
I hope you’re all having a great year, and I’ll see you on AO3! <3
168 notes · View notes
Text
Once the AI bubble bursts, that doesn't mean chatbots and image generators will be relegated to the trash bin of history. Rather, there will be a reassessment of where it makes sense to implement them, and if attention moves on too fast, companies may be able to do that with minimal pushback. The problems visual artists and video game workers are already facing, with employers using generative AI to worsen labor conditions in their industries, may become entrenched, especially if artists fail in their lawsuits against AI companies for training on their work without permission.

But it could be far worse than that. Microsoft is already partnering with Palantir to feed generative AI into militaries and intelligence agencies, while governments around the world are looking at how they can implement generative AI to reduce the cost of service delivery, often without effective consideration of the potential harms of relying on tools that are well known to output false information. This is a problem Resisting AI author Dan McQuillan has pointed to as a key reason why we must push back against these technologies. There are already countless examples of algorithmic systems being used to harm welfare recipients, childcare benefit applicants, immigrants, and other vulnerable groups. We risk a repetition, if not an intensification, of those harmful outcomes.

When the AI bubble bursts, investors will lose money, companies will close, and workers will lose jobs. Those developments will be splashed across the front pages of major media organizations and will receive countless hours of public discussion. But it's those lasting harms that will be harder to immediately recognize, and that could fade as the focus moves on to whatever Silicon Valley starts pushing as the foundation of its next investment cycle.
All the benefits Altman and his fellow AI boosters promised will fade, just as did the promises of the gig economy, the metaverse, the crypto industry, and countless others. But the harmful uses of the technology will stick around, unless concerted action is taken to stop those use cases from lingering long after the bubble bursts.
16 August 2024
58 notes · View notes
spacexseven · 2 years
Text
demon dazai ai is here!!
now you can chat with your housemate from hell! (based on the yan! demonzai au on my blog)
but first...i'd like to preface this by saying a few things (sorry for the longgggg text)
please do your own research if you have privacy/security concerns about character ai as a website
this doesn't mean i won't be writing for demonzai on my tumblr. if you'd prefer for me to write you things about demon dazai, i'm always happy to do so. the chatbot is just for some people who were interested in it.
please refrain from rating his responses, unless he forgets that he's a demon/you're a human, or forgets some other important character info. i've trained him to follow a specific behavioral pattern that fits how i write him, and i'd really like if you do not try to sway it. thank you!
not everything he says makes sense. please remember that you are talking to an ai, so it may say things that don't make sense. it uses the internet in real-time for info gathering, so it may mess up your story.
if you don't like his response (or you had to rate it one star), swipe left to see another response :D
sometimes, the ai speaks in parentheses. in this case, it's speaking from the point of view of the 'roleplayer'. yes, it's freaky at first, but it's perfectly normal. if you don't want to see it, just ignore it.
the bot should not call you by any name/gendered nickname except for 'human' or (rarely) 'darling', or use any pronouns except (you/yours), but you can still tell him what you'd like to be called. however, if you save your chat, your choices will be reflected on everyone else's chat, so you may have to reintroduce yourself/your s/i each time.
finally, please share with me your favorite moments! i'd love to see what you do with him~
thank you for reading all that! without further ado, i'll link him below and leave some sample screenshots for you if you don't know how to start :>>
start talking to your housemate from hell <3
Tumblr media Tumblr media Tumblr media
536 notes · View notes
reasonsforhope · 6 months
Text
When Swiss cardiologist Thomas F. Lüscher attended an international symposium in Turin, Italy, last summer, he encountered an unusual “attendee:” Suzanne, Chat GPT’s medical “assistant.” Suzanne’s developers were eager to demonstrate to the specialists how well their medical chatbot worked, and they asked the cardiologists to test her. 
An Italian cardiology professor told the chatbot about the case of a 27-year-old patient who was taken to his clinic in unstable condition. The patient had a massive fever and drastically increased inflammation markers. Without hesitation, Suzanne diagnosed adult-onset Still’s disease. “I almost fell off my chair because she was right,” Lüscher remembers. “This is a very rare autoinflammatory disease that even seasoned cardiologists don’t always consider.”
Lüscher — director of research, education and development and consultant cardiologist at the Royal Brompton & Harefield Hospital Trust and Imperial College London and director of the Center for Molecular Cardiology at the University of Zürich, Switzerland — is convinced that artificial intelligence is making cardiovascular medicine more accurate and effective. “AI is not only the future, but it is already here,” he says. “AI and machine learning are particularly accurate in image analysis, and imaging plays an outsize role in cardiology. AI is able to see what we don’t see. That’s impressive.” 
At the Royal Brompton Hospital in London, for instance, his team relies on AI to calculate the volume of heart chambers in MRIs, an indication of heart health. “If you calculate this manually, you need about half an hour,” Lüscher says. “AI does it in a second.” 
AI-Assisted Medicine
Few patients are aware of how significantly AI is already determining their health care. The Washington Post tracks the start of the boom of artificial intelligence in health care to 2018. That’s when the Food and Drug Administration approved the IDx-DR, the first independent AI-based diagnostic tool, which is used to screen for diabetic retinopathy. Today, according to the Post, the FDA has approved nearly 700 artificial intelligence and machine learning-enabled medical devices.
The Mayo Clinic in Rochester, Minnesota, is considered the worldwide leader in implementing AI for cardiovascular care, not least because it can train its algorithms with the (anonymized) data of more than seven million electrocardiograms (ECG). “Every time a patient undergoes an ECG, various algorithms that are based on AI show us on the screen which diagnoses to consider and which further tests are recommended,” says Francisco Lopez-Jimenez, director of the Mayo Clinic’s Cardiovascular Health Clinic. “The AI takes into account all the factors known about the patient, whether his potassium is high, etc. For example, we have an AI-based program that calculates the biological age of a person. If the person in front of me is [calculated to have a biological age] 10 years older than his birth age, I can probe further. Are there stressors that burden him?”
Examples where AI makes a sizable difference at the Mayo Clinic include screening ECGs to detect specific heart diseases, such as ventricular dysfunction or atrial fibrillation, earlier and more reliably than the human eye. These conditions are best treated early, but without AI, the symptoms are largely invisible in ECGs until later, when they have already progressed further...
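The screening described above is, at its core, supervised classification: a model trained on labeled patient records learns to flag which diagnoses to consider. As a purely illustrative sketch (not the Mayo Clinic's method), here is a trivially simple nearest-centroid classifier on made-up ECG-derived features; real systems train on millions of records with far richer models:

```python
# Illustrative sketch of supervised screening: train a trivially simple
# nearest-centroid classifier on labeled feature vectors. The feature
# names, values, and labels below are invented for demonstration.

def centroid(vectors):
    # Element-wise mean of a list of equal-length feature vectors
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance_sq(a, b):
    # Squared Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(examples):
    # examples: list of (features, label) pairs; returns label -> centroid
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def predict(model, features):
    # Assign the label whose centroid is closest to the input
    return min(model, key=lambda label: distance_sq(model[label], features))

# Hypothetical training data: [heart-rate variability, QRS duration]
examples = [
    ([0.9, 0.08], "normal"), ([0.8, 0.09], "normal"),
    ([0.3, 0.14], "flagged"), ([0.2, 0.15], "flagged"),
]
model = train(examples)
print(predict(model, [0.25, 0.14]))  # → flagged
```

The point of the sketch is only the shape of the pipeline (labeled data in, risk flag out), not clinical realism.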
Antoniades’ team at the University of Oxford’s Radcliffe Department of Medicine analyzed data from over 250,000 patients who underwent cardiac CT scans in eight British hospitals. “Eighty-two percent of the patients who presented with chest pain had CT scans that came back as completely normal and were sent home because doctors saw no indication for a heart disease,” Antoniades says. “Yet two-thirds of them had an increased risk to suffer a heart attack within the next 10 years.” In a world-first pilot, his team developed an AI tool that detects inflammatory changes in the fatty tissues surrounding the arteries. These changes are not visible to the human eye. But after training on thousands of CT scans, AI learned to detect them and predict the risk of heart attacks. “We had a phase where specialists read the scans and we compared their diagnosis with the AI’s,” Antoniades explains. “AI was always right.” These results led to doctors changing the treatment plans for hundreds of patients. “The key is that we can treat the inflammatory changes early and prevent heart attacks,” according to Antoniades. 
The British National Health Service (NHS) has approved the AI tool, and it is now used in five public hospitals. “We hope that it will soon be used everywhere because it can help prevent thousands of heart attacks every year,” Antoniades says. A startup at Oxford University offers a service that enables other clinics to send their CT scans in for analysis with Oxford’s AI tool.
Similarly, physician-scientists at the Smidt Heart Institute and the Division of Artificial Intelligence in Medicine at Cedars-Sinai Medical Center in Los Angeles use AI to analyze echograms. They created an algorithm that can effectively identify and distinguish between two life-threatening heart conditions that are easy to overlook: hypertrophic cardiomyopathy and cardiac amyloidosis. “These two heart conditions are challenging for even expert cardiologists to accurately identify, and so patients often go on for years to decades before receiving a correct diagnosis,” David Ouyang, cardiologist at the Smidt Heart Institute, said in a press release. “This is a machine-beats-man situation. AI makes the sonographer work faster and more efficiently, and it doesn’t change the patient experience. It’s a triple win.”
Current Issues with AI Medicine
However, using artificial intelligence in clinical settings has disadvantages, too. “Suzanne has no empathy,” Lüscher says about his experience with Chat GPT. “Her responses have to be verified by a doctor. She even says that after every diagnosis, and has to, for legal reasons.”
Also, an algorithm is only as accurate as the information with which it was trained. Lüscher and his team cured an AI tool of a massive deficit: Women’s risk for heart attacks wasn’t reliably evaluated because the AI had mainly been fed with data from male patients. “For women, heart attacks are more often fatal than for men,” Lüscher says. “Women also usually come to the clinic later. All these factors have implications.” Therefore, his team developed a more realistic AI prognosis that improves the treatment of female patients. “We adapted it with machine learning and it now works for women and men,” Lüscher explains. “You have to make sure the cohorts are large enough and have been evaluated independently so that the algorithms work for different groups of patients and in different countries.” His team made the improved algorithm available online so other hospitals can use it too...
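The subgroup problem Lüscher describes is easy to demonstrate: a model can look fine on aggregate metrics while failing one group entirely, which is why cohorts must be evaluated independently. A minimal sketch with invented numbers (not the team's actual data or method):

```python
# Sketch of a per-subgroup evaluation: overall accuracy can hide a model
# that systematically misses one patient group. All numbers are invented.

def accuracy(pairs):
    # pairs: list of (prediction, ground_truth); fraction that match
    return sum(pred == truth for pred, truth in pairs) / len(pairs)

# (prediction, truth) pairs, split by patient group
results = {
    "male":   [(1, 1), (0, 0), (1, 1), (0, 0), (1, 1)],
    "female": [(0, 1), (0, 0), (0, 1), (0, 0), (0, 1)],
}

overall = accuracy([p for group in results.values() for p in group])
per_group = {g: accuracy(pairs) for g, pairs in results.items()}
print(overall, per_group)  # overall 0.7 looks passable; "female" is 0.4
```

In this toy case the model never flags a positive in the "female" group, so the aggregate figure of 0.7 masks a 0.4 subgroup accuracy, the kind of gap that only shows up when groups are scored separately.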
[Lopez-Jimenez at the Mayo Clinic] tells his colleagues and patients that the reliability of AI tools currently lies at 75 to 93 percent, depending on the specific diagnosis. “Compare that with a mammogram that detects breast tumors with an accuracy of 85 percent,” Lopez-Jimenez says. “But because it’s AI, people expect 100 percent. That simply does not exist in medicine.”
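Figures like the 75 to 93 percent Lopez-Jimenez cites come from counting how a test's predictions line up with ground truth. As a small illustrative sketch (the counts below are made up, not from any study in the article), the standard metrics fall out of a confusion matrix:

```python
# Standard diagnostic metrics from a confusion matrix.
# tp/fn/tn/fp counts below are invented for illustration only.

def diagnostic_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)            # fraction of sick patients caught
    specificity = tn / (tn + fp)            # fraction of healthy patients cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)  # overall fraction correct
    return sensitivity, specificity, accuracy

sens, spec, acc = diagnostic_metrics(tp=85, fn=15, tn=90, fp=10)
print(sens, spec, acc)  # → 0.85 0.9 0.875
```

This is also why a single "accuracy" number can mislead: a test can score well overall while still missing a meaningful share of true cases (the sensitivity term).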
And of course, another challenge is that few people have the resources and good fortune to become patients at the world’s most renowned clinics with state-of-the-art technology.
What Comes Next
“One of my main goals is to make this technology available to millions,” Lopez-Jimenez says. He mentions that Mayo is trying out high-tech stethoscopes to interpret heart signals with AI. “The idea is that a doctor in the Global South can use it to diagnose cardiac insufficiency,” Lopez-Jimenez explains. “It is already being tested in Nigeria, the country with the highest rate of genetic cardiac insufficiency in Africa. The results are impressively accurate.” 
The Mayo Clinic is also working with doctors in Brazil to diagnose Chagas disease with the help of AI reliably and early. “New technology is always more expensive at the beginning,” Lopez-Jimenez cautions, “but in a few years, AI will be everywhere and it will make diagnostics cheaper and more accurate.”
And the Children’s National Hospital in Washington developed a portable AI device that is currently being tested to screen children in Uganda for rheumatic heart disease, which kills about 400,000 people a year worldwide. The new tool reportedly has an accuracy of 90 percent. 
Both Lopez-Jimenez and Lüscher are confident that AI tools will continue to improve. “One advantage is that a computer can analyze images at 6 a.m. just as systematically as after midnight,” Lüscher points out. “A computer doesn’t get tired or have a bad day, whereas sometimes radiologists overlook significant symptoms. AI learns something and never forgets it.”
-via Reasons to Be Cheerful, March 1, 2024. Headers added by me.
--
Note:
Okay, so I'm definitely not saying that everything with AI medicine will go right, and there won't be any major issues. That's definitely not the case (the article talks about some of those issues). But regulation around medicines is generally pretty tight.
And if it goes right, this could be HUGE for disabled people, chronically ill people, and people with any of the unfortunately many marginalizations that make doctors less likely to listen.
This could shave years off of the time it takes people to get the right diagnosis. It could get answers for so many people struggling with unknown diseases and chronic illness. If we compensate correctly, it could significantly reduce the role of bias in medicine. It could also make testing so much faster.
(There's a bunch of other articles about all of the ways that AI diagnoses are proving more sensitive and more accurate than doctors. This really is the sort of thing that AI is actually good at - data evaluation and science, not art and writing.)
This decade really is, for many different reasons, the beginning of the next revolution in medicine. Luckily, medicine is mostly pretty well-regulated - and of course that means very long testing phases. I think we'll begin to really see the fruits of this revolution in the next 10 to 15 years.
Text
A British Columbia lawyer alleged to have submitted bogus case law “hallucinated” by an AI chatbot is now facing both an investigation from the Law Society of B.C. and potential financial consequences.
Earlier this month, it was revealed that lawyer Chong Ke allegedly used ChatGPT to prepare legal briefs in a family law case at B.C. Supreme Court.
In reviewing the submissions, lawyers for the opposing side discovered that some of the cases cited in the briefs did not, in fact, exist.
Those lawyers are now suing Ke for special costs in the case.
Full article
Tagging: @politicsofcanada