#Implement ChatBot
xceltectechnology · 11 months ago
How to Implement an Auto Chatbot with Product Suggestions for Your Website
In the competitive world of e-commerce, engaging visitors and delivering personalized shopping experiences are crucial. An effective way to achieve both is to integrate an auto chatbot that not only interacts with customers but also provides tailored product suggestions. This blog will guide you through the steps to implement a chatbot that can improve the user experience and boost your sales.
Why You Need an Auto Chatbot with Product Suggestions
Before we dive into the technical details, let’s understand why an auto chatbot is a valuable addition to your website:
24/7 Availability: Chatbots can assist customers at any time of the day, ensuring your store is always open for business.
Personalized Experience: By analyzing user data, chatbots can recommend products that are likely to interest each visitor.
Increased Engagement: Chatbots can initiate conversations and keep visitors engaged, reducing bounce rates.
Efficiency: They handle multiple customer interactions simultaneously, providing quick responses and improving user satisfaction.
Step-by-Step Guide to Implementing Your Chatbot
1. Define Your Goals and Use Cases
Start by defining what you want to achieve with your chatbot. Are you looking to provide customer support, guide users through the shopping process, or promote products? Clearly stating your goals will shape the functionality of your chatbot.
2. Choose the Right Platform
There are several chatbot platforms available, each with its own set of features. Some popular options include:
Dialogflow: Google’s chatbot building platform, ideal for those who want to leverage Google’s machine learning capabilities.
Botpress: An open-source platform that offers extensive customization.
ManyChat: Focused on integrating with social media platforms like Facebook.
Choose a platform that matches your technical expertise and the specific needs of your website.
3. Design Your Chatbot’s Flow
Map out the conversation flow, including greetings, FAQs, and product-suggestion logic. A well-designed flow ensures a smooth user experience. Here are some key components to consider:
Welcome Message: Greet users warmly and introduce your chatbot’s capabilities.
User Queries: Define how the bot will handle common questions and requests.
Product Recommendations: Decide how the bot will present product suggestions based on user inputs or browsing behavior.
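The components above can be sketched as a tiny state machine. The node names, messages, and options below are invented placeholders for illustration, not tied to any particular chatbot platform:

```python
# Minimal conversation-flow sketch: each node holds a message and the
# user choices that lead to other nodes. All names and copy are invented.
FLOW = {
    "welcome": {
        "message": "Hi! I can answer questions or suggest products. What would you like?",
        "options": {"faq": "faq", "shop": "recommend"},
    },
    "faq": {
        "message": "Ask me anything about shipping, returns, or payment.",
        "options": {},
    },
    "recommend": {
        "message": "Tell me what you're shopping for and I'll suggest something.",
        "options": {},
    },
}

def respond(node, choice=None):
    """Return the bot's message, following the user's choice when one is given.

    Unknown choices keep the conversation on the current node instead of failing.
    """
    if choice is not None:
        node = FLOW[node]["options"].get(choice, node)
    return FLOW[node]["message"]
```

Real platforms express the same idea with visual flow builders, but prototyping the flow as data first makes it easy to review and test before committing to a tool.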
4. Integrate Product Recommendation Logic
To suggest products effectively, your chatbot needs access to your product database. Here’s a basic approach:
Collect User Data: Gather information such as browsing history, previous purchases, and preferences.
Analyze Behavior: Use this data to identify patterns and predict user interests.
Suggest Products: Implement logic that matches user data with relevant products. For example, if a user often browses electronics, your chatbot can suggest the latest devices and deals in that category.
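A minimal sketch of that matching step, assuming a toy in-memory catalog (the categories and product names are illustrative only; a real store would query its product database and use a proper recommender):

```python
from collections import Counter

# Hypothetical in-memory catalog; a real store would query its product database.
CATALOG = {
    "electronics": ["wireless earbuds", "smartwatch", "4K webcam"],
    "books": ["sci-fi novel", "cookbook"],
}

def suggest(browsing_history, catalog=CATALOG, top_n=2):
    """Suggest products from the category the user has browsed most often."""
    if not browsing_history:
        return []  # nothing known about the user yet
    top_category, _ = Counter(browsing_history).most_common(1)[0]
    return catalog.get(top_category, [])[:top_n]
```

Even this crude frequency count captures the core idea: turn observed behavior into a category signal, then rank products within that category.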
5. Build and Train Your Chatbot
Using your chosen platform, start building your chatbot. Train it to understand user inputs and respond effectively. For example:
Intent Recognition: Teach your bot to recognize different user intents, like “I need a new laptop” or “Show me discounts.”
Entity Extraction: Enable it to identify and extract specific information, such as product names or price ranges.
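For illustration only, intent recognition and entity extraction can be approximated with keyword and regex matching; platforms like Dialogflow learn these from training phrases instead. All keywords and patterns below are invented:

```python
import re

# Toy keyword lists standing in for trained intent models; order matters,
# since the first matching intent wins.
INTENTS = {
    "ask_discount": ["discount", "deal", "sale"],
    "search_product": ["need", "looking for", "show me"],
}

def recognize_intent(text):
    """Return the first intent whose keyword appears in the message."""
    lowered = text.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in lowered for keyword in keywords):
            return intent
    return "fallback"

def extract_price_limit(text):
    """Extract an 'under $X' price cap from the message, if present."""
    match = re.search(r"under \$?(\d+)", text.lower())
    return int(match.group(1)) if match else None
```

Keyword matching breaks down quickly on real user language, which is exactly why trained intent models are worth the setup effort — but a sketch like this is useful for defining which intents and entities your bot must handle.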
6. Test and Optimize
Before going live, thoroughly test your chatbot to ensure it functions correctly. Run it through a variety of scenarios to uncover bugs and identify areas for improvement. Gather feedback and make the necessary adjustments.
7. Launch and Monitor
Once you’re satisfied with the testing phase, launch your chatbot on your website. Monitor its performance using analytics tools to track user engagement, satisfaction, and conversion rates. Regularly update and refine your chatbot to keep it effective and relevant.
Best Practices for a Successful Chatbot Implementation
Keep It Simple: Don’t overwhelm users with too many options. Keep interactions easy to navigate.
Provide Value: Ensure your chatbot adds value by offering useful information and personalized suggestions.
Continuously Improve: Use user feedback and data analytics to improve your chatbot’s performance.
Conclusion
XcelTec is a leading chatbot app development company. Implementing an auto chatbot with product suggestions can greatly improve your website’s user experience and increase sales. By following these steps, you can create a chatbot that not only engages visitors but also helps them find the products they desire. Start today and watch your e-commerce site succeed!
dahlya-honey · 9 months ago
sigh. i am so tired of looking at language learning stuff and seeing ai slop everywhere. you might as well be learning from a monkey pulling words out of a hat
hanasatoblogs · 5 months ago
Monitoring these metrics will help you understand how effective your chatbot is at serving your customers and where improvements can be made. Successful chatbot implementation is not a one-time effort; it necessitates continuous monitoring, evaluation, and optimization. By vigilantly tracking the key metrics discussed in this blog, you can steer your chatbot towards success, elevate customer satisfaction, and enhance overall operational efficiency.
projectchampionz · 10 months ago
DESIGN AND IMPLEMENTATION OF CHATBOT FOR STUDENT INFORMATION SYSTEM USING MULTILAYER PERCEPTRON NEURAL NETWORK ALGORITHM
Abstract: Nowadays humans cannot be separated from technology because it has played a great role in human lives. With the development of technology, many things could be easier to do. One of the technologies that can make human lives easier is a chatbot. Chatbot is a digital…
japanbizinsider · 2 years ago
beaft · 7 months ago
why are ai assistant chatbots now being universally implemented when they are all so utterly fucking useless. every conversation i have ever had with one has gone like this
me: (requests help with a specific issue)
chatbot: Great! Have you tried any of the resources on our website? (sends me a link to the help page)
me: that was actually the first thing i looked at and it didn't answer my question
chatbot: Aww, I'm sorry those resources didn't help! Would you like to speak with someone from our customer service team?
me: yeah, please, that would be great
(10 minutes of buffering time)
chatbot: It looks like none of our agents are available right now. Sorry about that! Please try again later, and in the meantime, feel free to check out some of our other resources :) (sends me the help page again)
astra-ravana · 4 months ago
Technomancy: The Fusion Of Magick And Technology
Technomancy is a modern magickal practice that blends traditional occultism with technology, treating digital and electronic tools as conduits for energy, intent, and manifestation. It views computers, networks, and even AI as extensions of magickal workings, enabling practitioners to weave spells, conduct divination, and manipulate digital reality through intention and programming.
Core Principles of Technomancy
• Energy in Technology – Just as crystals and herbs carry energy, so do electronic devices, circuits, and digital spaces.
• Code as Sigils – Programming languages can function as modern sigils, embedding intent into digital systems.
• Information as Magick – Data, algorithms, and network manipulation serve as powerful tools for shaping reality.
• Cyber-Spiritual Connection – The internet can act as an astral realm, a collective unconscious where digital entities, egregores, and thought-forms exist.
Technomantic Tools & Practices
Here are some methods commonly utilized in technomancy. Keep in mind, however, that like the internet itself, technomancy is full of untapped potential and mystery. Take the time to really explore the possibilities.
Digital Sigil Crafting
• Instead of drawing sigils on paper, create them using design software or ASCII art.
• Hide them in code, encrypt them in images, or upload them onto decentralized networks for long-term energy storage.
• Activate them by sharing online, embedding them in file metadata, or charging them with intention.
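As a playful sketch of digital sigil crafting, a phrase of intent can be condensed into a small, repeatable ASCII glyph; the hash-to-glyph mapping here is one arbitrary choice among many:

```python
import hashlib

GLYPHS = " .:*#"  # arbitrary palette, lightest to densest

def ascii_sigil(intent, size=6):
    """Condense a phrase of intent into a small, repeatable ASCII glyph."""
    digest = hashlib.sha256(intent.encode("utf-8")).digest()
    rows = []
    for r in range(size):
        row = "".join(
            GLYPHS[digest[(r * size + c) % len(digest)] % len(GLYPHS)]
            for c in range(size)
        )
        rows.append(row)
    return "\n".join(rows)
```

Because the glyph is derived from a hash, the same phrase always yields the same pattern — the sigil can be regenerated, hidden in files, or shared without ever storing the phrase itself.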
Algorithmic Spellcasting
• Use hashtags and search engine manipulation to spread energy and intent.
• Program bots or scripts that perform repetitive, symbolic tasks in alignment with your goals.
• Employ AI as a magickal assistant to generate sigils, divine meaning, or create thought-forms.
Digital Divination
• Utilize random number generators, AI chatbots, or procedural algorithms for prophecy and guidance.
• Perform digital bibliomancy by using search engines, shuffle functions, or Wikipedia’s “random article” feature.
• Use tarot or rune apps, but enhance them with personal energy by consecrating your device.
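A toy example of digital divination along these lines, using Python's random module as the "shuffle" (the omens list is a placeholder — substitute your own deck, runes, or article titles):

```python
import random

# A tiny illustrative "deck"; swap in your own cards, runes, or page titles.
OMENS = ["the Tower", "the Star", "the Wheel", "the Moon", "the Sun"]

def draw_omens(count=3, seed=None):
    """Draw distinct omens; pass a seed to make a reading repeatable."""
    rng = random.Random(seed)  # seeded generator so a reading can be revisited
    return rng.sample(OMENS, k=count)
```

Passing a meaningful seed (a date, a name, a question reduced to a number) makes a reading reproducible, which some practitioners prefer to pure chance.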
Technomantic Servitors & Egregores
• Create digital spirits, also called cyber servitors, to automate tasks, offer guidance, or serve as protectors.
• House them in AI chatbots, coded programs, or persistent internet entities like Twitter bots.
• Feed them with interactions, data input, or periodic updates to keep them strong.
The Internet as an Astral Plane
• Consider forums, wikis, and hidden parts of the web as realms where thought-forms and entities reside.
• Use VR and AR to create sacred spaces, temples, or digital altars.
• Engage in online rituals with other practitioners, synchronizing intent across the world.
Video-game Mechanics & Design
• Use in-game spells, rituals, and sigils that reflect real-world magickal practices.
• Implement a lunar cycle or planetary influences that affect gameplay (e.g., stronger spells during a Full Moon).
• Include divination tools like tarot cards, runes, or pendulums that give randomized yet meaningful responses.
Narrative & World-Building
• Create lore based on historical and modern magickal traditions, including witches, covens, and spirits.
• Include moral and ethical decisions related to magic use, reinforcing themes of balance and intent.
• Introduce NPCs or AI-guided entities that act as guides, mentors, or deities.
Virtual Rituals & Online Covens
• Design multiplayer or single-player rituals where players can collaborate in spellcasting.
• Implement altars or digital sacred spaces where users can meditate, leave offerings, or interact with spirits.
• Create augmented reality (AR) or virtual reality (VR) experiences that mimic real-world magickal practices.
Advanced Technomancy
The fusion of technology and magick is inevitable because both are fundamentally about shaping reality through will and intent. As humanity advances, our tools evolve alongside our spiritual practices, creating new ways to harness energy, manifest desires, and interact with unseen forces. Technology expands the reach and power of magick, while magick brings intention and meaning to the rapidly evolving digital landscape. As virtual reality, AI, and quantum computing continue to develop, the boundaries between the mystical and the technological will blur even further, proving that magick is not antiquated—it is adaptive, limitless, and inherently woven into human progress.
Cybersecurity & Warding
• Protect your digital presence as you would your home: use firewalls, encryption, and protective sigils in file metadata.
• Employ mirror spells in code to reflect negative energy or hacking attempts.
• Set up automated alerts as magickal wards, detecting and warning against digital threats.
Quantum & Chaos Magic in Technomancy
• Use quantum randomness (like random.org) in divination for pure chance-based outcomes.
• Implement chaos magick principles by using memes, viral content, or trend manipulation to manifest desired changes.
AI & Machine Learning as Oracles
• Use AI chatbots (e.g., GPT-based tools) as divination tools, asking for symbolic or metaphorical insights.
• Train AI models on occult texts to create personalized grimoires or channeled knowledge.
• Invoke "digital deities" formed from collective online energies, memes, or data streams.
Ethical Considerations in Technomancy
• Be mindful of digital karma—what you send out into the internet has a way of coming back.
• Respect privacy and ethical hacking principles; manipulation should align with your moral code.
• Use technomancy responsibly, balancing technological integration with real-world spiritual grounding.
As technology evolves, so will technomancy. With AI, VR, and blockchain shaping new realities, magick continues to find expression in digital spaces. Whether you are coding spells, summoning cyber servitors, or using algorithms to divine the future, technomancy offers limitless possibilities for modern witches, occultists, and digital mystics alike.
"Magick is technology we have yet to fully understand—why not merge the two?"
argumate · 2 months ago
This is bad policy, executed thoughtlessly. But it is worth thinking through exactly *why* it is bad.
The American people’s expected tax burden has gone up by ~2% of national income over the past two months.
Like all sharp budget tightenings, this will reduce living standards—and these tax increases are targeted such that the poor will suffer disproportionately. The tax increases are also sufficiently complex and arbitrary that they will create substantial costs for businesses, reducing productivity and exacerbating the direct hit to incomes.
Perhaps most importantly, the actual announcement and implementation of these tax increases has made the incompetence and thoughtlessness of this administration even more obvious. The nonsensical “reciprocal” tariff rates published on April 2 were, as best as anyone can tell, generated by a chatbot. Officials repeatedly lied about how the rates were calculated, claiming that each economy’s “tariff and non tariff barriers” policies were quantified individually, when all they did was use a simple-minded formula based on bilateral trade balances in goods with the U.S., which imply nothing about anything.² It is becoming increasingly difficult for the administration’s apologists to claim that there is some grand strategy here, or a secret plan, or a complex design that outside observers are simply failing to understand.
Traders have responded by placing a “moron risk premium”—to borrow a term from the U.K.’s minibudget misadventure—on U.S. assets, as well as marking down global growth forecasts. That helps explain why the U.S. dollar fell in response to the tariffs along with stocks, when standard theory (and prior experience) had suggested that an appreciating dollar would offset some of the impact.
a moron risk premium oh boy
xceltectechnology · 11 months ago
Implementing a chatbot with product suggestions can help you provide personalized advice to your customers and increase sales.
For more information: https://www.xceltec.com/ecommerce/how-can-i-implement-chat-bot-auto-chatbot-with-product-suggestions/
probablyasocialecologist · 10 months ago
Once the AI bubble bursts, that doesn’t mean chatbots and image generators will be relegated to the trash bin of history. Rather, there will be a reassessment of where it makes sense to implement them, and if attention moves on too fast, they may be able to do that with minimal pushback. The challenge visual artists and video game workers are already facing, with employers using generative AI to worsen labor conditions in their industries, may become entrenched, especially if artists fail in their lawsuits against AI companies for training on their work without permission.
But it could be far worse than that. Microsoft is already partnering with Palantir to feed generative AI into militaries and intelligence agencies, while governments around the world are looking at how they can implement generative AI to reduce the cost of service delivery, often without effective consideration of the potential harms of relying on tools that are well known to output false information. This is a problem Resisting AI author Dan McQuillan has pointed to as a key reason why we must push back against these technologies. There are already countless examples of algorithmic systems being used to harm welfare recipients, childcare benefit applicants, immigrants, and other vulnerable groups. We risk a repetition, if not an intensification, of those harmful outcomes.
When the AI bubble bursts, investors will lose money, companies will close, and workers will lose jobs. Those developments will be splashed across the front pages of major media organizations and will receive countless hours of public discussion. But it’s those lasting harms that will be harder to immediately recognize, and that could fade as the focus moves on to whatever Silicon Valley starts pushing as the foundation of its next investment cycle.
All the benefits Altman and his fellow AI boosters promised will fade, just as did the promises of the gig economy, the metaverse, the crypto industry, and countless others. But the harmful uses of the technology will stick around, unless concerted action is taken to stop those use cases from lingering long after the bubble bursts.
16 August 2024
contemplatingoutlander · 10 months ago
It is disturbing that Musk's AI chatbot is spreading false information about the 2024 election. "Free speech" should not include disinformation. We cannot survive as a nation if millions of people live in an alternative, false reality based on disinformation and misinformation spread by unscrupulous parties. The above link is from the Internet Archive, so anyone can read the entire article. Below are some excerpts:
Five secretaries of state plan to send an open letter to billionaire Elon Musk on Monday, urging him to “immediately implement changes” to X’s AI chatbot Grok, after it shared with millions of users false information suggesting that Kamala Harris was not eligible to appear on the 2024 presidential ballot. The letter, spearheaded by Minnesota Secretary of State Steve Simon and signed by his counterparts Al Schmidt of Pennsylvania, Steve Hobbs of Washington, Jocelyn Benson of Michigan and Maggie Toulouse Oliver of New Mexico, urges Musk to “immediately implement changes to X’s AI search assistant, Grok, to ensure voters have accurate information in this critical election year.”
[...]
The secretaries cited a post from Grok that circulated after Biden stepped out of the race: “The ballot deadline has passed for several states for the 2024 election,” the post read, naming nine states: Alabama, Indiana, Michigan, Minnesota, New Mexico, Ohio, Pennsylvania, Texas and Washington. Had the deadlines passed in those states, the vice president would not have been able to replace Biden on the ballot. But the information was false. In all nine states, the ballot deadlines have not passed and upcoming ballot deadlines allow for changes to candidates.
[...]
Musk launched Grok last year as an anti-“woke” chatbot, professing to be frustrated by what he says is the liberal bias of ChatGPT. In contrast to AI tools built by OpenAI, Microsoft and Google, which are trained to carefully navigate controversial topics, Musk said he wanted Grok to be unfiltered and “answer spicy questions that are rejected by most other AI systems.”
[...]
Secretaries of state are grappling with an onslaught of AI-driven election misinformation, including deepfakes, ahead of the 2024 election. Simon testified on the subject before the Senate Rules and Administration Committee last year.
[...]
“It’s important that social media companies, especially those with global reach, correct mistakes of their own making — as in the case of the Grok AI chatbot simply getting the rules wrong,” Simon added. “Speaking out now will hopefully reduce the risk that any social media company will decline or delay correction of its own mistakes between now and the November election.” [color emphasis added]
justforbooks · 4 months ago
The DeepSeek panic reveals an AI world ready to blow❗💥
The R1 chatbot has sent the tech world spinning – but this tells us less about China than it does about western neuroses
The arrival of DeepSeek R1, an AI language model built by the Chinese AI lab DeepSeek, has been nothing less than seismic. The system only launched last week, but already the app has shot to the top of download charts, sparked a $1tn (£800bn) sell-off of tech stocks, and elicited apocalyptic commentary in Silicon Valley. The simplest take on R1 is correct: it’s an AI system equal in capability to state-of-the-art US models that was built on a shoestring budget, thus demonstrating Chinese technological prowess. But the big lesson is perhaps not what DeepSeek R1 reveals about China, but about western neuroses surrounding AI.
For AI obsessives, the arrival of R1 was not a total shock. DeepSeek was founded in 2023 as a subsidiary of the Chinese hedge fund High-Flyer, which focuses on data-heavy financial analysis – a field that demands similar skills to top-end AI research. Its subsidiary lab quickly started producing innovative papers, and CEO Liang Wenfeng told interviewers last November that the work was motivated not by profit but “passion and curiosity”.
This approach has paid off, and last December the company launched DeepSeek-V3, a predecessor of R1 with the same appealing qualities of high performance and low cost. Like ChatGPT, V3 and R1 are large language models (LLMs): chatbots that can be put to a huge variety of uses, from copywriting to coding. Leading AI researcher Andrej Karpathy spotted the company’s potential last year, commenting on the launch of V3: “DeepSeek (Chinese AI co) making it look easy today with an open weights release of a frontier-grade LLM trained on a joke of a budget.” (That quoted budget was $6m – hardly pocket change, but orders of magnitude less than the $100m-plus needed to train OpenAI’s GPT-4 in 2023.)
R1’s impact has been far greater for a few different reasons.
First, it’s what’s known as a “chain of thought” model, which means that when you give it a query, it talks itself through the answer: a simple trick that hugely improves response quality. This has not only made R1 directly comparable to OpenAI’s o1 model (another chain of thought system whose performance R1 rivals) but boosted its ability to answer maths and coding queries – problems that AI experts value highly. Also, R1 is much more accessible. Not only is it free to use via the app (as opposed to the $20 a month you have to pay OpenAI to talk to o1) but it’s totally free for developers to download and implement into their businesses. All of this has meant that R1’s performance has been easier to appreciate, just as ChatGPT’s chat interface made existing AI smarts accessible for the first time in 2022.
Second, the method of R1’s creation undermines Silicon Valley’s current approach to AI. The dominant paradigm in the US is to scale up existing models by simply adding more data and more computing power to achieve greater performance. It’s this approach that has led to huge increases in energy demands for the sector and tied tech companies to politicians. The bill for developing AI is so huge that techies now want to leverage state financing and infrastructure, while politicians want to buy their loyalty and be seen supporting growing companies. (See, for example, Trump’s $500bn “Stargate” announcement earlier this month.) R1 overturns the accepted wisdom that scaling is the way forward. The system is thought to be 95% cheaper than OpenAI’s o1 and uses one tenth of the computing power of another comparable LLM, Meta’s Llama 3.1 model. To achieve equivalent performance at a fraction of the budget is what’s truly shocking about R1, and it’s this that has made its launch so impactful. It suggests that US companies are throwing money away and can be beaten by more nimble competitors.
But after these baseline observations, it gets tricky to say exactly what R1 “means” for AI. Some are arguing that R1’s launch shows we’re overvaluing companies like Nvidia, which makes the chips integral to the scaling paradigm. But it’s also possible the opposite is true: that R1 shows AI services will fall in price and demand will, therefore, increase (an economic effect known as Jevons paradox, which Microsoft CEO Satya Nadella helpfully shared a link to on Monday). Similarly, you might argue that R1’s launch shows the failure of US policy to limit Chinese tech development via export controls on chips. But, as AI policy researcher Lennart Heim has argued, export controls take time to work and affect not just AI training but deployment across the economy. So, even if export controls don’t stop the launches of flagships systems like R1, they might still help the US retain its technological lead (if that’s the outcome you want).
All of this is to say that the exact effects of R1’s launch are impossible to predict. There are too many complicating factors and too many unknowns to say what the future holds. However, that hasn’t stopped the tech world and markets reacting in a frenzy, with CEOs panicking, stock prices cratering, and analysts scrambling to revise predictions for the sector. And what this really shows is that the world of AI is febrile, unpredictable and overly reactive. This is a dangerous combination, and if R1 doesn’t cause a destructive meltdown of this system, it’s likely that some future launch will.
Daily inspiration. Discover more photos at Just for Books…?
dysthymia-wet-specimen · 17 days ago
NOT A PRO TRUMP POST I AM ANTI TRUMP AND ANTI CONSERVATIVE. YOU CAN'T BE A FEMINIST AND PRO TRUMP
tumblr, especially US-based feminist blogs, what do you think about this?
TL;DR: Trump signed an act into law banning sexual content from being shared without consent, including AI deepfakes (so it's now illegal to share AI porn of adults made without their consent; it was always illegal for kids), and forcing tech companies to delete/take it down within 2 days of it being reported. (article linked below so you can read for yourself)
i'm very wary of the republican party pushing any legislation right now, especially any that tries to "protect women" (often used as a dogwhistle to conceal republican sexual misconduct as well as target and fearmonger against poc and lgbt people, all while the party is run by a literal rapist), but I can't seem to find any kind of "catch" to this law. It was supported by both parties.
so my questions are:
what are your thoughts on this? is there something i'm missing here? i'm honestly just shocked to hear that there's actually victims being protected under this administration (obviously this isn't something trump invented, just signed, but still. i'm glad that it got passed) but i'm worried there's a catch i'm missing.
again, super important to realize trump shouldn't be congratulated for this nor the conservative party. but this is a big win for protecting sexual violence victims to me. hopefully other countries will implement similar policies.
i myself had an nsfw ai chatbot made of me (just nsfw text, no images, but it did use my real first name and a pic of my face) using a pic of my face as a minor. (i was a minor when it was made and so was the person who made it who was also younger than me, still evil tho) not the same situation ofc, and it got taken down, but i hope this makes it easier for victims of this kind of cyber-sexual harassment to know what they can do about it.
reasonsforhope · 1 year ago
When Swiss cardiologist Thomas F. Lüscher attended an international symposium in Turin, Italy, last summer, he encountered an unusual “attendee”: Suzanne, ChatGPT’s medical “assistant.” Suzanne’s developers were eager to demonstrate to the specialists how well their medical chatbot worked, and they asked the cardiologists to test her.
An Italian cardiology professor told the chatbot about the case of a 27-year-old patient who was taken to his clinic in unstable condition. The patient had a massive fever and drastically increased inflammation markers. Without hesitation, Suzanne diagnosed adult-onset Still’s disease. “I almost fell off my chair because she was right,” Lüscher remembers. “This is a very rare autoinflammatory disease that even seasoned cardiologists don’t always consider.”
Lüscher — director of research, education and development and consultant cardiologist at the Royal Brompton & Harefield Hospital Trust and Imperial College London and director of the Center for Molecular Cardiology at the University of Zürich, Switzerland — is convinced that artificial intelligence is making cardiovascular medicine more accurate and effective. “AI is not only the future, but it is already here,” he says. “AI and machine learning are particularly accurate in image analysis, and imaging plays an outsize role in cardiology. AI is able to see what we don’t see. That’s impressive.” 
At the Royal Brompton Hospital in London, for instance, his team relies on AI to calculate the volume of heart chambers in MRIs, an indication of heart health. “If you calculate this manually, you need about half an hour,” Lüscher says. “AI does it in a second.” 
AI-Assisted Medicine
Few patients are aware of how significantly AI is already determining their health care. The Washington Post tracks the start of the boom of artificial intelligence in health care to 2018. That’s when the Food and Drug Administration approved the IDx-DR, the first independent AI-based diagnostic tool, which is used to screen for diabetic retinopathy. Today, according to the Post, the FDA has approved nearly 700 artificial intelligence and machine learning-enabled medical devices.
The Mayo Clinic in Rochester, Minnesota, is considered the worldwide leader in implementing AI for cardiovascular care, not least because it can train its algorithms with the (anonymized) data of more than seven million electrocardiograms (ECG). “Every time a patient undergoes an ECG, various algorithms that are based on AI show us on the screen which diagnoses to consider and which further tests are recommended,” says Francisco Lopez-Jimenez, director of the Mayo Clinic’s Cardiovascular Health Clinic. “The AI takes into account all the factors known about the patient, whether his potassium is high, etc. For example, we have an AI-based program that calculates the biological age of a person. If the person in front of me is [calculated to have a biological age] 10 years older than his birth age, I can probe further. Are there stressors that burden him?”
Examples where AI makes a sizable difference at the Mayo Clinic include screening ECGs to detect specific heart diseases, such as ventricular dysfunction or atrial fibrillation, earlier and more reliably than the human eye. These conditions are best treated early, but without AI, the symptoms are largely invisible in ECGs until later, when they have already progressed further...
Antioniades’ team at the University of Oxford’s Radcliffe Department of Medicine analyzed data from over 250,000 patients who underwent cardiac CT scans in eight British hospitals. “Eighty-two percent of the patients who presented with chest pain had CT scans that came back as completely normal and were sent home because doctors saw no indication for a heart disease,” Antioniades says. “Yet two-thirds of them had an increased risk to suffer a heart attack within the next 10 years.” In a world-first pilot, his team developed an AI tool that detects inflammatory changes in the fatty tissues surrounding the arteries. These changes are not visible to the human eye. But after training on thousands of CT scans, AI learned to detect them and predict the risk of heart attacks. “We had a phase where specialists read the scans and we compared their diagnosis with the AI’s,” Antioniades explains. “AI was always right.” These results led to doctors changing the treatment plans for hundreds of patients. “The key is that we can treat the inflammatory changes early and prevent heart attacks,” according to Antioniades. 
The British National Health Service (NHS) has approved the AI tool, and it is now used in five public hospitals. “We hope that it will soon be used everywhere because it can help prevent thousands of heart attacks every year,” Antoniades says. A startup at Oxford University offers a service that enables other clinics to send their CT scans in for analysis with Oxford’s AI tool.
Similarly, physician-scientists at the Smidt Heart Institute and the Division of Artificial Intelligence in Medicine at Cedars-Sinai Medical Center in Los Angeles use AI to analyze echocardiograms. They created an algorithm that can effectively identify and distinguish between two life-threatening heart conditions that are easy to overlook: hypertrophic cardiomyopathy and cardiac amyloidosis. “These two heart conditions are challenging for even expert cardiologists to accurately identify, and so patients often go on for years to decades before receiving a correct diagnosis,” David Ouyang, cardiologist at the Smidt Heart Institute, said in a press release. “This is a machine-beats-man situation. AI makes the sonographer work faster and more efficiently, and it doesn’t change the patient experience. It’s a triple win.”
Current Issues with AI Medicine
However, using artificial intelligence in clinical settings has disadvantages, too. “Suzanne has no empathy,” Lüscher says about his experience with ChatGPT. “Her responses have to be verified by a doctor. She even says that after every diagnosis, and has to, for legal reasons.”
Also, an algorithm is only as accurate as the information with which it was trained. Lüscher and his team cured an AI tool of a massive deficit: Women’s risk for heart attacks wasn’t reliably evaluated because the AI had mainly been fed with data from male patients. “For women, heart attacks are more often fatal than for men,” Lüscher says. “Women also usually come to the clinic later. All these factors have implications.” Therefore, his team developed a more realistic AI prognosis that improves the treatment of female patients. “We adapted it with machine learning and it now works for women and men,” Lüscher explains. “You have to make sure the cohorts are large enough and have been evaluated independently so that the algorithms work for different groups of patients and in different countries.” His team made the improved algorithm available online so other hospitals can use it too...
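Lüscher's point about checking that algorithms work for different groups of patients can be made concrete with a small sketch. Nothing below comes from his team's actual code; the metric function, group labels, and toy data are purely illustrative of how one might audit a model's performance per cohort.

```python
# Hypothetical sketch: evaluating a diagnostic model separately per
# patient subgroup, in the spirit of the cohort checks described above.
# All labels and predictions here are toy data, not clinical results.

def subgroup_metrics(y_true, y_pred, groups):
    """Compute sensitivity and specificity for each subgroup."""
    metrics = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        tp = sum(1 for i in idx if y_true[i] == 1 and y_pred[i] == 1)
        fn = sum(1 for i in idx if y_true[i] == 1 and y_pred[i] == 0)
        tn = sum(1 for i in idx if y_true[i] == 0 and y_pred[i] == 0)
        fp = sum(1 for i in idx if y_true[i] == 0 and y_pred[i] == 1)
        metrics[g] = {
            "n": len(idx),
            "sensitivity": tp / (tp + fn) if tp + fn else None,
            "specificity": tn / (tn + fp) if tn + fp else None,
        }
    return metrics

# Toy example: a model that misses half the true positives in group "F"
# while catching all of them in group "M" -- exactly the kind of gap
# that aggregate accuracy alone would hide.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1, 0, 0]
groups = ["M", "M", "M", "M", "F", "F", "F", "F"]
print(subgroup_metrics(y_true, y_pred, groups))
```

An overall accuracy of 7/8 looks fine here, but the per-group breakdown shows sensitivity of 1.0 for one cohort and 0.5 for the other, which is the kind of disparity the retraining described above is meant to fix.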
[Lopez-Jimenez at the Mayo Clinic] tells his colleagues and patients that the reliability of AI tools currently lies at 75 to 93 percent, depending on the specific diagnosis. “Compare that with a mammogram that detects breast tumors with an accuracy of 85 percent,” Lopez-Jimenez says. “But because it’s AI, people expect 100 percent. That simply does not exist in medicine.”
And of course, another challenge is that few people have the resources and good fortune to become patients at the world’s most renowned clinics with state-of-the-art technology.
What Comes Next
“One of my main goals is to make this technology available to millions,” Lopez-Jimenez says. He mentions that Mayo is trying out high-tech stethoscopes to interpret heart signals with AI. “The idea is that a doctor in the Global South can use it to diagnose cardiac insufficiency,” Lopez-Jimenez explains. “It is already being tested in Nigeria, the country with the highest rate of genetic cardiac insufficiency in Africa. The results are impressively accurate.” 
The Mayo Clinic is also working with doctors in Brazil to diagnose Chagas disease with the help of AI reliably and early. “New technology is always more expensive at the beginning,” Lopez-Jimenez cautions, “but in a few years, AI will be everywhere and it will make diagnostics cheaper and more accurate.”
And the Children’s National Hospital in Washington developed a portable AI device that is currently being tested to screen children in Uganda for rheumatic heart disease, which kills about 400,000 people a year worldwide. The new tool reportedly has an accuracy of 90 percent. 
Both Lopez-Jimenez and Lüscher are confident that AI tools will continue to improve. “One advantage is that a computer can analyze images at 6 a.m. just as systematically as after midnight,” Lüscher points out. “A computer doesn’t get tired or have a bad day, whereas sometimes radiologists overlook significant symptoms. AI learns something and never forgets it.”
-via Reasons to Be Cheerful, March 1, 2024. Headers added by me.
--
Note:
Okay, so I'm definitely not saying that everything with AI medicine will go right, or that there won't be any major issues. That's definitely not the case (the article talks about some of those issues). But regulation around medicines is generally pretty tight, and that should catch a lot of problems before these tools reach patients.
And if it goes right, this could be HUGE for disabled people, chronically ill people, and people with any of the unfortunately many marginalizations that make doctors less likely to listen.
This could shave years off of the time it takes people to get the right diagnosis. It could get answers for so many people struggling with unknown diseases and chronic illness. If we compensate for its biases correctly, it could significantly reduce the role of bias in medicine. It could also make testing so much faster.
(There's a bunch of other articles about all of the ways that AI diagnoses are proving more sensitive and more accurate than doctors. This really is the sort of thing that AI is actually good at - data evaluation and science, not art and writing.)
This decade really is, for many different reasons, the beginning of the next revolution in medicine. Luckily, medicine is mostly pretty well-regulated - and of course that means very long testing phases. I think we'll begin to really see the fruits of this revolution in the next 10 to 15 years.
r0bita · 1 year ago
Text
Hahaha, yeah! Google's AI Overview is telling women that it's okay to smoke while pregnant and that we should eat rocks-
People will probably die from this.
If a chatbot could encourage a man to "sacrifice himself to stop climate change" only last year, I can't imagine what other fucked up things could have arisen from this shit.
Misinformation on the internet already caused enough grief for a lot of vulnerable people LONG before AI and chatbots were implemented. A lot of us who grew up dependent on search engines and internet access are going to be screwed by this. AI Overview may very well have already started killing people. And guess what? No amount of intelligence and experience is gonna save you, your aging grandmother, or your impressionable children from this nonsense because even if you don't fall for it, that doesn't mean it won't affect you one way or another.
I think it goes without saying that whoever is responsible for this should immediately start burning in Hell.
qweerhet · 7 months ago
Note
I see some of your pro-ai stuff, and I also see that you're very good at explaining things, so I have some concerns about ai that I'd like for you to explain if it's okay.
I'm very worried about the amount of pollution it takes to make an ai generated image, story, video, etc. I'm also very worried about ai imagery being used to spread disinformation.
Correct me if I'm wrong, but you seem to go by the stance that since we can't un-create ai, we should just try our best to manage. How do we manage things like disinformation and massive amounts of pollution? To be fair, I actually don't know the exact amount of pollution ai generated prompts make.
so, first off: the environmental devastation argument is so incorrect, i would honestly consider it intellectually dishonest. here is a good, thorough writeup of the issue.
the tl;dr is that trying to discuss the "environmental cost of AI" as one monolithic thing is incoherent; AI is an umbrella term that refers to a wide breadth of both machine-learning research and, like, random tech that gets swept up in the umbrella as a marketing gimmick. when most people doompost about the environmental cost of AI, they're discussing image generation programs and chat interfaces in particular, and the fact is that running these programs on your computer eats about as much energy as, like, playing an hour of skyrim. bluntly, i consider this argument intellectually dishonest from anyone who does not consider it equally unethical to play skyrim.
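the skyrim comparison above can be sanity-checked with back-of-envelope arithmetic. to be clear, the wattage and timing figures below are my own rough assumptions (a ~300 W consumer GPU, ~10 seconds per locally generated image), not numbers from the linked writeup:

```python
# Rough sketch of the energy comparison above. The wattage and timing
# figures are illustrative assumptions, not measurements.
GPU_WATTS = 300           # assumed draw of a consumer gaming GPU
image_gen_seconds = 10    # assumed time for one local image generation
skyrim_seconds = 3600     # one hour of play

# Energy in watt-hours: watts * seconds / 3600
image_wh = GPU_WATTS * image_gen_seconds / 3600
gaming_wh = GPU_WATTS * skyrim_seconds / 3600

print(f"one image: ~{image_wh:.2f} Wh, one hour of gaming: ~{gaming_wh:.0f} Wh")
```

under these assumptions, an hour of gaming on the same hardware costs roughly the energy of a few hundred locally generated images, which is the point of the comparison: at the individual-use level these are the same order of hobbyist electricity use.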
the vast majority of the environmental cost of AI such as image generation and chat interfaces comes from implementation by large corporations. this problem isn't tractable by banning the tool; it's a structural problem baked into the existence of massive corporations and the current phase of capitalism we're in. prior to generative AI becoming a worldwide cultural trend, corporations were still responsible for that much environmental devastation, primarily to the end of serving ads--and like. the vast majority of use cases corporations are twisting AI to fit boil down to serving ads. essentially, i think focusing on the tool in this particular case is missing the forest for the trees; as long as you're not addressing the structural incentives for corporations to blindly and mindlessly participate in unsustainable extractivism, they will continue to use any and all tools to participate in such, and i am equally concerned about the energy spent barraging me with literally dozens and dozens of digital animated billboards in a ten-mile radius as i am with the energy spent getting a chatbot to talk up their product to me.
moving onto the disinformation issue: actually, yes, i'm very concerned about that. i don't have any personal opinions on how to manage it, but it's a very strong concern of mine. lowering the skill floor for production of media does, necessarily, mean a lot of bad actors are now capable of producing a much larger glut of malicious content, much faster.
i do think that, historically speaking, similar explosions of disinformation & malicious media haven't been socially managed by banning the tool nor by shaming those who use it for non-malicious purposes--like, when it was adopted for personal use, the internet itself created a sudden huge explosion of spam and disinformation as never before seen in human history, but "get rid of the internet" was never a tractable solution to this, and "shame people you see using the internet" just didn't do anything for the problem.
wish i could be more helpful on solutions for that one--it's just not a field i have any particular knowledge in, but if there's anyone reading who'd like to add on with information about large-scale regulation of the sort of broad field of malicious content i'm discussing, feel free.