#embedded AI
Text
Post-Cloud Careers: Why 'Edge Computing' Specialists Are Earning $250K in 2025
Cloud is Old News. The Edge is the New Frontier. In 2025, the cloud isn’t dead—but it’s no longer the hero of the story. With smart factories, autonomous vehicles, and hyper-personalized AR glasses flooding the market, centralized data centers just can’t keep up. Enter Edge Computing—the decentralized network architecture processing data at the source (your streetlight, toaster, or Tesla). And…
0 notes
Text
Generative Edge AI: The next frontier for AI Tech
Generative Edge AI’s Arrival Information Technology is at an interesting crossroads: computer, smartphone, and tablet hardware are becoming more and more powerful while, at the same time, Generative AI algorithms that previously needed multiple powerful servers to run are becoming more resource-efficient. Famously, China’s DeepSeek purportedly matches or even…
#AI developer philippines#ai development philippines#AI philippines#edge AI#embedded AI#Embedded AI Philippines#Embedded Gen AI#Embedded Generative AI#gen AI philippines#generative AI#generative ai philippines#Generative Edge AI#Private AI Philippines#RAG developer#RAG Developer Philippines#RAG philippines#Retrieval Augmented Generation Philippines
0 notes
Text
Embedded AI Market | Future Growth Aspect Analysis to 2030
The Embedded AI Market was valued at USD 8.9 billion in 2023 and will surpass USD 21.5 billion by 2030, growing at a CAGR of 13.5% during 2024–2030. Embedded AI refers to the integration of artificial intelligence algorithms and processing capabilities directly into hardware devices. Unlike traditional AI, which often requires connection to powerful cloud computing systems, embedded AI operates locally on edge devices such as sensors, microcontrollers, or other hardware components. This enables real-time decision-making and data analysis with reduced latency and power consumption.
This convergence of AI and embedded systems is unlocking new possibilities for smarter, autonomous, and responsive devices that can analyze and act upon data instantly without needing to send it to remote servers for processing.
Market Growth and Key Drivers
The global embedded AI market is expanding rapidly, driven by several key factors:
Advancements in Edge Computing
The proliferation of edge computing has played a pivotal role in the growth of embedded AI. Edge devices with built-in AI capabilities are able to process data locally, reducing the need for constant communication with cloud servers. This is particularly crucial for applications requiring immediate decision-making, such as autonomous vehicles, drones, and industrial automation.
Increased Demand for IoT Devices
The Internet of Things (IoT) is a major contributor to the growth of embedded AI. IoT devices are embedded in everyday objects like smart home appliances, wearable devices, and industrial equipment, gathering data in real time. By integrating AI, these devices can offer predictive maintenance, enhanced user experiences, and optimized operational efficiency.
Read More about Sample Report: https://intentmarketresearch.com/request-sample/embedded-ai-market-3623.html
Enhanced AI Algorithms
AI algorithms have become more efficient and powerful, enabling them to operate in low-power, resource-constrained environments like embedded systems. With advancements in AI frameworks, such as TensorFlow Lite and PyTorch Mobile, the ability to deploy AI models on edge devices is now more accessible than ever.
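One reason frameworks like TensorFlow Lite can fit models onto resource-constrained devices is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats. The sketch below illustrates the general idea of affine int8 quantization with made-up weights; it is a conceptual example, not TensorFlow Lite's actual implementation:

```python
def quantize_int8(xs):
    """Affine int8 quantization: map floats in [min, max] onto [-128, 127].
    Cuts storage 4x vs float32 and enables integer-only arithmetic, the
    core trick behind deploying models on microcontrollers."""
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / 255.0 or 1.0          # real-valued step per integer level
    zero_point = round(-128 - lo / scale)     # integer that represents 0.0
    q = [max(-128, min(127, round(x / scale) + zero_point)) for x in xs]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the int8 representation."""
    return [(v - zero_point) * scale for v in q]

# Round-tripping loses at most about one quantization step of precision.
weights = [-1.0, -0.5, 0.0, 0.25, 1.0]   # hypothetical model weights
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)
```

In real deployments the framework also quantizes activations and chooses scales per tensor or per channel, but the scale/zero-point mapping shown here is the underlying mechanism.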
Industry 4.0 and Smart Manufacturing
Industry 4.0 emphasizes automation, smart factories, and connected machinery. Embedded AI plays a critical role in optimizing processes in manufacturing, such as predictive maintenance, quality control, and energy management. Machines equipped with AI can autonomously monitor their own performance, identify inefficiencies, and make adjustments in real time.
Rise of Autonomous Systems
The push toward autonomous systems, especially in the automotive industry, is driving embedded AI adoption. Self-driving cars, drones, and robots rely on embedded AI to process vast amounts of sensor data, make real-time decisions, and navigate complex environments without human intervention.
Key Sectors Driving Embedded AI Adoption
Automotive Industry
The automotive industry is at the forefront of embedded AI adoption. AI-driven features like autonomous driving, advanced driver-assistance systems (ADAS), and predictive maintenance are all powered by embedded AI systems. These technologies enable cars to analyze real-time road conditions, detect potential hazards, and make instant decisions, enhancing safety and efficiency.
Healthcare
In healthcare, embedded AI is transforming medical devices and diagnostic tools. AI-powered wearables can monitor patients' vital signs in real time, providing healthcare professionals with actionable insights for early diagnosis and personalized treatment plans. Moreover, embedded AI systems in medical imaging devices can assist in detecting diseases like cancer with higher accuracy.
Consumer Electronics
From smart speakers to home security systems, embedded AI is driving innovation in the consumer electronics space. Devices are becoming more intuitive, offering personalized experiences through voice recognition, gesture control, and facial recognition technologies. These AI-driven enhancements have revolutionized how consumers interact with their devices.
Industrial Automation
Embedded AI in industrial automation is enabling smarter, more efficient factories. AI-powered sensors and controllers can optimize production processes, predict equipment failures, and reduce downtime. As industries move toward fully autonomous operations, embedded AI will play an integral role in managing complex industrial systems.
Challenges in the Embedded AI Market
Despite its rapid growth, the embedded AI market faces several challenges. Developing AI algorithms that can operate efficiently in resource-constrained environments is complex. Power consumption, heat generation, and the limited processing capabilities of embedded devices must all be carefully managed. Moreover, there are concerns around data privacy and security, particularly in industries handling sensitive information, such as healthcare and finance.
Another challenge is the lack of standardization across embedded AI platforms, which can hinder widespread adoption. To address this, industry stakeholders are collaborating on developing open standards and frameworks to streamline AI deployment in embedded systems.
Ask for Customization Report: https://intentmarketresearch.com/ask-for-customization/embedded-ai-market-3623.html
The Future of Embedded AI
The future of embedded AI looks promising, with continued advancements in hardware, AI algorithms, and edge computing technologies. As AI capabilities become more efficient and affordable, their integration into everyday devices will become increasingly ubiquitous. In the coming years, we can expect to see even greater adoption of embedded AI in smart cities, autonomous transportation systems, and advanced robotics.
Moreover, the convergence of 5G technology with embedded AI will further accelerate innovation. With faster, more reliable connectivity, edge devices equipped with AI will be able to process and transmit data more efficiently, unlocking new use cases across various industries.
Conclusion
The embedded AI market is revolutionizing industries by enabling devices to think, analyze, and act autonomously. As the demand for smarter, more responsive technology grows, embedded AI will continue to transform sectors such as automotive, healthcare, industrial automation, and consumer electronics. With its ability to provide real-time insights and decision-making at the edge, embedded AI is set to play a central role in the next wave of technological innovation.
0 notes
Text
3 Ways Apple Intelligence And Embedded AI Will Change Daily Life
Apple Intelligence exemplifies the embedded character…

#appleinnovation#apple intelligence#embedded ai#embedded#ai#ai art#apple#iphone#technology#tech#artificial intelligence#phone#change#daily life
0 notes
Text
I love my phone setup so much...you can tell I have a favourite-
I like her a normal amount. Credit under the cut
Syo - @funishment-time
Maid Tokomaru - @jackie-kawaii
Tokomugi - commission by @chomplicated
Toko/Syo transparent - @wolfmanritch
Tokomaru/Syomaru transparents - @princescar (can't wait for the keychain to arrive btw, so excited to put them in my itabag)
Toko lace - @rottenpuppet
Toko/Syomaru sketches - commission by @naegamisupremacy
Top and bottom moodboards - @socte-blue
Middle moodboard - @witchy-hut-silly-closet
Tokomaru Cupid - ych commission by @flandrepudding
Toko transparent - @shegs
Stun gun - @shegs
Misc - I got them through random transparent PNG blogs. I can't link back to those; they've been in my setup for ages.
The lace I use for my Toko/Syo ask blog. I can't quite remember where I found it but I made it transparent myself.
#my post#danganronpa#thh#trigger happy havoc#udg#ultra despair girls#tokomaru#syomaru#toko fukawa#syo fukawa#genocider syo#genocide jack#genocide jill#komaru naegi#phone#phone customization#for the record i dont use character ai often but ive kept it to look back on older chats cause a few years ago i used it for comfort#and also fucking around a little with stupid bots#ANYWAY#i hope the tags work properly in the post and i also hope to those tagged it isnt annoying#i just didnt wanna post this and not have stuff credited#edit: i didnt embed the links :/#but they are embedded now
23 notes
·
View notes
Text
ok more AI thoughts sorry i'm tagging them if you want to filter. we had a team meeting last week where everyone was raving about this workshop they'd been to where they learned how to use generative AI tools to analyze a spreadsheet, create a slide deck, and generate their very own personalized chatbot. one person on our team was like 'yeah our student workers are already using chatGPT to do all of their assignments for us' and another person on our team (whom i really respect!) was like 'that's not really a problem though right? when i onboard my new student workers next year i'm going to have them do a bunch of tasks with AI to start with to show them how to use it more effectively in their work.' and i was just sitting there like aaaaa aaaaaaaaa aaaaaaaaaaaaaa what are we even doing here.
here are some thoughts:
yes AI can automate mundane tasks that would've otherwise taken students longer to complete. however i think it is important to ask: is there value in learning how to do mundane tasks that require sustained focus and careful attention to detail even if you are not that interested in the subject matter? i can think of many times in my life where i have needed to use my capacity to pay attention even when i'm bored to do something carefully and well. and i honed that capacity to pay attention and do careful work through... you guessed it... practicing the skill of paying attention and doing careful work even when i was bored. like of course you can look at the task itself and say "this task is meaningless/boring for the student, so let's teach them how to automate it." but i think in its best form, working closely with students shares some things with parenting, in that you are not just trying to get them through a set list of tasks, you are trying to give them opportunities to develop decision-making frameworks and diverse skillsets that they can transfer to many different areas of their lives. so I think it is really important for us to pause and think about how we are asking them to work and what we are communicating to them when we immediately direct them to AI.
i also think that rushing to automate a boring task cuts out all the stuff that students learn or absorb or encounter through doing the task that are not directly tied to the task itself! to give an example: my coworker was like let's have them use AI to review a bunch of pages on our website to look for outdated info. we'll just give them the info that needs to be updated and then they can essentially use AI to find and replace each thing without having to look at the individual pages. to which i'm like... ok but let's zoom out a little bit further. first of all, as i said above, i think there is value in learning how to read closely and attentively so that you can spot inaccuracies and replace them with accurate information. second of all, i think the exercise of actually reviewing things closely with my own human eyes & brain can be incredibly valuable. often i will go back to old pages i've created or old workshops i've made, and when i look at them with fresh eyes, i'm like ohh wait i bet i can express this idea more clearly, or hang on, i actually think this example is a little more confusing and i've since thought of a better one to illustrate this concept, or whatever. a student worker reading through a bunch of pages to perform the mundane task of updating deadlines might end up spotting all kinds of things that can be improved or changed. LASTLY i think that students end up absorbing a lot about the organization they work for when they have to read through a bunch of webpages looking for information. the vast majority of students don't have a clear understanding of how different units within a complex organization like a university function/interact with each other or how they communicate their work to different stakeholders (students, faculty, administrators, parents, donors, etc.). 
reading closely through a bunch of different pages -- even just to perform a simple task like updating application deadlines -- gives the student a chance to absorb more knowledge about their own unit's inner workings and gain a sense of how its work connects to other parts of the university. and i think there is tremendous value in that, since students who have higher levels of navigational capital are likely to be more aware of the resources/opportunities available to them and savvier at navigating the complex organization of the university.
i think what this boils down to is: our culture encourages us to prize efficiency in the workplace over everything else. we want to optimize optimize optimize. but when we focus obsessively on a single task (and on the fastest, most efficient way to complete it), i think we can really lose sight of the web of potential skills to be learned and knowledge or experience to be gained around the task itself, which may seem "inefficient" or unrelated to the task but can actually be hugely important to the person's growth/learning. idk!!! maybe i am old man shouting at cloud!!! i am sure people said this about computers in the workplace too!!! but also WERE THEY WRONG??? I AM NOT SURE THEY WERE!!!!
and i have not even broached the other part of my concern which is that if we tell students it's totally fine to use AI tools in the workplace to automate tasks they find boring, i think we may be ceding the right to tell them they can't use AI tools in the classroom to automate learning tasks they find boring. like how can we tell them that THIS space (the classroom) is a sacred domain of learning where you must do everything yourself even if you find it slow and frustrating and boring. but as soon as you leave your class and head over to your on-campus job, you are encouraged to use AI to speed up everything you find slow, frustrating, and boring. how can we possibly expect students to make sense of those mixed messages!! and if we are already devaluing education so much by telling students that the sole purpose of pursuing an education is to get a well-paying job, then it's like, why NOT cheat your way through college using the exact same tools you'll be rewarded for using in the future job that you're going to college to get? ughhhhhhHHHHHHHHHHh.
#ai tag#my hope is that kids will eventually come to have the same relationship with genAI as they do with social media#where they understand that it's bad for them. and they wish it would go away.#unfortunately as with social media#i suspect that AI will be so embedded into everything at that point#that it will be extremely hard to turn it off/step away/not engage with it. since everyone else around you is using it all the time#ANYWAY. i am trying to remind myself of one of my old mantras which is#i should be most cautious when i feel most strongly that i am right#because in those moments i am least capable of thinking with nuance#so while i feel very strongly that i am Right about this#i think it is not always sooo productive to rant about it and in doing so solidify my own inner sense of Rightness#to the point where i can't think more openly/expansively and be curious#maybe i should now make myself write a post where i take a different perspective on this topic#to practice being more flexible
15 notes
·
View notes
Text

I’ve finally finished up this ol’ fanart of @stephofromcabin12’s Pjo oc, ✨Stephanie Overbaum of Cabin 12✨
My hand hurts but my brain demanded i finish this up since I’ve been working on it off and on for far WAY too long for my liking
I also just rlly love Steph, she’s so me in some ways
anyways…
🗣️ READ HER FANFIC ‘LITTLE CAMPER’ ON AO3!!! ITS REALLY GOOD AND IF YOU FOR SOME REASON REFUSE TO YOU WILL BE TURNED INTO A LOW GRADE ALCOHOLIC BEVERAGE!!!!🗣️ (I’ve already suffered that fate, as I’m typing this as a lime white claw🥀)
Her art is really good too and she’s really funny and cool please go support her!!!
#pjo fandom#pjo oc#pjo ocs#cabin 12#dionysus cabin#percy jackon and the olympians#percy jackson#my art#artists on tumblr#digital art#drawings#small artist#art#anti ai#fuck generative ai#support small artists#fanfic#pjo fanfic#ao3 fanfic#let this be a reminder to support each other in these crazy times#lime white claw core#LC has been embedded within my brain but I’m ok with it
10 notes
·
View notes
Text
once again i think this site's whole "never touch gen ai it's the devil's work" sentiment is just as stupid and harmful as the people who rely on it to write their grocery lists
#that one post that's making the rounds that's like 'i wouldn't even know where to find chatgpt that's how little i know about it'#like that's actually not good. the whole reason why gen ai is so harmful is because people don't actually know anything about it.#learn how it works and what it does so when it pops up casually in your workspaces you actually know how to handle those situations#and spread knowledge to others about what it can and can't do and why relying on it is a bad thing in the first place#i don't care if you're not tech savvy you can't just walk around like a headless chicken about this anymore#gen ai is becoming too embedded in everyday life. you need to start engaging with and learning about it#or this world is gonna get more and more fucked right before your eyes and you're gonna just continue to put your hands over them like that#solves anything#shut up hanna
7 notes
·
View notes
Note
🎩
Could you put on that dress that you went on your first "date" with Jones in?
Oh my stars, you mean that embarrassing "fake date"?!
*sigh*... Okay, as you wish! At least I know I looked good the whole time I was humiliating myself. 😅

#fortnite tumblrverse#closed#evie's dms#rp ask meme#// link embedded to the thread this appeared in!!#do not use without permission#do not use for ai
11 notes
·
View notes
Text

Hey I don’t think that’s the right video for the article
#It’s one of those AI generated websites apparently lol#All the other videos they embedded were dog related except for this one#Reporting live
123 notes
·
View notes
Text
#ai generated#ai image#ai photography#stable diffusion#architecture#brutalism#brutalist#concrete#60s aesthetic#design#curved#hotel#resort#futuristic#black and white#bw#hillside#disk shape#mountain#highrise#pool#beach#vacation#embedded architecture
7 notes
·
View notes
Text
google is like a clingy ex because you can go non-contact for several months, but on the rare occasions you still manage to encounter it, it repeatedly tries to get you back in a really pathetic and whiny manner. and just doesn't stop.
#like PLEASE WHAT MORE DO YOU WANT FROM ME#i don't use chrome. i don't use gmail. i don't use anything google except the playstore embedded into my phone#but god forbid i want to fly under the radar and use a bit of youtube signed into an almost blank google account#every once in a while#without being bombarded by 'why don't you turn ur history back on??????' 'let us track ur every move again'#'we want to sell your information. use it to train ai. and other things you couldn't possibly imagine or consent to'#'please come back'#'we want you'#anti-google#gothihop speaks
18 notes
·
View notes
Text
ok the second last chapter was the best one because in fairness it actually did address encoded biases in both generative and predictive AI datasets and violent outcomes for oppressed groups in policing, healthcare, resource distribution, etc. and it did make mention of the horrific exploitation of workers in the neocolonial world in cleaning datasets, moderating virtual spaces, tagging, transcribing, and other digital sweatshop work.
but the problem is that the solutions offered are more women in STEM and better regulations... with the end goal always always always of accepting and facilitating the existence and forward motion of these technologies, just with more equitable working conditions and outcomes. early in the book, there's a great aside about how generative AI being used for new forms of image based sexual abuse causes incredible harm to those who experience misogyny and also is gobbling up energy and water at exponential rates to do so. but that environmental angle gets a few sentences and is never spoken of again in favour of boosting a kinder, more inclusive inevitable AI inundated future.
but like — the assumption that these technologies are both inevitable and reformable makes all the solutions offered untethered and idealistic!
profit is mentioned throughout the book, but the system of profit generation isn't mentioned by name once. so the problems of some machine learning systems get attributed to patriarchy and profit as if those two things are separate and ahistorical, instead of products of class society with its most recent expression in capitalism.
but yeah I mean it's not presenting itself as a Marxist analysis of AI and gendered violence so I know I'm asking it to do things it was never setting out to do. but still, it's wild how when you start to criticise technology as having class character it becomes glaring how few critiques of AI, both generative and predictive, are brave enough to actually state the obvious conclusions: not all technology can be harnessed to help the collective working class. some technology is at its root created to be harmful to the collective wellbeing of the working class and the natural ecosystems we are in and of.
technology isn't inherently agnostic. it isn't neutral and then progressive if harnessed correctly, and that idealist vision is only going to throw the people and entities capitalism most exploits into the furnace of the steam engine powering the technocapitalist death drive.
you can't build a future without misogyny using tools designed to capitalise on increasingly granular data gathered from ever-increasing tools of surveillance, to black-box algorithmic substitutions for human interaction and knowledge, to predate on marginalised communities to privatise and destroy their natural resources and public services, and to function on exploited labour of unending exposure to the most traumatising and community-destroying content. and we have to be ruthless in our analysis to determine which AI technologies are designed and reliant on those structures — because not all are!
you have to be brave enough to go through all that analysis and say the thing: if we want a future of technological progress that is actually free from misogyny, we can't build it with those tools that are built by and for the capitalist class and are inextricable from their interests and the oppression of other groups of people that capitalism needs to perpetuate.
some technology is not fit for purpose if our purpose is collective liberation.
#the old yarn: none of us are free unless all of us are free#anyway idk i read it because a comrade was reading it and this is my beat so i wanted to know what the take was#and i just think where it did focus on actual widespread and ubiquitous predatory and exploitative gen ai tech —#like gen ai relationship chat bots‚ gen ai deepfake software‚ and gen and predictive ai embedding in societal infrastructure —#it was at its best. but the sex robot obsession felt like it was there to juice up the book#bc talking about biases in automated welfare distribution isn't sensational enough?#like again yeah it was horrific imagery but devoting a full third+ of the book to it was a choice
2 notes
·
View notes
Text



So excited to have my hands on the first proper prototype of my NuaCam project. It's crazy to see just how far I have come in a few short months, growing this from a simple idea to a functional device. The goal is to build a camera which utilises AI stylisation to capture reality in a new light. Now I can focus on improving the AI side to try and create exciting styles to use. The first prototype was causing lots of lost hours debugging due to loose wires, so I bit the bullet and designed this PCB to help me develop the software side.
#embedded#technology#electronic project#electronics#startup#camera#ai photography#hobby#pcb#pcb assembly#electronic engineering#project#nuacam
17 notes
·
View notes
Text
AI Doesn’t Necessarily Give Better Answers If You’re Polite
New Post has been published on https://thedigitalinsider.com/ai-doesnt-necessarily-give-better-answers-if-youre-polite/
AI Doesn’t Necessarily Give Better Answers If You’re Polite
Public opinion on whether it pays to be polite to AI shifts almost as often as the latest verdict on coffee or red wine – celebrated one month, challenged the next. Even so, a growing number of users now add ‘please’ or ‘thank you’ to their prompts, not just out of habit, or concern that brusque exchanges might carry over into real life, but from a belief that courtesy leads to better and more productive results from AI.
This assumption has circulated among both users and researchers, with prompt-phrasing studied in research circles as a tool for alignment, safety, and tone control, even as user habits reinforce and reshape those expectations.
For instance, a 2024 study from Japan found that prompt politeness can change how large language models behave, testing GPT-3.5, GPT-4, PaLM-2, and Claude-2 on English, Chinese, and Japanese tasks, and rewriting each prompt at three politeness levels. The authors of that work observed that ‘blunt’ or ‘rude’ wording led to lower factual accuracy and shorter answers, while moderately polite requests produced clearer explanations and fewer refusals.
Additionally, Microsoft recommends a polite tone with Co-Pilot, from a performance rather than a cultural standpoint.
However, a new research paper from George Washington University challenges this increasingly popular idea, presenting a mathematical framework that predicts when a large language model’s output will ‘collapse’, transitioning from coherent to misleading or even dangerous content. Within that context, the authors contend that being polite does not meaningfully delay or prevent this ‘collapse’.
Tipping Off
The researchers argue that polite language usage is generally unrelated to the main topic of a prompt, and therefore does not meaningfully affect the model’s focus. To support this, they present a detailed formulation of how a single attention head updates its internal direction as it processes each new token, ostensibly demonstrating that the model’s behavior is shaped by the cumulative influence of content-bearing tokens.
As a result, polite language is posited to have little bearing on when the model’s output begins to degrade. What determines the tipping point, the paper states, is the overall alignment of meaningful tokens with either good or bad output paths – not the presence of socially courteous language.
An illustration of a simplified attention head generating a sequence from a user prompt. The model starts with good tokens (G), then hits a tipping point (n*) where output flips to bad tokens (B). Polite terms in the prompt (P₁, P₂, etc.) play no role in this shift, supporting the paper’s claim that courtesy has little impact on model behavior. Source: https://arxiv.org/pdf/2504.20980
If true, this result contradicts both popular belief and perhaps even the implicit logic of instruction tuning, which assumes that the phrasing of a prompt affects a model’s interpretation of user intent.
Hulking Out
The paper examines how the model’s internal context vector (its evolving compass for token selection) shifts during generation. With each token, this vector updates directionally, and the next token is chosen based on which candidate aligns most closely with it.
When the prompt steers toward well-formed content, the model’s responses remain stable and accurate; but over time, this directional pull can reverse, steering the model toward outputs that are increasingly off-topic, incorrect, or internally inconsistent.
The tipping point for this transition (which the authors define mathematically as iteration n*), occurs when the context vector becomes more aligned with a ‘bad’ output vector than with a ‘good’ one. At that stage, each new token pushes the model further along the wrong path, reinforcing a pattern of increasingly flawed or misleading output.
The tipping point n* is calculated by finding the moment when the model’s internal direction aligns equally with both good and bad types of output. The geometry of the embedding space, shaped by both the training corpus and the user prompt, determines how quickly this crossover occurs:
An illustration depicting how the tipping point n* emerges within the authors’ simplified model. The geometric setup (a) defines the key vectors involved in predicting when output flips from good to bad. In (b), the authors plot those vectors using test parameters, while (c) compares the predicted tipping point to the simulated result. The match is exact, supporting the researchers’ claim that the collapse is mathematically inevitable once internal dynamics cross a threshold.
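In informal notation (ours, paraphrasing the setup described above rather than quoting the paper), the tipping point is the first generation step at which the context vector aligns more strongly with the bad direction than the good one:

```latex
n^{*} \;=\; \min\bigl\{\, n \;:\; \langle c_n, v_B \rangle \;>\; \langle c_n, v_G \rangle \,\bigr\},
\qquad
c_n \;=\; c_0 \;+\; \sum_{k=1}^{n} e_{t_k},
```

where $c_0$ is the sum of the prompt-token embeddings, $e_{t_k}$ is the embedding of the $k$-th generated token, and $v_G$, $v_B$ are the good and bad output directions. A polite token that is (near-)orthogonal to both $v_G$ and $v_B$ adds to $c_0$ but contributes equally (namely, nothing) to both inner products, so it cancels out of the comparison that defines $n^{*}$.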
Polite terms don’t influence the model’s choice between good and bad outputs because, according to the authors, they aren’t meaningfully connected to the main subject of the prompt. Instead, they end up in parts of the model’s internal space that have little to do with what the model is actually deciding.
When such terms are added to a prompt, they increase the number of vectors the model considers, but not in a way that shifts the attention trajectory. As a result, the politeness terms act like statistical noise: present, but inert, and leaving the tipping point n* unchanged.
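This "inert noise" argument is easy to reproduce in a toy version of the paper's single-head linear model. The sketch below uses made-up vectors and a made-up drift parameter, not the authors' code or values: good, bad, and polite tokens are orthogonal directions, the context vector accumulates prompt embeddings plus a fixed per-step pull toward the bad direction, and generation flips once the bad alignment overtakes the good one.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Toy 3-D embedding space (illustrative parameters, not the paper's):
G = (1.0, 0.0, 0.0)  # "good"-output direction
B = (0.0, 1.0, 0.0)  # "bad"-output direction
P = (0.0, 0.0, 1.0)  # polite token, orthogonal to both G and B

def tipping_point(prompt, drift=0.4, max_steps=1000):
    """Return the step n* at which greedy generation flips from G to B."""
    # Context vector starts as the sum of the prompt-token embeddings.
    ctx = [sum(tok[i] for tok in prompt) for i in range(3)]
    for n in range(1, max_steps + 1):
        if dot(ctx, B) > dot(ctx, G):   # next token now aligns with "bad"
            return n
        ctx[1] += drift                 # corpus-driven pull toward B each step
    return None

neutral = [G] * 5            # five content tokens aligned with good output
polite = [G] * 5 + [P] * 3   # same content plus three polite tokens
# Orthogonal polite tokens leave both inner products, and hence n*, unchanged.
```

Adding more content tokens aligned with `G` does delay the flip, while adding polite tokens does not; that asymmetry is the paper's core claim in miniature.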
The authors state:
‘[Whether] our AI’s response will go rogue depends on our LLM’s training that provides the token embeddings, and the substantive tokens in our prompt – not whether we have been polite to it or not.’
The model used in the new work is intentionally narrow, focusing on a single attention head with linear token dynamics – a simplified setup where each new token updates the internal state through direct vector addition, without non-linear transformations or gating.
This simplified setup lets the authors work out exact results and gives them a clear geometric picture of how and when a model’s output can suddenly shift from good to bad. In their tests, the formula they derive for predicting that shift matches what the model actually does.
Chatting Up..?
However, this level of precision only works because the model is kept deliberately simple. While the authors concede that their conclusions should later be tested on more complex multi-head models such as the Claude and ChatGPT series, they also believe that the theory should continue to hold as the number of attention heads grows, stating*:
‘The question of what additional phenomena arise as the number of linked Attention heads and layers is scaled up, is a fascinating one. But any transitions within a single Attention head will still occur, and could get amplified and/or synchronized by the couplings – like a chain of connected people getting dragged over a cliff when one falls.’
An illustration of how the predicted tipping point n* changes depending on how strongly the prompt leans toward good or bad content. The surface comes from the authors’ approximate formula and shows that polite terms, which don’t clearly support either side, have little effect on when the collapse happens. The marked value (n* = 10) matches earlier simulations, supporting the model’s internal logic.
What remains unclear is whether the same mechanism survives the jump to modern transformer architectures. Multi-head attention introduces interactions across specialized heads, which may buffer against or mask the kind of tipping behavior described.
The authors acknowledge this complexity, but argue that attention heads are often loosely coupled, and that the sort of internal collapse they model could be reinforced rather than suppressed in full-scale systems.
Without an extension of the model or an empirical test across production LLMs, the claim remains unverified. However, the mechanism is specified precisely enough to support follow-on research, and the paper offers a clear opportunity to challenge or confirm the theory at scale.
Signing Off
At the moment, the topic of politeness towards consumer-facing LLMs tends to be approached either from the (pragmatic) standpoint that trained systems may respond more usefully to polite inquiry, or from the concern that a tactless and blunt communication style with such systems risks spreading, through force of habit, into the user’s real social relationships.
Arguably, LLMs have not yet been used widely enough in real-world social contexts for the research literature to confirm the latter case; but the new paper does cast some interesting doubt upon the benefits of anthropomorphizing AI systems of this type.
A study last October from Stanford suggested (in contrast to a 2020 study) that treating LLMs as if they were human additionally risks degrading the meaning of language, concluding that ‘rote’ politeness eventually loses its original social meaning:
‘[A] statement that seems friendly or genuine from a human speaker can be undesirable if it arises from an AI system since the latter lacks meaningful commitment or intent behind the statement, thus rendering the statement hollow and deceptive.’
However, roughly 67 percent of Americans say they are courteous to their AI chatbots, according to a 2025 survey from Future Publishing. Most said it was simply ‘the right thing to do’, while 12 percent confessed they were being cautious – just in case the machines ever rise up.
* My conversion of the authors’ inline citations to hyperlinks. To an extent, the hyperlinks are arbitrary/exemplary, since the authors at certain points link to a wide range of footnote citations, rather than to a specific publication.
First published Wednesday, April 30, 2025. Amended Wednesday, April 30, 2025 15:29:00, for formatting.