#embedded communication
campuscomponent · 9 days ago
Text
All‑in‑One GSM/GPS Modules for Tracking & IoT
These combo modules integrate cellular (GSM) connectivity with GPS positioning, enabling seamless real‑time tracking and remote communication. Ideal for vehicle telematics, asset monitoring, and field data logging, they simplify hardware design by combining connectivity and location services in a compact unit.
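These modules are typically driven over a UART using the vendor's AT command set. Below is a minimal sketch of powering up the GNSS engine and polling for a fix; it assumes a SIMCom-style combo part (e.g. SIM808) on /dev/ttyUSB0 at 9600 baud. The AT+CGNSPWR/AT+CGNSINF commands are specific to that family, so treat the port, baud rate, and command names as placeholders to check against your module's datasheet.

```python
# Minimal sketch: polling a GSM/GPS combo module over UART with pyserial.
# Assumed: SIMCom-style module (e.g. SIM808) on /dev/ttyUSB0 at 9600 baud;
# AT command names vary by vendor, so verify against your datasheet.
import time

import serial

def send_at(ser, cmd, wait=1.0):
    """Send one AT command and return whatever response has arrived."""
    ser.write((cmd + "\r\n").encode("ascii"))
    time.sleep(wait)
    return ser.read(ser.in_waiting or 1).decode("ascii", errors="replace")

with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as ser:
    print(send_at(ser, "AT"))            # sanity check: module should answer OK
    print(send_at(ser, "AT+CGNSPWR=1"))  # power up the GNSS engine
    time.sleep(5)                        # give the receiver time to get a fix
    print(send_at(ser, "AT+CGNSINF"))    # fix status, lat/lon, UTC, speed, etc.
```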
0 notes
knet2thorn · 1 year ago
Text
https://www.futureelectronics.com/p/semiconductors--comm-products--i2c/pca9532pw-118-nxp-5033862
16-bit I2C-bus LED Dimmer, Embedded communication, image processing
PCA9532 Series 5.5 V 350 uA 400kHz SMT 16-bit I2C-bus LED Dimmer - TSSOP-24
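For a sense of how such a part is driven in software, here is a minimal sketch of setting a 25% PWM dim level from Linux with smbus2. The device address (0x60 with the address pins grounded), the register offsets, and the duty-cycle encoding come from my reading of the PCA9532 datasheet, so verify them there before relying on this.

```python
# Minimal sketch: dimming LEDs on a PCA9532 over I2C (Linux, smbus2).
# Assumed from the datasheet: address 0x60 with A2..A0 low; PSC0/PWM0/LS0
# register offsets; LED-selector code 0b10 routes a pin to the PWM0 engine.
from smbus2 import SMBus

PCA9532_ADDR = 0x60                 # 1100_A2A1A0 with all address pins tied low
PSC0, PWM0, LS0 = 0x02, 0x03, 0x06  # blink prescaler, duty cycle, LED0-3 selector

with SMBus(1) as bus:
    bus.write_byte_data(PCA9532_ADDR, PSC0, 0)   # fastest rate (~152 Hz) reads as dimming
    bus.write_byte_data(PCA9532_ADDR, PWM0, 64)  # duty cycle = 64/256 = 25%
    # LS0 packs four 2-bit selectors for LED0..LED3; 0b10 = follow PWM0
    bus.write_byte_data(PCA9532_ADDR, LS0, 0b10101010)
```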
1 note · View note
rch2llardd · 1 year ago
Text
https://www.futureelectronics.com/p/semiconductors--comm-products--can/mcp2551t-i-sn-microchip-5971353
High-Speed CAN Transceiver, can transceiver circuit, Can Power Systems
MCP2551 Series 5.5 V 1 Mb/s Surface Mount High-Speed CAN Transceiver - SOIC-8
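The MCP2551 itself is a physical-layer part (logic-level TXD/RXD on one side, the differential CANH/CANL pair on the other), so software never addresses it directly; it talks to the CAN controller behind it. As a rough illustration of exercising a bus built around such a transceiver, here is a sketch using python-can on Linux, assuming a SocketCAN interface named can0 that has already been brought up (e.g. `ip link set can0 up type can bitrate 500000`).

```python
# Minimal sketch: sending one frame via Linux SocketCAN with python-can.
# Assumed: an interface named can0, already configured and brought up.
import can

bus = can.interface.Bus(channel="can0", interface="socketcan")
msg = can.Message(arbitration_id=0x123,
                  data=[0xDE, 0xAD, 0xBE, 0xEF],
                  is_extended_id=False)   # 11-bit standard identifier
bus.send(msg)
print(f"sent: {msg}")
bus.shutdown()
```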
1 note · View note
stlle2ista · 1 year ago
Text
https://www.futureelectronics.com/p/semiconductors--comm-products--i2c/pca9532pw-118-nxp-5033862
I2C bus, Embedded communication, Isolated CAN Transceiver ICs
PCA9532 Series 5.5 V 350 uA 400kHz SMT 16-bit I2C-bus LED Dimmer - TSSOP-24
1 note · View note
divinit3a · 5 months ago
Text
just keep playing along ch 11 - trash & treasure (ao3 link)
203 notes · View notes
queer-jewish-spoonie · 10 months ago
Text
Today at work I rang up an older Jewish lady. I noticed her magen david, and complimented it. She froze for a second and then relaxed, and asked me where mine was. I showed her my necklace and we had a short conversation. She said that she could tell all the way from the queue line that I was Jewish. At one point she said, "these days we just want to-" and she closed her shirt a bit to hide her magen. I wish I weren't so socially awkward because instead of nodding along, I would have told her, "no, we can't hide, not anymore, not again." I wish I could have hugged her and told her how much she- a total stranger- means to me. Every time I run into another Jew when I'm not expecting it, it takes my breath away. I'm reminded of why I converted- because I fell in love with Judaism, the Jewish people, Jewish culture, Jewish everything. Jews, I love you so much. We are amazing. We have each other. עם ישראל חי
143 notes · View notes
shallowseeker · 2 months ago
Text
the way that even in early usa, extended families often lived together or within walking distance
the idea that each nuclear family should have its own house, own appliances, own everything and that adult children should move out at 18 is a relatively recent post-WWII, suburbanization-era invention
and it just so happens to be highly profitable
#segmentation of the customer even#this is why charming acres and 1950s features the way it does#1950s popularized the image of the self-contained upwardly mobile nuclear family#the game is rigged#extended family living was increasingly framed as backward immigrant or rural#suburban nuclear family became a national identity project and it survives in marketing materials and specific targeted consumerism#consumerism Cold War ideology and gender roles (housewife breadwinner etc.)#bc from a business perspective splitting extended families into individual homes was a gold mine#not owning a home not having a perfect family unit needing help from relatives staying with your parents past 18#or relying on community all became loaded with stigma#the use of words like codependent and socially incestuous applied liberally furthered the agenda#pop psychology gets over applied#they’re often over-applied in contexts where people are simply staying close surviving together or choosing mutual care#what gets labeled as pathology is not weird at all and historically common and culturally valid… it’s just not as profitable#making them question bonds that may be loving supportive and necessary#thinking about this a lot being more embedded in an extended network again#anyway spn does this well!#abusing the lower class then calling them Weird for huddling together when upper classes are in fact the ones who are flagrantly nepotism#when in fact upper class is Weirder and 9-10 times the one salivating over the Idea is upper class#i feel like if you miss this you miss Everything#suburbia is weird and isolating on purpose
24 notes · View notes
cognitohazardous · 8 months ago
Text
you ever think about how video embeds changed the internet as a whole and then youtube decided to break every embed ever so they can more easily control viewers
26 notes · View notes
etchessketches · 1 year ago
Text
he's so important to me
#i guess i need to watch the anime but super's manga has just been a self-indulgent fever dream for me from start to finish#100000/10 absolutely perfect so validating so extremely catered to my tastes and headcanons and analyses and humor#so fucking funny and emotional and intense and goofy and beautifully drawn#my beautiful son getting to finally fucking see his HARD won character growth fucking shine and choose love and choose to be loved!!!!!!#Goku just being Goku Vegeta being Team Dad Piccolo being Team Grandpa Bulma being a fucking superstar keeping everybody organized and fed#god i love this squad i love this series i love these dumbasses and their struggles and their triumphs and their stupid childish bonding#I love that Toriyama just spent the last several years reminding the class that DB as a whole has always been an ACTION-COMEDY about LOVE#and I'm SO sad that the z anime really never did it justice in that sense because of having to fill time with dramatic tension but god. GOD#THE MANGA HAS ALWAYS BEEN SO CLEAR ON THAT THESIS.#Just all about Restorative Justice and Community and CARING even when you wish SO MUCH that you didn't care but yoU DO GODDAMMIT!!!#SUCH a great series I'm so sad it took losing mr t for me to finally read it but my god I needed to read it now and I'm so glad he wrote it#and i'm SO glad he wrote it Exactly Like This#once again rip to a legend i'm caught up and crying it's so perfect it's SO everything I've wanted to see onscreen and embedded in canon#and canon isn't everything but it still feels gREAT to be SO 1:1 on the same page with an author re: how you interpret your blorbo yknow???#been rotating this man in my head for 25 years and Mr Toriyama just mWAH kissed me on the forehead about it#anyway enough tag rambles I'm off again aklsjla#bonus for that kenpachi shit and letting him say 'sorry dude I can't be cold and numb anymore but this is still cathartic as fuck lol' like#mr t i hope you see the HIGHEST tier of heaven for that (and obviously for like everything all of it the whole life you led)#dbtag
81 notes · View notes
starryelem · 6 months ago
Text
I drew this mid life crisis, the urge to punch him made life bearable<3
(His shirt says: "Lmao if you read this, you gay")
14 notes · View notes
lazyjellyfish300 · 4 months ago
Text
shiuelly ᧔♥︎᧓
"tell me every bad thing you did, and let me love you anyway."
@kazuluvr thank you so much for this BEAUTIFUL depiction of Shiu and I that I will forever treasure 🥹 you're the best! 💕💕 Please support her she's so talented and wonderful to work with. 💕💕💕💕
11 notes · View notes
softichill · 7 months ago
Text
Object Show playlist I made (it's 19 hours)
7 notes · View notes
bubervitch · 8 months ago
Text
super super fantastic article about the communist take on mutual aid. incredibly informative and detailed while still totally understanding the drive to take immediate action that’s usually behind those who dedicate their time to mutual aid societies as their main/primary mode of organizing
https://www.marxist.ca/article/a-communist-critique-of-mutual-aid
5 notes · View notes
jcmarchi · 3 months ago
Text
AI Doesn’t Necessarily Give Better Answers If You’re Polite
https://thedigitalinsider.com/ai-doesnt-necessarily-give-better-answers-if-youre-polite/
Public opinion on whether it pays to be polite to AI shifts almost as often as the latest verdict on coffee or red wine – celebrated one month, challenged the next. Even so, a growing number of users now add ‘please’ or ‘thank you’ to their prompts, not just out of habit, or concern that brusque exchanges might carry over into real life, but from a belief that courtesy leads to better and more productive results from AI.
This assumption has circulated among both users and researchers, with prompt phrasing studied in research circles as a tool for alignment, safety, and tone control, even as user habits reinforce and reshape those expectations.
For instance, a 2024 study from Japan found that prompt politeness can change how large language models behave, testing GPT-3.5, GPT-4, PaLM-2, and Claude-2 on English, Chinese, and Japanese tasks, and rewriting each prompt at three politeness levels. The authors of that work observed that ‘blunt’ or ‘rude’ wording led to lower factual accuracy and shorter answers, while moderately polite requests produced clearer explanations and fewer refusals.
Additionally, Microsoft recommends a polite tone with Copilot, from a performance rather than a cultural standpoint.
However, a new research paper from George Washington University challenges this increasingly popular idea, presenting a mathematical framework that predicts when a large language model’s output will ‘collapse’, transitioning from coherent to misleading or even dangerous content. Within that context, the authors contend that being polite does not meaningfully delay or prevent this ‘collapse’.
Tipping Off
The researchers argue that polite language usage is generally unrelated to the main topic of a prompt, and therefore does not meaningfully affect the model’s focus. To support this, they present a detailed formulation of how a single attention head updates its internal direction as it processes each new token, ostensibly demonstrating that the model’s behavior is shaped by the cumulative influence of content-bearing tokens.
As a result, polite language is posited to have little bearing on when the model’s output begins to degrade. What determines the tipping point, the paper states, is the overall alignment of meaningful tokens with either good or bad output paths – not the presence of socially courteous language.
An illustration of a simplified attention head generating a sequence from a user prompt. The model starts with good tokens (G), then hits a tipping point (n*) where output flips to bad tokens (B). Polite terms in the prompt (P₁, P₂, etc.) play no role in this shift, supporting the paper’s claim that courtesy has little impact on model behavior. Source: https://arxiv.org/pdf/2504.20980
If true, this result contradicts both popular belief and perhaps even the implicit logic of instruction tuning, which assumes that the phrasing of a prompt affects a model’s interpretation of user intent.
Hulking Out
The paper examines how the model’s internal context vector (its evolving compass for token selection) shifts during generation. With each token, this vector updates directionally, and the next token is chosen based on which candidate aligns most closely with it.
When the prompt steers toward well-formed content, the model’s responses remain stable and accurate; but over time, this directional pull can reverse, steering the model toward outputs that are increasingly off-topic, incorrect, or internally inconsistent.
The tipping point for this transition (which the authors define mathematically as iteration n*) occurs when the context vector becomes more aligned with a ‘bad’ output vector than with a ‘good’ one. At that stage, each new token pushes the model further along the wrong path, reinforcing a pattern of increasingly flawed or misleading output.
The tipping point n* is calculated by finding the moment when the model’s internal direction aligns equally with both good and bad types of output. The geometry of the embedding space, shaped by both the training corpus and the user prompt, determines how quickly this crossover occurs:
An illustration depicting how the tipping point n* emerges within the authors’ simplified model. The geometric setup (a) defines the key vectors involved in predicting when output flips from good to bad. In (b), the authors plot those vectors using test parameters, while (c) compares the predicted tipping point to the simulated result. The match is exact, supporting the researchers’ claim that the collapse is mathematically inevitable once internal dynamics cross a threshold.
Polite terms don’t influence the model’s choice between good and bad outputs because, according to the authors, they aren’t meaningfully connected to the main subject of the prompt. Instead, they end up in parts of the model’s internal space that have little to do with what the model is actually deciding.
When such terms are added to a prompt, they increase the number of vectors the model considers, but not in a way that shifts the attention trajectory. As a result, the politeness terms act like statistical noise: present, but inert, and leaving the tipping point n* unchanged.
The authors state:
‘[Whether] our AI’s response will go rogue depends on our LLM’s training that provides the token embeddings, and the substantive tokens in our prompt – not whether we have been polite to it or not.’
The model used in the new work is intentionally narrow, focusing on a single attention head with linear token dynamics – a simplified setup where each new token updates the internal state through direct vector addition, without non-linear transformations or gating.
This simplified setup lets the authors work out exact results and gives them a clear geometric picture of how and when a model’s output can suddenly shift from good to bad. In their tests, the formula they derive for predicting that shift matches what the model actually does.
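To make that mechanism concrete, here is a toy numpy sketch (not the authors' code; every vector and constant is invented for illustration) of the same linear dynamics: the context vector is the running sum of token embeddings, each step emits whichever token aligns better with the current context, and courtesy tokens are modeled, per the paper's argument, as orthogonal to both output directions.

```python
# Toy sketch of the paper's single-head, linear-dynamics model (illustrative only).
import numpy as np

good_dir = np.array([1.0, 0.0, 0.0])  # direction scored as coherent output
bad_dir  = np.array([0.0, 0.0, 1.0])  # direction scored as degraded output
e_good   = np.array([0.1, 0.0, 0.2])  # "good" token whose embedding also leans bad
e_bad    = np.array([0.0, 0.0, 1.0])  # "bad" token embedding
e_polite = np.array([0.0, 1.0, 0.0])  # courtesy token: orthogonal to both directions

def generate(prompt_vecs, steps=15):
    c = np.sum(prompt_vecs, axis=0)   # context vector = sum of prompt embeddings
    out = []
    for _ in range(steps):
        good_wins = c @ good_dir >= c @ bad_dir   # pick the better-aligned branch
        out.append("G" if good_wins else "B")
        c = c + (e_good if good_wins else e_bad)  # linear update: plain vector add
    return "".join(out)

plain  = [np.array([1.0, 0.0, 0.0])]  # substantive prompt content, firmly "good"
polite = plain + [e_polite] * 5       # same prompt plus five courtesy tokens

print(generate(plain))   # GGGGGGGGGGGBBBB: output collapses at step n* = 12
print(generate(polite))  # identical string: the tipping point does not move
```

Because the polite vectors are orthogonal to both scored directions, they change neither dot product, which is the paper's 'present, but inert' statistical-noise claim in miniature.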
Chatting Up..?
However, this level of precision only works because the model is kept deliberately simple. While the authors concede that their conclusions should later be tested on more complex multi-head models such as the Claude and ChatGPT series, they also believe that the theory remains replicable as attention heads increase, stating*:
‘The question of what additional phenomena arise as the number of linked Attention heads and layers is scaled up, is a fascinating one. But any transitions within a single Attention head will still occur, and could get amplified and/or synchronized by the couplings – like a chain of connected people getting dragged over a cliff when one falls.’
An illustration of how the predicted tipping point n* changes depending on how strongly the prompt leans toward good or bad content. The surface comes from the authors’ approximate formula and shows that polite terms, which don’t clearly support either side, have little effect on when the collapse happens. The marked value (n* = 10) matches earlier simulations, supporting the model’s internal logic.
What remains unclear is whether the same mechanism survives the jump to modern transformer architectures. Multi-head attention introduces interactions across specialized heads, which may buffer against or mask the kind of tipping behavior described.
The authors acknowledge this complexity, but argue that attention heads are often loosely coupled, and that the sort of internal collapse they model could be reinforced rather than suppressed in full-scale systems.
Without an extension of the model or an empirical test across production LLMs, the claim remains unverified. However, the mechanism seems sufficiently precise to support follow-on research initiatives, and the authors provide a clear opportunity to challenge or confirm the theory at scale.
Signing Off
At the moment, the topic of politeness towards consumer-facing LLMs appears to be approached either from the (pragmatic) standpoint that trained systems may respond more usefully to polite inquiry, or from the concern that a tactless and blunt communication style with such systems risks spreading into the user’s real social relationships, through force of habit.
Arguably, LLMs have not yet been used widely enough in real-world social contexts for the research literature to confirm the latter case; but the new paper does cast some interesting doubt upon the benefits of anthropomorphizing AI systems of this type.
A study last October from Stanford suggested (in contrast to a 2020 study) that treating LLMs as if they were human additionally risks degrading the meaning of language, concluding that ‘rote’ politeness eventually loses its original social meaning:
‘[A] statement that seems friendly or genuine from a human speaker can be undesirable if it arises from an AI system since the latter lacks meaningful commitment or intent behind the statement, thus rendering the statement hollow and deceptive.’
However, roughly 67 percent of Americans say they are courteous to their AI chatbots, according to a 2025 survey from Future Publishing. Most said it was simply ‘the right thing to do’, while 12 percent confessed they were being cautious – just in case the machines ever rise up.
* My conversion of the authors’ inline citations to hyperlinks. To an extent, the hyperlinks are arbitrary/exemplary, since the authors at certain points link to a wide range of footnote citations, rather than to a specific publication.
First published Wednesday, April 30, 2025. Amended Wednesday, April 30, 2025 15:29:00, for formatting.
2 notes · View notes
jakeperalta · 2 years ago
Text
I literally think being in the taylor fandom is making me a worse person like I am so not a hater at heart and yet I just get so irritated by the fandom that it makes me feel like the most negative bitter person :/
42 notes · View notes