#Tech Tools for Programmers
Explore tagged Tumblr posts
Text
one of the many things that bothers me about goku luck is the fact that they have kenta (a minor) in a penitentiary full of adult convicts. where was the juvenile welfare officer and why are they not doing their job. hope they’re fired
#edit: realised i wrongly used the word penitentiary. sorry abt that. i meant prison#anyone who has heard what prison is like knows how terribly wrong this would go#also was he tried as a minor or did they try him as an adult for his crimes#idc if he’s got advanced hacking skills if a kid can create a virus that can threaten the world in his garage#then i’m sure million-dollar corporations and desperate governments can fund research into an antivirus#idk man it’s freaking ridiculous#like there was no reason to be doing all that#maybe i’m just a believer of restorative justice but#kenta’s clearly a genius. why not a scholarship to a great tech programme and a job with the government#he could do so much good in developing cybersecurity. internationally.#idk it just makes me sad that his whole life was over#and the system didn’t just fail him. it endangered him#either way goku luck could be such a great tool to criticise the criminal justice system#if anyone even CARES#you know i love The Silly but sometimes we need to Get Serious#paradox live#paralive#kenta mikoshiba#goku luck#gokuluck
34 notes
Text
Uncanny valley in an AI workplace apocalypse.
To some extent automation always seems to be a bait and switch.
Brian Merchant has published stories from workers about AI ruining or replacing jobs, and perhaps the most telling statement from one of the stories was someone who said: "Looking back, I wish my goal hadn't been to persuade managers but instead to organize fellow workers."
I like worker stories a lot because there's usually something in them that's entirely relatable, and yet also something that you find out about another profession or industry that's just mind-blowing, and often not in a good way. I remember describing a demoralizing job I had to David Graeber, a job he was familiar with from another angle (as described in his book Utopia of Rules). He said to me that he thought someone should probably write a sequel to Bullshit Jobs on what it's like to "have a job that you know is just intrinsically wrong", and he said "hopefully not me!" and then unfortunately he died a few months after that. The problem is that such a book would likely end up longer than Debt: The First 5,000 Years, Ulysses, and War & Peace put together, because so many of today's jobs are intrinsically wrong, or at least have something intrinsically wrong with them.
Here are a few other eyebrow-raising lines from stories that I recommend reading, if you have a stomach and appetite for horror.
Blood in the Machine, "AI Killed My Job: Tech workers" (Brian Merchant, Jun 25, 2025): Tech workers at TikTok, Google, and across the industry share stories about how AI is changing, ruining, or replacing their jobs.

"The speech and movement wasn't as clean as what I see in videos now, but it was close enough to leave me with an eerie sensation."

"These young engineers - squandering their opportunities to learn how things actually work - would briefly glance at the AI-generated code and/or explanation messages and continue producing more code when 'it looks okay.'"

"We haven't hit an actual recession in stock-prices due to aggressive cost and stock-price engineering everywhere, and cost-engineering typically tanks internal worker satisfaction."

"The irony here is two-fold: one, it does not seem that the people who left were victims of a turn to 'vibe coding' and I suspect that the 'AI efficiency' was used as an excuse to make us seem innovative even during this crisis. Two, this is a company whose product desperately needs real human care."

"It feels like every week there's a new sales pitch from a company claiming that their AI tool will solve all our problems—companies are desperate to claw back their AI investment, and they're hoping to find easy marks in the public sector."

"Looking back, I wish my goal hadn't been to persuade managers but instead to organize fellow workers."

"I found out that a colleague who had been struggling with a simple programming task for over a month—and refusing frequent offers for help—was struggling because they were trying to prompt an LLM for the solution and trying to understand the LLM's irrelevant and poorly-organized output."

"So I would say the private and public sector have this in common: the higher up you go in the organization, the more enthusiastic people are about 'AI,' and the less they understand about the software, and (not coincidentally) the less they understand what their department actually does."

"Obviously we are told that we need to review the AI outputs, but it is starting to kill my enjoyment for my work; I love the creative problem solving aspect to programming, and now the majority of that work is trying to be passed onto AI, with me as the reviewer of the AI's work."

"Real use cases where AI can be used to do work that regular old programming could not are so rare that when I discovered one two weeks ago, I asked for a raise in the same breath as the pitch."

"There's a meme going on Pinterest that I believe sums up this moment: 'We wanted robots to clean the dishes and do our laundry, so we could draw pictures and write stories. Instead they gave us robots to draw pictures and write stories, so we could clean dishes and do laundry.'"

"I would stop short of saying that the existence of genAI tools within the company is directly increasing the per capita workload, but an argument could be made of it indirectly accomplishing that. The net result is not a lightening of the load as has been so often promised."
I don't really mind doing the laundry, though I wish I could spend more time drawing and painting. But nothing compares to how much I do mind AI slop, because it's not just aesthetically displeasing and offensively off-putting, it's also a time waster in so many ways, and I value my time.
But the thing I so often come back to is that even with what little automation is actually being introduced in the AI bonanza, it's turning out to be the same old story. They said they automated secretaries and administrative assistants, but that's not really what happened. I know, because I've worked in jobs where I had to do work I had done in the past as a clerk or an administrative assistant or secretary, except now I had to do it as a professional, as an extra on top of my job, because the employer refused to fund enough clerical assistance or because of funding structures. That's a big part of the story about automation: it's not really about robots doing the jobs or replacing the jobs, but about tech tools that let employers cut positions by shifting the more arduous work onto people who are less well equipped to do it, and who have to do it while also doing the work that is actually their expertise. I've seen ads recently promoting the idea to employers that employees expected to travel on business should be doing their own travel bookings, which is incredibly frustrating and time-consuming work to pile onto someone with a tight schedule, expertise in something else, and business travel on top of it all!
To some extent automation always seems to be a bait and switch. The lack of robust overtime laws, and the unregulated manipulation of salaried positions, allows more work to be loaded onto fewer people, and AI LLM chatbot tools are just another log on that fire rather than a way to douse the problem of being overworked.
"Looking back, I wish my goal hadn't been to persuade managers but instead to organize fellow workers."
One thing is for sure: management is never going to fix these problems in the interest of labor.
#labor#ai hype#workplace#jobs#employers#tech tycoons#tech hype#tech tools#robots coming for the jobs#robots coming for the wrong jobs#ai slop#brian merchant#david graeber#workers#employees#co-workers#engineers#software engineers#programmers#LLMs#automation
0 notes
Text
a garage set up | build interlude
tucked next to an old rusty tool chest, the roomies' (really mitch's) garage setup is the perfect blend of retro flair and serious tech. a refurbished PC flickers on a thrifted desk, flanked by scattered hardware, tangled cords, and a dusty fan that only works if you jiggle the chain just right. it’s a scrappy little haven for an aspiring programmer—messy, quiet, and oddly perfect...
footnote: special shout out to @atticwindowatdawn for some much needed garage inspo <3 and black mirror's uss callister
i spent the better half of my mdw redecorating their 70s mid-century home in oasis springs. i'm thinking j comes from a good family, so perhaps it's real estate he inherited from his grandparents. it's in need of modern renovation but they've grown to love their retro spot.
this is why i haven't posted updates. too busy building the save and lore. here's some progress shots in a new series called build interlude...
-d.
583 notes
Text
What kind of bubble is AI?
My latest column for Locus Magazine is "What Kind of Bubble is AI?" All economic bubbles are hugely destructive, but some of them leave behind wreckage that can be salvaged for useful purposes, while others leave nothing behind but ashes:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Think about some 21st century bubbles. The dotcom bubble was a terrible tragedy, one that drained the coffers of pension funds and other institutional investors and wiped out retail investors who were gulled by Superbowl Ads. But there was a lot left behind after the dotcoms were wiped out: cheap servers, office furniture and space, but far more importantly, a generation of young people who'd been trained as web makers, leaving nontechnical degree programs to learn HTML, Perl and Python. This created a whole cohort of technologists from non-technical backgrounds, a first in technological history. Many of these people became the vanguard of a more inclusive and humane tech development movement, and they were able to make interesting and useful services and products in an environment where raw materials – compute, bandwidth, space and talent – were available at firesale prices.
Contrast this with the crypto bubble. It, too, destroyed the fortunes of institutional and individual investors through fraud and Superbowl Ads. It, too, lured in nontechnical people to learn esoteric disciplines at investor expense. But apart from a smattering of Rust programmers, the main residue of crypto is bad digital art and worse Austrian economics.
Or think of Worldcom vs Enron. Both bubbles were built on pure fraud, but Enron's fraud left nothing behind but a string of suspicious deaths. By contrast, Worldcom's fraud was a Big Store con that required laying a ton of fiber that is still in the ground to this day, and is being bought and used at pennies on the dollar.
AI is definitely a bubble. As I write in the column, if you fly into SFO and rent a car and drive north to San Francisco or south to Silicon Valley, every single billboard is advertising an "AI" startup, many of which are not even using anything that can be remotely characterized as AI. That's amazing, considering what a meaningless buzzword AI already is.
So which kind of bubble is AI? When it pops, will something useful be left behind, or will it go away altogether? To be sure, there's a legion of technologists who are learning TensorFlow and PyTorch. These nominally open source tools are bound, respectively, to Google and Facebook's AI environments:
https://pluralistic.net/2023/08/18/openwashing/#you-keep-using-that-word-i-do-not-think-it-means-what-you-think-it-means
But if those environments go away, those programming skills become a lot less useful. Live, large-scale Big Tech AI projects are shockingly expensive to run. Some of their costs are fixed – collecting, labeling and processing training data – but the running costs for each query are prodigious. There's a massive primary energy bill for the servers, a nearly as large energy bill for the chillers, and a titanic wage bill for the specialized technical staff involved.
Once investor subsidies dry up, will the real-world, non-hyperbolic applications for AI be enough to cover these running costs? AI applications can be plotted on a 2X2 grid whose axes are "value" (how much customers will pay for them) and "risk tolerance" (how perfect the product needs to be).
Charging teenaged D&D players $10/month for an image generator that creates epic illustrations of their characters fighting monsters is low value and very risk tolerant (teenagers aren't overly worried about six-fingered swordspeople with three pupils in each eye). Charging scammy spamfarms $500/month for a text generator that spits out dull, search-algorithm-pleasing narratives to appear over recipes is likewise low-value and highly risk tolerant (your customer doesn't care if the text is nonsense). Charging visually impaired people $100/month for an app that plays a text-to-speech description of anything they point their cameras at is low-value and moderately risk tolerant ("that's your blue shirt" when it's green is not a big deal, while "the street is safe to cross" when it's not is a much bigger one).
Morgan Stanley doesn't talk about the trillions the AI industry will be worth some day because of these applications. These are just spinoffs from the main event, a collection of extremely high-value applications. Think of self-driving cars or radiology bots that analyze chest x-rays and characterize masses as cancerous or noncancerous.
These are high value – but only if they are also risk-tolerant. The pitch for self-driving cars is "fire most drivers and replace them with 'humans in the loop' who intervene at critical junctures." That's the risk-tolerant version of self-driving cars, and it's a failure. More than $100b has been incinerated chasing self-driving cars, and cars are nowhere near driving themselves:
https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
Quite the reverse, in fact. Cruise was just forced to quit the field after one of their cars maimed a woman – a pedestrian who had not opted into being part of a high-risk AI experiment – and dragged her body 20 feet through the streets of San Francisco. Afterwards, it emerged that Cruise had replaced the single low-waged driver who would normally be paid to operate a taxi with 1.5 high-waged skilled technicians who remotely oversaw each of its vehicles:
https://www.nytimes.com/2023/11/03/technology/cruise-general-motors-self-driving-cars.html
The self-driving pitch isn't that your car will correct your own human errors (like an alarm that sounds when you activate your turn signal while someone is in your blind-spot). Self-driving isn't about using automation to augment human skill – it's about replacing humans. There's no business case for spending hundreds of billions on better safety systems for cars (there's a human case for it, though!). The only way the price-tag justifies itself is if paid drivers can be fired and replaced with software that costs less than their wages.
What about radiologists? Radiologists certainly make mistakes from time to time, and if there's a computer vision system that makes different mistakes than the sort that humans make, it could be a cheap way of generating second opinions that trigger re-examination by a human radiologist. But no AI investor thinks their return will come from selling hospitals systems that reduce the number of X-rays each radiologist processes every day, as a second-opinion-generating system would. Rather, the value of AI radiologists comes from firing most of your human radiologists and replacing them with software whose judgments are cursorily double-checked by a human whose "automation blindness" will turn them into an OK-button-mashing automaton:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
The profit-generating pitch for high-value AI applications lies in creating "reverse centaurs": humans who serve as appendages for automation that operates at a speed and scale that is unrelated to the capacity or needs of the worker:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
But unless these high-value applications are intrinsically risk-tolerant, they are poor candidates for automation. Cruise was able to nonconsensually enlist the population of San Francisco in an experimental murderbot development program thanks to the vast sums of money sloshing around the industry. Some of this money funds the inevitabilist narrative that self-driving cars are coming, it's only a matter of when, not if, and so SF had better get in the autonomous vehicle or get run over by the forces of history.
Once the bubble pops (all bubbles pop), AI applications will have to rise or fall on their actual merits, not their promise. The odds are stacked against the long-term survival of high-value, risk-intolerant AI applications.
The problem for AI is that while there are a lot of risk-tolerant applications, they're almost all low-value; while nearly all the high-value applications are risk-intolerant. Once AI has to be profitable – once investors withdraw their subsidies from money-losing ventures – the risk-tolerant applications need to be sufficient to run those tremendously expensive servers in those brutally expensive data-centers tended by exceptionally expensive technical workers.
If they aren't, then the business case for running those servers goes away, and so do the servers – and so do all those risk-tolerant, low-value applications. It doesn't matter if helping blind people make sense of their surroundings is socially beneficial. It doesn't matter if teenaged gamers love their epic character art. It doesn't even matter how horny scammers are for generating AI nonsense SEO websites:
https://twitter.com/jakezward/status/1728032634037567509
These applications are all riding on the coattails of the big AI models that are being built and operated at a loss in the hope of someday becoming profitable. If they remain unprofitable long enough, the private sector will no longer pay to operate them.
Now, there are smaller models, models that stand alone and run on commodity hardware. These would persist even after the AI bubble bursts, because most of their costs are setup costs that have already been borne by the well-funded companies who created them. These models are limited, of course, though the communities that have formed around them have pushed those limits in surprising ways, far beyond their original manufacturers' beliefs about their capacity. These communities will continue to push those limits for as long as they find the models useful.
These standalone, "toy" models are derived from the big models, though. When the AI bubble bursts and the private sector no longer subsidizes mass-scale model creation, it will cease to spin out more sophisticated models that run on commodity hardware (it's possible that federated learning and other techniques for spreading out the work of making large-scale models will fill the gap).
So what kind of bubble is the AI bubble? What will we salvage from its wreckage? Perhaps the communities who've invested in becoming experts in Pytorch and Tensorflow will wrestle them away from their corporate masters and make them generally useful. Certainly, a lot of people will have gained skills in applying statistical techniques.
But there will also be a lot of unsalvageable wreckage. As big AI models get integrated into the processes of the productive economy, AI becomes a source of systemic risk. The only thing worse than having an automated process that is rendered dangerous or erratic by AI integration is having that process fail entirely because the AI suddenly disappeared, a collapse too precipitous for former AI customers to engineer a soft landing for their systems.
This is a blind spot in our policymakers' debates about AI. The smart policymakers are asking questions about fairness, algorithmic bias, and fraud. The foolish policymakers are ensnared in fantasies about "AI safety," AKA "Will the chatbot become a superintelligence that turns the whole human race into paperclips?"
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
But no one is asking, "What will we do if" – when – "the AI bubble pops and most of this stuff disappears overnight?"
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/12/19/bubblenomics/#pop
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
tom_bullock (modified) https://www.flickr.com/photos/tombullock/25173469495/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
4K notes
Text
Build-A-Boyfriend Chapter III: Grand Opening
I’ve been hungover all day… also.... I'm sorry that the chapters aren't as long as people like, that's just not really my style.
->Starring: AI!AteezxAfab!Reader ->Genre: Dystopian ->CW: none
Previous Part | Next Part
Masterlist | Ateez Masterlist | Series Masterlist
Four days before the grand opening, Yn stood in the center of the lab, arms crossed, a rare smile tugging at her lips.
No anomalies.
No glitches.
Every log was clean. Every model responsive and compliant.
She tapped through the final diagnostics as her team moved like clockwork around her, prepping the remaining units for transfer. The companions were ready. Truly ready.
They’d done it. And for the first time in months, Yn allowed herself to believe it.
“They’re good to go,” she said aloud to the room, voice steadier than it had been in weeks. “Now we just make it beautiful.”
There was no dissent. No hesitation. Just quiet, collective relief.
By 6:00 a.m. on launch day, the streets surrounding Sector 1 in Hala City were already overflowing. Women of all ages lined the polished roads, executives in sleek visors, college students in chunky boots, older women with glowing canes, and mothers with daughters perched on their hips.
A massive countdown hovered above the building in glowing light particles.
Ten. Nine. Eight. Seven. Six. Five. Four. Three. Two. One
When the number hit zero, the white-gold doors of the first Build-A-Boyfriend™ store slid open, and history, quite literally, stepped forward.
The inside of the flagship store was unlike anything anyone had seen, not in a simulation, not in VR, not even on the upper stream feeds.
It was clean but not cold, glowing with soft light that pulsed in time with ambient sound. Curved architecture, plants that weren’t quite real, air that smelled like skin and something floral underneath.
The crowd entered in waves, ushered by gentle AI voices projected from the ceiling:
“Welcome to Build-A-Boyfriend™, KQ Inc.’s most advanced consumer product to date. Please scan your wristband to begin. You are in complete control.”
Light pulsed with ambient music. The air carried soft notes of citrus and lavender. Walls curved inward like a safe embrace. It felt not like a store, but a sanctuary.
Just inside, a small platform rose, and the crowd hushed.
Standing atop it in a graphite suit that shimmered subtly with light-reactive tech, Vira Yun took the stage.
Her presence silenced everything. Not with fear. With awe.
She didn’t need a mic. The air itself amplified her words.
“Welcome, citizens of Hala City, and beyond. Today is not just a milestone for KQ Inc. It is a milestone for all of us, for womanhood, for autonomy, for intimacy on our terms. For centuries, we’ve been told to settle. To accept love as luck, not design. To believe that affection must be earned, that tenderness is a privilege, not a right. That era is over. Here, you are not asking. You are choosing. Each companion created within these walls is not simply a machine, but a mirror, one that reflects your needs, your softness, your strength, your fantasies, your fears. And we have given you the tools to shape that reflection without shame. This store is not about dependency. It’s about design. About saying: I know what I want, and I deserve to receive it, safely, sweetly, and with reverence. Let the world call it strange. Let them call it artificial. Because we know the truth: every human deserves to feel adored. And today, we’ve made that reality programmable.”
"Thank you. And welcome to Build-A-Boyfriend.”
From the observation deck, Yn stood quietly, tablet in hand, watching the dream unfold. She’d spent months writing code, assembling microprocessors, mapping facial expressions, and optimizing human simulation algorithms.
Now it was real. Now they were here, and it was working.
One of the first customers to walk in was 31-year-old office worker Choi Yunji.
She stepped forward, clutching her wristband like it might slip from her fingers. She’d told herself she was just coming to look. Just curious. Just research. But now that she was inside, face-to-face with a glowing interface, it felt more like a confession.
“Would you like an assistant, or would you prefer to design solo?” a soft voice asked beside her. Yunji turned. A young woman with slicked-back hair and a serene face smiled at her. The name tag read: Delin, Companion Consultant.
“I… I think I need help,” Yunji said.
“Of course,” Delin said warmly. “Let’s begin your experience.”
Station One: Body Frame
A holographic model appeared before them, neutral, faceless, softly breathing.
“Preferred height?”
“Taller than me. But not too much. I want to feel safe. Not… overpowered.”
“Understood.” Delin adjusted sliders with a flick of her fingers. The form shifted accordingly.
“Shoulders wider?” “Yes.” “Musculature?” “Athletic, not bulky.” “Skin tone?” “Honey-toned.”
Station Two: Facial Features
“I want kind eyes,” Yunji said. “And maybe a crooked smile. Something… imperfect.” “We can simulate asymmetry.” “What about moles?” “Placed to your liking.”
Station Three: Hair
“Longish. A little messy. Chestnut.” “Frizz simulation or polished strands?” “Frizz. I don’t want him looking like he came out of a factory.” Delin smiled. “Ironically, he did.”
Station Four: Personality Matrix
Yunji froze. The options felt too intimate.
“Start with a base? Empathetic, loyal, gentle, observant…” “Can I choose traits… I didn’t get to have before?” “Yes,” Delin said simply.
They adjusted levels: affection, boundaries, humor, attentiveness. A slider labeled “Emotional Recall Sensitivity” blinked softly.
“What’s this?”
“How deeply your companion internalizes memories related to you. It allows for more dynamic emotional bonding.” Yunji slid it to 80%.
Station Five: Wardrobe
“Something soft. Comfortable. Approachable.”
A cozy cardigan wrapped over a white tee. He looked like someone who would bring you tea without asking.
“Would you like to name your companion?” “…Call him Jaeyun.”
Her wristband lit up:
MODEL 9817-JAEYUN
Estimated delivery: 3 hours
Ownership rights granted to: C. Yunji
Yunji turned slowly, as if waking from a dream. Around her, other women embraced, laughed, shook — giddy or stunned. This was more than shopping. This was the return of the forbidden.
Around the Room
A pair of teenage twins argued over whether their boyfriends should look identical or opposites. A woman on her lunch break ordered her unit for home delivery with a bedtime story feature. Friends joked about setting up double dates and game nights with their new companions.
One customer squealed, “I’m going full fantasy: tall, sharp jaw, high cheekbones, and a scar over the brow. I want him to look like he’s been through something.” Her friend: “Big eyes, soft lips, librarian vibes.” Another: “I want dramatic jealousy in a soft voice. Like poetry with teeth.”
The store pulsed with joy, wonder, and something deeper. Yn felt it in her chest, pride and awe, washing over the logic-driven part of her that rarely gave way. She had helped build this future.
As the lavender glow settled over the quieting store, Yn remained in the observation wing, reviewing data. The launch had exceeded all projections.
She didn’t hear the door slide open behind her.
“Stunning, isn’t it?”
Vira stepped in, elegant as ever in graphite, her braid flawless, her voice smooth.
Yn straightened. “Yes, ma’am. It’s surreal.”
“We did it. You did it,” Vira said, standing beside her. “Revenue exceeded estimates by 37%. But more than that… I saw joy out there. Curiosity. Potential.”
Yn nodded. “The models held up. All systems within spec.”
“Good. Because in six days… we go even bigger.”
Yn turned. “The Ateez Line.”
Vira’s smile sharpened.
“Exactly. Eight elite units. Eight dreams. Fully interactive. Custom-coded. The most lifelike AI we’ve ever built. You’ve done beautiful work, Yn. Let’s make history again next week.”
She left as smoothly as she arrived. Yn exhaled, her fingers tightening around her tablet.
Six days.
Just six days.
Taglist: @e3ellie @yoongisgirl69 @jonghoslilstar @sugakooie @atztrsr
@honsans-atiny-24 @life-is-a-game-of-thrones @atzlordz @melanated-writersblock @hwasbabygirl
@sunnysidesins @felixs-voice-makes-me-wanna @seonghwaswifeuuuu @lezleeferguson-120 @mentalnerdgasms
@violatedvibrators @krystalcat @lover-ofallthingspretty @gigikubolong29 @peachmarien
@halloweenbyphoebebridgers
If you would like to be a part of the taglist please fill out this form
#ateez fanfic#ateez x reader#ateez park seonghwa#ateez jeong yunho#ateez yeosang#ateez choi jongho#ateez kim hongjoong#ateez song mingi#ateez mingi#ateez#hongjoong ateez#hongjoong x reader#kim hongjoong#seonghwa x reader#park seonghwa#yunho fanfic#yunho#ateez yunho#jeong yunho#yunho x reader#wooyoung x reader#wooyoung#jung wooyoung#ateez jung wooyoung#kang yeosang#yeosang#yeosang x reader#jongho#choi jongho#choi san
78 notes
Note
In what way does alt text serve as an accessibility tool for blind people? Do you use text to speech? I'm having trouble imagining that. I suppose I'm in general not understanding how a blind person might use Tumblr, but I'm particularly interested in the function of alt text.
In short, yes. We use text to speech (among other access technology like braille displays) very frequently to navigate online spaces. Text-to-speech software specifically designed for blind people is called a screen reader, and when used on a computer, it lets us navigate the entire interface with the keyboard instead of the mouse and hear everything on screen, as long as those things are accessible. The same applies to touchscreens on smartphones and tablets; instead of keyboard commands, the screen reader alters the way touch affects the screen so we hear what we touch before anything actually gets activated. That part is hard to explain via text, but you should be able to find many videos online of blind people demonstrating how they use their phones.
As you may be able to guess, images are not exactly going to be accessible to text-to-speech software. Screen readers are getting better and better at incorporating OCR (optical character recognition) to pick up text in images, along with rudimentary AI-driven image descriptions, but those are still nowhere near enough for us to get an accurate understanding of what is in an image the majority of the time without a human-made description.
Now I’m not exactly a programmer, so the terminology I use might get kind of wonky here, but when you use the alt text feature, the text you write as an image description effectively gets embedded onto the image itself. That way, when a screen reader lands on that image, instead of having to employ artificial intelligence to make mediocre guesses, it will read out exactly the text you wrote in the alt text section.
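(A technical aside, not part of the original ask or answer: the description ends up stored as a plain attribute in the page's markup, and that attribute is what assistive software reads back. Here is a minimal sketch in Python, using the BeautifulSoup library and a made-up snippet of HTML, showing where alt text lives; it is only an illustration, not how any real screen reader is implemented.)

```python
from bs4 import BeautifulSoup  # third-party library: pip install beautifulsoup4

# A tiny, hypothetical snippet of post markup with an alt-text description.
html = '<img src="cat.jpg" alt="A grey tabby cat asleep on a windowsill in the sun.">'

soup = BeautifulSoup(html, "html.parser")
image = soup.find("img")

# Assistive tools read this attribute instead of trying to "see" the pixels.
description = image.get("alt")
if description:
    print(f"Image description: {description}")
else:
    print("No alt text; a screen reader would just announce 'image'.")
```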
Not only that, but the majority of blind people are not completely blind and usually still have at least some amount of residual vision. So there are many blind people who may not have access to a screen reader, but who may struggle to visually interpret what is in an image without being able to click the alt text button and read a description. Plus, it benefits folks with visual processing disorders as well, where their visual acuity might be fine, but their brain’s ability to interpret what they are seeing is not. Being able to click the alt text icon in the corner of an image and read a text description can help that person better interpret what they are seeing in the image, too.
Granted, in most cases, typing out an image description in the body of the post instead of in the alt text section often works just as well, so that is also an option. But there are many other posts in my image descriptions tag that go over the pros and cons of that, so I won’t digress into it here.
Utilizing alt text or any kind of image description on all of your social media posts that contain images is hands-down one of the simplest and most effective things you can do to directly help blind people, even if you don’t know any blind people, and even if you think no blind people would be following you. There are more of us than you might think, and we have just as many varied interests and hobbies and beliefs as everyone else, so where there are people, there will also be blind people. We don’t only hang out in spaces to talk exclusively about blindness; we also hang out in fashion Facebook groups and tech subreddits and political Twitter hashtags and gaming-related Discord servers and on and on and on. Even if you don’t think a blind person would follow you, you can’t know that for sure, and adding image descriptions is one of the most effective ways to accommodate us even if you don’t know we’re there.
I hope this helps give you a clearer understanding of just how important alt text and image descriptions as a whole are for blind accessibility, and how we make use of those tools when they are available.
391 notes
Quote
Last month, an A.I. bot that handles tech support for Cursor, an up-and-coming tool for computer programmers, alerted several customers about a change in company policy. It said they were no longer allowed to use Cursor on more than just one computer.

In angry posts to internet message boards, the customers complained. Some canceled their Cursor accounts. And some got even angrier when they realized what had happened: The A.I. bot had announced a policy change that did not exist.

"We have no such policy. You’re of course free to use Cursor on multiple machines," the company’s chief executive and co-founder, Michael Truell, wrote in a Reddit post. "Unfortunately, this is an incorrect response from a front-line A.I. support bot."

More than two years after the arrival of ChatGPT, tech companies, office workers and everyday consumers are using A.I. bots for an increasingly wide array of tasks. But there is still no way of ensuring that these systems produce accurate information.

The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.

Today’s A.I. bots are based on complex mathematical systems that learn their skills by analyzing enormous amounts of digital data. They do not — and cannot — decide what is true and what is false. Sometimes, they just make stuff up, a phenomenon some A.I. researchers call hallucinations. On one test, the hallucination rates of newer A.I. systems were as high as 79 percent.
A.I. Hallucinations Are Getting Worse, Even as New Systems Become More Powerful - The New York Times
Model Collapse, Ladies and Gentlemen
22 notes
Text
Murderbot September Day 3: Augments
“Augments” specifically refer to implanted, integrated, artificial parts with feed access and bio-feedback. An ordinary prosthetic isn’t an augment.
Three hundred years ago, augmentation was relatively rare. It was overwhelmingly done for specific careers, such as spaceship pilots, doctors, lawyers, and high-powered financial professionals—people who needed to process and juggle a lot of information quickly. This was when the ancestors of Preservation took off on the Pressy and were out of contact with the rest of the galaxy for two hundred years.
By the time they awoke from cryo-sleep, augmentation had become a LOT more common. Ordinary businesspeople were getting augmented as kind of a status symbol—saying, my job is so important, so fast-paced, so high-profile, that I need augments to keep up with it. I’m Important. It also became increasingly common among coders, programmers, and tech professionals trying to remain competitive in tech jobs. Brain-feed augmentation for children as young as 11 or 12 became increasingly common—the younger you get brain augments, the more neuroplastic you are at that point, the easier the integration is and the more natural it feels to connect to the feed. This is sometimes done by parents wanting their kids to have an advantage in getting high-profile jobs, but it can also be sponsored by companies in exchange for a promise that the kid will be contracted to work for them for a certain number of years when they come of age. Sometimes, this is considered a really good deal.
The people of Preservation were incredibly put off by this.
Even now, nearly a hundred years later, augments are seen as a “corporate thing” on Preservation. Few people are augmented, and they’re still mostly doctors and pilots, and augmentation of children is illegal unless it’s for a legitimate medical cause (certain types of augments were developed for treating things like brain injuries or seizure disorders, for example.) As such, it’s rare but not unheard of to see augmented humans on Preservation. It still… tends to get a knee-jerk emotional reaction of “that’s a vain/barbaric Corporate thing,” though.
Pin-Lee has considered getting augmented before. By this point, you almost never encounter Corporation Rim lawyers who aren't. When she's negotiating interplanetary contracts, it's a valuable tool to get on their level. And lawyers who aren't augmented are kind of seen as naïve newbies punching above their weight class. But she hasn't, partially out of 1) legitimate medical concerns that augments could interact poorly with her medication, and 2) pride and spite that she doesn't have to make herself Like Them to be just as good at her job as them.
#The Murderbot Diaries#if you see an augmented human on Preservation there is a 90% chance they are a doctor a spaceship pilot or an immigrant#The other 10% is usually medical concerns#Getting augmented just cause you feel like can be done but is rare because it usually involves invasive brain surgery#And oh people get Judgy about it#MBS24#Murderbot September
63 notes
Text
A Beginner's Guide to Learning Cybersecurity
I created this post for the Studyblr Masterpost Jam, check out the tag for more cool masterposts from folks in the studyblr community!
(Side note: this post is aimed towards the technical side of security, rather than the governance/management side, because the tech stuff is what I'm familiar with.)
Where do I start?
Cybersecurity is a specialization of general tech & therefore builds on some concepts that you'll need to know before you can dive deep into security. It's good to have a background in and understand:
how computers & operating systems work
how to use Linux
computer networking & basic protocols
If you're serious about learning cybersecurity, it can be helpful to look at certifications. Even if you don't want to get certified or take the exam (they can get expensive), they provide you with a list of topics that you can use to guide your self-study. And if you want to find a job, a certification is practically required for getting your foot in the door.
I personally recommend the CompTIA series of certifications, because they're well-recognized and I think they expose you to a good breadth and depth of material to get you started. Start with the A+ certification if you have zero tech background. Start with the Network+ certification if you've never taken a networking course. Once you get your basic computer and networking knowledge down, then you can jump into security. The Security+ is a good starting point.
Do I need to know how to code?
No, but it would be really really helpful. You don't have to be a skilled software engineer, but understanding the basics and being able to write small scripts will give you a solid foundation.
From Daniel Miessler's post How to Build a Cybersecurity Career:
You can get a job without being a programmer. You can even get a good job. And you can even get promoted to management. But you won’t ever hit the elite levels of infosec if you cannot build things. Websites. Tools. Proofs of concept. Etc. If you can’t code, you’ll always be dependent on those who can.
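To make the "small scripts" point concrete, here is the kind of thing a beginner might write: a short, hypothetical Python sketch that checks whether a few common TCP ports are reachable on a machine you own or are authorized to test. It is not taken from any certification syllabus, just an illustration of the level of scripting that already pays off in security work.

```python
import socket

COMMON_PORTS = [22, 80, 443, 3389, 8080]  # SSH, HTTP, HTTPS, RDP, HTTP-alt

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    target = "127.0.0.1"  # only scan hosts you own or have explicit permission to test
    for port in COMMON_PORTS:
        state = "open" if port_is_open(target, port) else "closed or filtered"
        print(f"{target}:{port} is {state}")
```

Even a tiny script like this teaches sockets, timeouts, and error handling, which is exactly the foundation the quote above is talking about.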
How do I gain skills?
Play Capture the Flag (CTF) games.
Stay up to date with security news via an RSS reader, podcasts, or whatever works for you. Research terms that you're unfamiliar with.
Watch conference talks that get uploaded to YouTube.
Spin up a VM to practice working with tools and experiment on your own computer.
There are lots of brilliant, generous people in cybersecurity who share their knowledge and advice for free. Find their blogs, podcasts, and YouTube channels. Look for local meetups in your area.
I'm still relatively new to the field, but I have a general knowledge of lots of different things, so feel free to send me an ask and I can probably help point you to some resources. We're all in this together!
Previous Cybersecurity Masterposts
An Introduction to Cybersecurity
Cybersecurity Book Masterpost
Free Cybersecurity Learning Resources Masterpost
Masterpost of Study Tips for Cybersecurity
Cybersecurity Tools Masterpost
Thank you so much to everyone who participated in the #StudyblrMasterpostJam this week! It was wonderful to see what other studyblr folks are passionate about. The jam technically ends today but there are no official rules, so if you've been thinking about writing a masterpost, this is your sign!
32 notes
Text
At around 5 pm on a Thursday in December 2022, a privacy- and information-freedom-focused programmer named Micah Lee learned, to his shock, that he had just been banned from Twitter. His crime: posting a link to @Elonjets, an account on the competing social media service Mastodon that tracked the location of the private jet of Twitter’s new billionaire owner, Elon Musk—a link that Musk would later claim amounted to “doxing” despite the jet’s location info being publicly available.
For a moment, Lee grieved the loss of an account he'd spent years building, with more than 50,000 followers. Then, almost immediately, that feeling was replaced with relief to have escaped a platform he felt was already in precipitous moral decline. Since Musk had taken it over two months earlier, Twitter's new owner had already allowed previously banned far-right and even neo-Nazi figures back onto the service in the name of free speech—while simultaneously axing the accounts of leftists. Perhaps getting banned for offending the mercurial mogul behind those partisan decisions was “a good way to go,” Lee decided.
He hasn't looked back. Twitter eventually told Lee he could return to the service if he deleted his @Elonjets tweet. Instead, he stayed off the platform for eight months before finally deleting that post, but only so that he could log in and delete his entire history on the platform. A few months later, after Twitter had become X, he wrote a few messages promoting a book he'd written—all now deleted, too—and says he has barely touched the service otherwise. “Honestly, my mental health is much better since then,” he adds.
Now, Lee wants to help you achieve that same cleansing release. Today, he launched Cyd—an acronym for “Claw back Your Data”—a desktop application designed to give users more control over their X history: archiving it, trimming it to their preferences, or destroying it altogether. In the free version of Cyd, the program allows anyone to download their X posts—Cyd can save up to 2,000 of your most recent posts itself, or you can use X's built-in feature that allows you to download your entire archive—and then automatically delete them. For $36 a year, users can access Cyd's premium features, like erasing the contents of their account with more fine-grained filters based on variables like date, number of likes or retweets, or keywords, un-retweeting or removing likes from posts en masse, and unfollowing all X users.
While Cyd for now is designed specifically for managing—or emptying out—your X account, Lee says he hopes to eventually add other features for carrying out the same archiving and deletion functions on services like Facebook and Reddit. “A handful of billionaires like Elon Musk and Mark Zuckerberg and Jeff Bezos control all the platforms that we use all the time and where we have all our data,” Lee says. “I want to basically make it so that the users of these platforms—everybody else who isn't one of these really rich tech billionaires—has a bit more power.”
Lee is launching Cyd in the midst of a mass X exodus: Millions of users have fled the service in the wake of the election of Donald Trump, many turned off by Elon Musk's support for Trump in the form of both campaign contributions and his own nonstop political posts, as well as the increasingly conservative bent of X's remaining user base. The social media service Bluesky, in particular, appears to have gained most of those X refugees, adding at least 8 million users in the weeks after the election.
“I think it's incredibly timely for people to delete as much as they can, as much as they want to, off of X, and just stop using the platform,” Lee says, “and try to take some of its influence away and move it to better platforms like Bluesky or Mastodon.”
Cyd is only one of several apps like TweetDeleter and Redact.dev that offer to archive and prune your X account. But unlike TweetDeleter, Cyd allows mass deletion of posts in its free version. While Redact.dev offers more free deletion features than Cyd, Cyd's $36 paid version is about half the price of Redact's $6 a month premium plan. It also offers some features that Redact doesn't, like deleting posts based on number of likes or retweets, so that users can, for instance, scrub all but their most viral posts.
Despite Cyd's timeliness, Lee says he's been working on a social media archiving and deletion tool for years, even before Musk acquired Twitter. In 2019, he released Semiphemeral, an earlier app with social media management features similar to Cyd, partially in response to his own experience of becoming the target of harassment and antisemitic tweets in the previous years. To perform its social media archiving and curation, Semiphemeral tapped into Twitter's API, an interface to its backend data provided to developers.
A few months after Musk acquired Twitter, however—and after the site summarily banned Lee and a handful of other journalists for linking to @Elonjets—Twitter ended free access to that API, killing Semiphemeral. A year later, however, Lee was laid off from his job as the director of information security at the news outlet the Intercept, along with about a third of the newsroom, and started looking for projects to work on. He soon figured out that he could revive Semiphemeral by designing it to automatically view and scrape X pages from a web browser instead of using X's API. “Everything it does is actually something you can do yourself in a web browser,” Lee says. “It would just take you a lot more time.” He also rebranded the app to Cyd in an attempt to make its name easier to remember and spell, and created a cute blue bird in a pirate mask (also named Cyd) to serve as its animated logo.
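For readers wondering what "scraping from a web browser instead of using the API" looks like in practice, here is a rough, hypothetical Python sketch using the Playwright browser-automation library. It is not Cyd's actual code, the selector is made up, and a real tool would also have to handle logging in, scrolling, and rate limits; it only illustrates the general approach of driving a browser and reading what it renders.

```python
from playwright.sync_api import sync_playwright  # pip install playwright; then: playwright install

def collect_visible_posts(profile_url: str, max_posts: int = 20) -> list[str]:
    """Open a profile page in an automated browser and return the text of posts
    currently rendered on it, roughly the way a person scrolling would see them."""
    posts: list[str] = []
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(profile_url)
        page.wait_for_timeout(3000)  # crude wait for client-side rendering to finish
        # "article" is a placeholder selector; real markup differs and changes often.
        for element in page.query_selector_all("article")[:max_posts]:
            posts.append(element.inner_text())
        browser.close()
    return posts

if __name__ == "__main__":
    for text in collect_visible_posts("https://example.com/some-profile"):
        print(text[:80])
```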
Though it's cheaper than some other options, Cyd's annual cost of $36 for its premium features will no doubt still be too steep for many users. Lee says he plans to work with organizations that represent at-risk groups to offer the tool for free to those who need it. He also points out, perhaps only half-jokingly, that $36 is less than half the annual cost of a Twitter Blue account.
More to the point, he says he hopes that, unlike some of his previous software projects—the secure and anonymous file-sharing tool OnionShare and the safe-download protection app Dangerzone—this one will have a business model that gives him time to work on it and expand its feature set to include other social media platforms. “I would love to make it so that it can pay for itself,” Lee says, “so that I can make it bigger.”
Despite those big ambitions, Lee says that he hadn't quite intended to launch Cyd just as Bluesky experiences a user explosion at X's expense—and as more users than ever consider the motives and power of the tech billionaires who control the platforms they use.
“I knew that X was going down the drain, and I knew that people would be pissed off about it. I didn't really quite realize that people would be quitting en masse like they are now," says Lee. “It just happens to be perfect timing.”
18 notes
Note
Level 2 with anyone that's from Regretevator? (Minus Pest, Folly, and Spud.)
Here you go!!
We chose stat!! :3
-Pixel
Names: Alicia, Marise, Stat, Al, Mari, Ace, Code, Byte, Pix, Tech, Blue, Sprocket, Nova, Circuit, Glitch, Echo, Data, Kit, Cyra, Logic
Pronouns: she/her, they/them, code/codes, stat/stats, gear/gears, spark/sparks, pix/pixels, digi/digits, sys/systems, tech/techs, bot/bots
Genders: female, technogirl, pixelgender, codeflux, systemfluid, glitchette, gearflux, digiandrogynous, softmechanic, techpunk, electricgender, mechfemme, screenflux
Age: 16–17
Roles: Survivalist, mechanic, programmer, tactician, protector, inventor, strategist
Likes: Machinery, programming, her computer (S.T.A.T.), survival, cats, forming bonds with like-minded individuals, puzzles, digital art, sci-fi novels, tinkering with gadgets, building defenses, creating blueprints
Dislikes: Destruction of her creations, unfamiliar environments, being misunderstood, betrayal, loud or chaotic situations, losing control, unnecessary risks, power outages, overly emotional situations
Faceclaim:
How they fulfil their roles: STAT applies her sharp intellect and technical skills to protect others and maintain order within chaotic situations. She builds tools, devises strategies, and creates solutions to keep the system functioning efficiently.
Typing quirk: Uses ALL CAPS for emphasis, includes tech-inspired symbols like & or %, and peppers sentences with programming terms.
Sign off: 💾⚙️, 🔩📡, 🖱️🔋
#baa blog#bah#build a headmate#build an alter#endo safe#endo system#endogenic#endogenic friendly#endogenic safe#willogenic
7 notes
Note
Happy STS! Can you talk more about the worldbuilding and details around your 'verse's fictional drugs? How do they function, or impact the culture and society? Are there weird processes around manufacturing them or other quirks unique to them?
Thanks for the ask and for giving me an excuse to ramble about this 😁
The drugs-not-drugs and other tools based on the same tech take the form of mind-altering frequencies. They're inspired by a whole heap of things including non-invasive neurotechnology processes that translate specific brainwave frequencies into audible tones, binaural beats and solfeggio frequencies, sonic weapons, and mind control experimentation programmes like MKUltra.
While those are all real, the version in the books takes a massive jump into future hypotheticals, shrunk down into tiny programmable cells worn behind the ear. These connect to the neural implant that everyone in the books' world has fitted at birth.
They can also be cast into an environment and picked up by a receiver cell worn by the people there. A fun use of this could be everyone in the club being on MDMA at the same time, a positive use could be calming frequencies cast into a hospital ward, and a dark use could be sedating prisoners or wiping and overwriting their memories after being used in human experiments (this happens to Aria in Name From Nowhere).
People can be trained to project emotions and experiences to another person via a cast aimed at one person rather than an entire environment. This can be used (good) for processing trauma and accessing repressed memories, or (shitty) for aggressive interrogations, mind control, and manipulation. Rafe in Bridge From Ashes and Name From Nowhere is an expert projector and has experience at both ends of the good-to-shitty scale.
To end on a happier note, recreational frequencies cause no harm to the mind or body, and the effect stops as soon as the cell is detached. Because you don't build up a tolerance to them the way you would to real-world drugs, it's possible to get the same strength of hit every time, which reduces the likelihood of addiction fuelled by hit-chasing. Obvs some people will want to be on something all the time, but the damage limitation is still limitationing.
(I could honestly go into way more detail about how this tech fits into society, but this is getting ridiculously long already so I'm gonna shut up)
#writeblr tags#sts#storyteller saturday#answered asks#name from nowhere#bridge from ashes#writeblr#writers on tumblr
9 notes
Text
Today, Her Royal Highness The Princess Royal and Vice Admiral Sir Timothy Laurence arrived in Colombo, Sri Lanka.
Day 1: Colombo
🌺 At a Welcome Ceremony at Bandaranaike International Airport, Her Royal Highness was received by dignitaries including the British High Commissioner to the Democratic Socialist Republic of Sri Lanka, His Excellency Mr. Andrew Patrick and Minister of Foreign Affairs of Sri Lanka, His Excellency Mr. Ali Sabry.
🧵 The Princess first visited the MAS Active Factory, one of the largest apparel tech companies in South Asia, which has been identified by the UK Fashion and Textile Association (UKFT) as an important Sri Lankan partner.
👚 As President of the UKFT, Her Royal Highness had an opportunity to meet staff and tour the facility to hear more about their innovative designs and partnerships with UK brands.
🎂 Next, Her Royal Highness visited Save The Children Sri Lanka’s Head Office in Colombo. This year marks 50 years of Save The Children working in Sri Lanka.
💗 The Princess had an opportunity to hear about some of the programmes the charity has provided, which have contributed to humanitarian and development needs across the country, including in education, health and nutrition and vocational skills development.
👧 As Patron of Save The Children UK, Her Royal Highness unveiled a plaque commemorating the 50th Anniversary of Save The Children working in Sri Lanka.
🏥 Following this, The Princess Royal visited Lady Ridgeway Hospital for Children to see Save The Children’s Social Emotional Learning Tool Kit Programme, Tilli, in action.
📚 Her Royal Highness met hospital staff who are implementing the Tilli programme, which is a play-based Social-Emotional Learning tool kit that incorporates evidence-based interventions such as games and story-telling to assist parents and teachers in facilitating meaningful child-friendly discussions with children on topics such as trust, consent, bodies and boundaries.
🇱🇰 The Princess Royal previously visited Sri Lanka in March 1995 with Save The Children to learn more about their projects in the country.
Video from Royal Family Instagram story | Posted 10th January 2024
65 notes
Text
The Princess Royal and Vice Admiral Sir Timothy Laurence arrived in Colombo, Sri Lanka.
Day 1: Colombo
At a Welcome Ceremony at Bandaranaike International Airport, Her Royal Highness was received by dignitaries including the British High Commissioner to the Democratic Socialist Republic of Sri Lanka, His Excellency Mr. Andrew Patrick and Minister of Foreign Affairs of Sri Lanka, His Excellency Mr. Ali Sabry.

The Princess’s first visited the MAS Active Factory, one of the largest apparel tech companies in South Asia to be identified by the UK Fashion and Textile Association (UKFT) as an important Sri Lankan partner.

As President of the UKFT, Her Royal Highness had an opportunity to meet staff and tour the facility to hear more about their innovative designs and partnerships with UK brands.
Next, Her Royal Highness visited Save The Children Sri Lanka’s Head Office in Colombo. This year marks 50 years of Save The Children working in Sri Lanka.
The Princess had an opportunity to hear about some of the programmes the charity has provided, which have contributed to humanitarian and development needs across the country, including in education, health and nutrition and vocational skills development.
As Patron of Save The Children UK, Her Royal Highness unveiled a plaque commemorating the 50th Anniversary of Save The Children working in Sri Lanka.
Following this, The Princess Royal visited Lady Ridgeway Hospital for Children to see Save The Children’s Social Emotional Learning Tool Kit Programme, Tilli, in action.
Her Royal Highness met hospital staff who are implementing the Tilli programme, a play-based Social-Emotional Learning tool kit that incorporates evidence-based interventions such as games and story-telling to assist parents and teachers in facilitating meaningful, child-friendly discussions with children on topics such as trust, consent, bodies and boundaries.
The Princess Royal previously visited Sri Lanka in March 1995 with Save The Children to learn more about their projects in the country.
The Princess Royal delivered a message from The King to the President and First Lady of Sri Lanka this evening.
© Royal UK
#BUSY PRINCESS#princess anne#princess royal#tim laurence#timothy laurence#workanne#brf#british royal family
63 notes
·
View notes
Text
world’s most powerful figures. These children were found in sprawling underground facilities beneath the U.S. border—high-tech hubs equipped with medical labs, holding cells, and advanced transport systems. One base, hidden 100 feet below the desert, was camouflaged to evade detection and revealed horrors beyond belief: cryogenic storage units, evidence of genetic experiments, and classified technology used for inhumane tests.
The Elite’s Pyramid of Power
Testimonies from the rescued children expose a system of trafficking that touches every layer of society. Documents revealed lists of high-profile clients, including politicians, CEOs, and entertainers, all participating in these heinous acts. Major banks and corporations facilitated these transactions, hiding the money trails in plain sight. Even more damning, blackmail operations ensured the silence and compliance of these elites—photos and recordings used as leverage to enforce their global agenda.
Big Pharma and Silicon Valley: Silent Partners in Evil
Stockpiles of experimental drugs were found in these facilities, linking Big Pharma to the sedation and manipulation of children. Tech giants were also implicated, with advanced tools used to protect traffickers and censor whistleblowers. These companies provided both the means and the cover to allow this network to thrive.
Military Whistleblowers Expose the Agenda
Insiders have revealed that these trafficking operations were tied to secretive military programs. Children were subjected to trauma-based mind control experiments, aimed at creating programmable individuals for elite control. These black-budget projects, funded by trafficking profits, sought to cement a new form of social dominance.
Trump’s Fearless Leadership
Trump’s decisive actions have dismantled a network that past administrations ignored or enabled. His return marks a turning point, proving he is the leader the elites feared. Military insiders and trusted allies have prepared for this operation for years, and Trump’s boldness has struck a fatal blow against the deep state.
The Fight is Just Beginning
This victory is monumental, but the war against the elite machine rages on. The evidence is undeniable, and the patriots are rising. With Trump leading the charge, the tide has turned—but the question remains: Are you ready to stand and fight for the truth? The time is now. Victory is ours.
🏆🏆🏆
Join and share this channel immediately⬇️
Trump Supporters ✅️
#DJT#truth#justice#crimes against humanity#the hunters become hunted#elites#clowns#puppets#these people are evil#nothing can stop what’s coming#speak up#standup#stay strong#be united#be safe#please share#wwg1wga
9 notes
·
View notes
Text
B-1 Refusals.
The Rockwell B-1 Lancer is a supersonic variable-sweep wing, heavy bomber used by the United States Air Force. It has been nicknamed the "Bone" (from "B-One"). As of 2024, it is one of the United States Air Force's three strategic bombers, along with the B-2 Spirit and the B-52 Stratofortress. Its 75,000-pound (34,000 kg) payload is the heaviest of any U.S. bomber.
And I literally refused to continue repairing them, which apparently made an impact on Congress....
My military career was focused on repairing each of these Bombers, each valued at more than one billion dollars; and that was before six years of inflation.
Sitting here and piecing together my personal history up to this point, I think that it was *more* important to my own personal safety to *stop* working on these things.
I've written at length about my own intelligence, my skill as an Aircraft Mechanic and Diagnostician, the value these aircraft bring to the U.S., the hurdles I faced as an enlisted mechanic, PTSD relating to certain events that these craft were involved in, and, previously, exactly why I was underpaid and overwhelmed in the performance of my duties.
I refused to go back to the B-1s... Not once, but three times in the last few years of my career, when I requested a special duty in education and training material development.
I keep hitting on this idea that "They let me be a computer programmer" as the primary reason for accepting that special duty.
I'm not certain that's the whole truth anymore.
I hate singing my own praises. I don't like the concept of self-promotion because it feels like I'm trying to force people to believe my own crap.
I'm fairly certain that my past success, while interspersed with professional failure... would have gotten me killed.
Going back to fix more B-1s, especially in a deployment environment, would have gotten me Targeted with a threat that your average enlisted person just doesn't get... especially not a random Staff Sergeant.
There have been reports of High Value targets before.... One particular report of an Army Sergeant responsible for a few deaths being targeted by foreign terrorist organizations rings in my mind.
And that's why I think my personal "spidey-sense" would not stop going off for such a long time.
Because if I was a High-Value target; undervalued by the Air Force....
Well what's the risk/reward chart look like for that?
Low-Risk/High-Value target; Holy shit... That's me.
A lot of what I have to go on is unverified.
If I had been taken out, that would literally have had such a huge impact on the mission that I can't calculate it.
I couldn't accept working on the B-1 again... Not after the Air Force gave me the literal tools to calculate my worth as a target for the enemy.
And what's worse, I couldn't figure out exactly what the Air Force Machine as a whole was thinking: that they not only wouldn't warn me directly, they purposefully devalued me.
On top of the mission and operational PTSD I was dealing with, I think I know exactly where my Burnout came from.
When I enlisted... I didn't trust the military, but I understood the reasoning and value behind certain decisions, even if they weren't in my own best interest. After making it through basic and tech training, I trusted those decisions more.
But at that point at the end of my enlistment... I could no longer understand the reasoning or justifications around *me* specifically.
Especially not after proving myself in an entirely separate career field I was "never trained" to do. (By the AF)
I was a 7-level... a Craftsman, according to the AF, in at least two completely separate career fields; and that didn't matter.
What the hell am I supposed to do then?
What the hell is going on?
Am I literally just unable to associate with people at my own level who are responsible for vouching for me? Or is it something else ...
What else could that be? I'm a master Diagnostician in both Aircraft and Software .... I should be able to figure it out, right?
The correct answer, I think; Was to Quit.
"Quitters never Prosper" they say... And they *weren't* wrong; it's been a kind of hell ever since.
But I cannot guarantee the kind of Hell I would have otherwise faced or not faced had I stayed in.
And everything pointed to "Stay away from the B-1s."
My gut feeling, that last bastion, would not shut up about it. It wasn't *only* that I wanted to do what I wanted to.
I would have done it earlier if that were the case.
It was something bigger. Deeper, more than I was even allowed to perceive at the time.
My Gut feeling; Stay Away, Danger, that Spidey-Sense that just wouldn't quiet down.
Something deeply wrong that I can't quantify has happened or will happen, and the Air Force doesn't seem to know what to do about it either.
With everything that's happened so far; I can't say it was the wrong choice yet. Just that I don't know.
9 notes
·
View notes