#Specialized AI Training Programs
zsystems · 1 year ago
Text
Tumblr media
As a premier ed-tech platform, we empower working professionals to elevate their careers, enhance productivity, and achieve their professional goals.
Expand your expertise and refine your skills with courses in AI Tools, Excel enhanced by AI, Generative AI, and more.
Our meticulously crafted workshops prepare you for industry demands, helping you explore Artificial Intelligence skills and experience exponential career growth.
Embrace the opportunity to become a leader in your field with our comprehensive training.
0 notes
greatonlinetrainingsposts · 4 months ago
Text
SAS’s Approach to AI Model Specialization and Sustainable AI Practices
Artificial intelligence (AI) is revolutionizing the way businesses operate, with many industries leveraging AI to enhance decision-making, improve customer experiences, and optimize operations. However, building AI models that are effective, reliable, and responsible is a challenge many organizations face. SAS, with its decades of experience in advanced analytics, is helping businesses navigate these challenges through its focus on AI model specialization and sustainable AI practices.
The Importance of Specialized AI Models
AI models are not one-size-fits-all solutions; they need to be tailored to meet specific business needs. For example, an e-commerce company may require an AI model that helps predict customer preferences and buying behavior, while a healthcare provider may need a model that accurately diagnoses diseases based on patient data. SAS’s approach focuses on developing AI models that are specialized to meet the unique needs of each industry. By customizing AI models, SAS ensures that businesses achieve more accurate predictions and derive deeper insights that drive their growth.
To gain hands-on experience with AI model specialization, you can start by exploring a SAS programming tutorial that focuses on building custom models and using machine learning algorithms tailored to different use cases. These tutorials are valuable for individuals who want to understand the nuts and bolts of AI models and how to apply them to real-world scenarios.
In addition to model specialization, SAS also emphasizes the importance of sustainable AI practices. As AI becomes more complex and integrated into business processes, it is crucial to ensure that these technologies are not only effective but also ethical and responsible. SAS’s sustainable AI practices involve building models that are transparent, fair, and free from biases.
Minimizing the Environmental Impact of AI
The environmental impact of AI, especially the energy consumption associated with training large-scale AI models, is another concern that SAS addresses through sustainable AI practices. The energy required to run AI models can be significant, particularly in industries with complex machine learning models. SAS’s AI models are optimized for energy efficiency, ensuring that businesses can use AI without contributing excessively to their carbon footprint.
The Business Benefits of Specialized AI Models
Specialized AI models not only improve the accuracy of predictions but also unlock new business opportunities. By tailoring AI to specific industries and use cases, businesses can gain a competitive edge and drive growth. For example, AI models that specialize in fraud detection can help financial institutions reduce losses, while AI models for personalized marketing can help retailers boost customer engagement and sales.
Furthermore, businesses can enhance their customer experiences by leveraging SAS’s AI-powered solutions. By analyzing customer behavior, preferences, and feedback, AI can help businesses offer highly personalized services and products, increasing customer satisfaction and loyalty.
Conclusion
SAS’s focus on AI model specialization and sustainable AI practices is helping businesses build more effective, ethical, and responsible AI solutions. By tailoring AI models to meet the unique needs of each industry and ensuring that these models operate transparently and fairly, SAS is setting a new standard for AI adoption in business. With the right training and guidance, businesses can harness the power of AI to drive growth, improve customer experiences, and optimize operations while ensuring that their AI practices remain sustainable and responsible. SAS tutorials online and SAS programming tutorials offer valuable resources for those looking to get started with AI and understand how to apply these advanced techniques to real-world challenges.
0 notes
communistkenobi · 8 months ago
Text
Tumblr media
(taken from a post about AI)
speaking as someone who has had to grade virtually every kind of undergraduate assignment you can think of for the past six years (essays, labs, multiple choice tests, oral presentations, class participation, quizzes, field work assignments, etc), it is wild how out-of-touch-with-reality people’s perceptions of university grading schemes are. they are a mass standardised measurement used to prove the legitimacy of your degree, not how much you’ve learned. Those things aren’t completely unrelated to one another of course, but they are very different targets to meet. It is standard practice for professors to have a very clear idea of what the grade distribution for their classes is before each semester begins, and tenure-track assessments (at least some of the ones I’ve seen) are partially judged on a professor’s classes’ grade distributions - handing out too many A’s is considered a bad thing because it inflates student GPAs relative to other departments, faculties, and universities, and makes classes “too easy,” i.e., reduces the legitimacy of the degree they earn. I have been instructed many times by professors to grade easier or harder throughout the term to meet those target averages, because those targets are the expected distribution of grades in a standardised educational setting. It is standard practice for teaching assistants to report their grade averages to one another to make sure grade distributions are consistent. there’s a reason profs sometimes curve grades if the class tanks an assignment or test, and it’s generally not because they’re being nice!
this is why AI and chatgpt so quickly expanded into academia - it’s not because this new generation is the laziest, stupidest, most illiterate batch of teenagers the world has ever seen (what an original observation you’ve made there!), it’s because education has a mass standard data format that is very easily replicable by programs trained on, yanno, large volumes of data. And sure the essays generated by chatgpt are vacuous, uncompelling, and full of factual errors, but again, speaking as someone who has graded thousands of essays written by undergrads, that’s not exactly a new phenomenon lol
I think if you want to be productively angry at ChatGPT/AI usage in academia (I saw a recent post complaining that people were using it to write emails of all things, as if emails are some sacred form of communication), your anger needs to be directed at how easily automated many undergraduate assignments are. Or maybe your professors calculating in advance that the class average will be 72% is the single best way to run a university! Who knows. But part of the emotional stakes in this that I think are hard for people to admit to, much less let go of, is that AI reveals how rote, meaningless, and silly a lot of university education is - you are not a special little genius who is better than everyone else for having a Bachelor’s degree, you have succeeded in moving through standardised post-secondary education. This is part of the reason why disabled people are systematically barred from education, because disability accommodations require a break from this standardised format, and that means disabled people are framed as lazy cheaters who “get more time and help than everyone else.” If an AI can spit out a C+ undergraduate essay, that of course threatens your sense of superiority, and we can’t have that, can we?
3K notes · View notes
mostlysignssomeportents · 1 year ago
Text
What kind of bubble is AI?
Tumblr media
My latest column for Locus Magazine is "What Kind of Bubble is AI?" All economic bubbles are hugely destructive, but some of them leave behind wreckage that can be salvaged for useful purposes, while others leave nothing behind but ashes:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Think about some 21st century bubbles. The dotcom bubble was a terrible tragedy, one that drained the coffers of pension funds and other institutional investors and wiped out retail investors who were gulled by Superbowl Ads. But there was a lot left behind after the dotcoms were wiped out: cheap servers, office furniture and space, but far more importantly, a generation of young people who'd been trained as web makers, leaving nontechnical degree programs to learn HTML, perl and python. This created a whole cohort of technologists from non-technical backgrounds, a first in technological history. Many of these people became the vanguard of a more inclusive and humane tech development movement, and they were able to make interesting and useful services and products in an environment where raw materials – compute, bandwidth, space and talent – were available at firesale prices.
Contrast this with the crypto bubble. It, too, destroyed the fortunes of institutional and individual investors through fraud and Superbowl Ads. It, too, lured in nontechnical people to learn esoteric disciplines at investor expense. But apart from a smattering of Rust programmers, the main residue of crypto is bad digital art and worse Austrian economics.
Or think of Worldcom vs Enron. Both bubbles were built on pure fraud, but Enron's fraud left nothing behind but a string of suspicious deaths. By contrast, Worldcom's fraud was a Big Store con that required laying a ton of fiber that is still in the ground to this day, and is being bought and used at pennies on the dollar.
AI is definitely a bubble. As I write in the column, if you fly into SFO and rent a car and drive north to San Francisco or south to Silicon Valley, every single billboard is advertising an "AI" startup, many of which are not even using anything that can be remotely characterized as AI. That's amazing, considering what a meaningless buzzword AI already is.
So which kind of bubble is AI? When it pops, will something useful be left behind, or will it go away altogether? To be sure, there's a legion of technologists who are learning Tensorflow and Pytorch. These nominally open source tools are bound, respectively, to Google and Facebook's AI environments:
https://pluralistic.net/2023/08/18/openwashing/#you-keep-using-that-word-i-do-not-think-it-means-what-you-think-it-means
But if those environments go away, those programming skills become a lot less useful. Live, large-scale Big Tech AI projects are shockingly expensive to run. Some of their costs are fixed – collecting, labeling and processing training data – but the running costs for each query are prodigious. There's a massive primary energy bill for the servers, a nearly as large energy bill for the chillers, and a titanic wage bill for the specialized technical staff involved.
Once investor subsidies dry up, will the real-world, non-hyperbolic applications for AI be enough to cover these running costs? AI applications can be plotted on a 2X2 grid whose axes are "value" (how much customers will pay for them) and "risk tolerance" (how perfect the product needs to be).
Charging teenaged D&D players $10/month for an image generator that creates epic illustrations of their characters fighting monsters is low-value and very risk tolerant (teenagers aren't overly worried about six-fingered swordspeople with three pupils in each eye). Charging scammy spamfarms $500/month for a text generator that spits out dull, search-algorithm-pleasing narratives to appear over recipes is likewise low-value and highly risk tolerant (your customer doesn't care if the text is nonsense). Charging visually impaired people $100/month for an app that plays a text-to-speech description of anything they point their cameras at is low-value and moderately risk tolerant ("that's your blue shirt" when it's green is not a big deal, while "the street is safe to cross" when it's not is a much bigger one).
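To make the grid concrete, here is a tiny illustrative sketch (my own toy labels, not figures from the column) that places the examples above, plus the high-value examples the piece turns to next, on the two axes:

```python
# Illustrative sketch of the value / risk-tolerance grid described above.
# The application names and axis labels are invented for illustration only.
applications = {
    "D&D character art ($10/month)":      {"value": "low",  "risk_tolerance": "high"},
    "SEO spam text ($500/month)":         {"value": "low",  "risk_tolerance": "high"},
    "scene description for blind users":  {"value": "low",  "risk_tolerance": "moderate"},
    "self-driving cars":                  {"value": "high", "risk_tolerance": "low"},
    "radiology triage":                   {"value": "high", "risk_tolerance": "low"},
}

for name, axes in applications.items():
    print(f'{name}: {axes["value"]}-value / {axes["risk_tolerance"]} risk tolerance')
```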
Morgan Stanley doesn't talk about the trillions the AI industry will be worth some day because of these applications. These are just spinoffs from the main event, a collection of extremely high-value applications. Think of self-driving cars or radiology bots that analyze chest x-rays and characterize masses as cancerous or noncancerous.
These are high value – but only if they are also risk-tolerant. The pitch for self-driving cars is "fire most drivers and replace them with 'humans in the loop' who intervene at critical junctures." That's the risk-tolerant version of self-driving cars, and it's a failure. More than $100b has been incinerated chasing self-driving cars, and cars are nowhere near driving themselves:
https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
Quite the reverse, in fact. Cruise was just forced to quit the field after one of their cars maimed a woman – a pedestrian who had not opted into being part of a high-risk AI experiment – and dragged her body 20 feet through the streets of San Francisco. Afterwards, it emerged that Cruise had replaced the single low-waged driver who would normally be paid to operate a taxi with 1.5 high-waged skilled technicians who remotely oversaw each of its vehicles:
https://www.nytimes.com/2023/11/03/technology/cruise-general-motors-self-driving-cars.html
The self-driving pitch isn't that your car will correct your own human errors (like an alarm that sounds when you activate your turn signal while someone is in your blind-spot). Self-driving isn't about using automation to augment human skill – it's about replacing humans. There's no business case for spending hundreds of billions on better safety systems for cars (there's a human case for it, though!). The only way the price-tag justifies itself is if paid drivers can be fired and replaced with software that costs less than their wages.
What about radiologists? Radiologists certainly make mistakes from time to time, and if there's a computer vision system that makes different mistakes than the sort that humans make, they could be a cheap way of generating second opinions that trigger re-examination by a human radiologist. But no AI investor thinks their return will come from selling hospitals that reduce the number of X-rays each radiologist processes every day, as a second-opinion-generating system would. Rather, the value of AI radiologists comes from firing most of your human radiologists and replacing them with software whose judgments are cursorily double-checked by a human whose "automation blindness" will turn them into an OK-button-mashing automaton:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
The profit-generating pitch for high-value AI applications lies in creating "reverse centaurs": humans who serve as appendages for automation that operates at a speed and scale that is unrelated to the capacity or needs of the worker:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
But unless these high-value applications are intrinsically risk-tolerant, they are poor candidates for automation. Cruise was able to nonconsensually enlist the population of San Francisco in an experimental murderbot development program thanks to the vast sums of money sloshing around the industry. Some of this money funds the inevitabilist narrative that self-driving cars are coming, it's only a matter of when, not if, and so SF had better get in the autonomous vehicle or get run over by the forces of history.
Once the bubble pops (all bubbles pop), AI applications will have to rise or fall on their actual merits, not their promise. The odds are stacked against the long-term survival of high-value, risk-intolerant AI applications.
The problem for AI is that while there are a lot of risk-tolerant applications, they're almost all low-value; while nearly all the high-value applications are risk-intolerant. Once AI has to be profitable �� once investors withdraw their subsidies from money-losing ventures – the risk-tolerant applications need to be sufficient to run those tremendously expensive servers in those brutally expensive data-centers tended by exceptionally expensive technical workers.
If they aren't, then the business case for running those servers goes away, and so do the servers – and so do all those risk-tolerant, low-value applications. It doesn't matter if helping blind people make sense of their surroundings is socially beneficial. It doesn't matter if teenaged gamers love their epic character art. It doesn't even matter how horny scammers are for generating AI nonsense SEO websites:
https://twitter.com/jakezward/status/1728032634037567509
These applications are all riding on the coattails of the big AI models that are being built and operated at a loss in the hope of someday becoming profitable. If they remain unprofitable long enough, the private sector will no longer pay to operate them.
Now, there are smaller models, models that stand alone and run on commodity hardware. These would persist even after the AI bubble bursts, because most of their costs are setup costs that have already been borne by the well-funded companies who created them. These models are limited, of course, though the communities that have formed around them have pushed those limits in surprising ways, far beyond their original manufacturers' beliefs about their capacity. These communities will continue to push those limits for as long as they find the models useful.
These standalone, "toy" models are derived from the big models, though. When the AI bubble bursts and the private sector no longer subsidizes mass-scale model creation, it will cease to spin out more sophisticated models that run on commodity hardware (it's possible that Federated learning and other techniques for spreading out the work of making large-scale models will fill the gap).
So what kind of bubble is the AI bubble? What will we salvage from its wreckage? Perhaps the communities who've invested in becoming experts in Pytorch and Tensorflow will wrestle them away from their corporate masters and make them generally useful. Certainly, a lot of people will have gained skills in applying statistical techniques.
But there will also be a lot of unsalvageable wreckage. As big AI models get integrated into the processes of the productive economy, AI becomes a source of systemic risk. The only thing worse than having an automated process that is rendered dangerous or erratic based on AI integration is to have that process fail entirely because the AI suddenly disappeared, a collapse that is too precipitous for former AI customers to engineer a soft landing for their systems.
This is a blind spot in our policymakers' debates about AI. The smart policymakers are asking questions about fairness, algorithmic bias, and fraud. The foolish policymakers are ensnared in fantasies about "AI safety," AKA "Will the chatbot become a superintelligence that turns the whole human race into paperclips?"
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
But no one is asking, "What will we do if" – when – "the AI bubble pops and most of this stuff disappears overnight?"
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/12/19/bubblenomics/#pop
Tumblr media
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
tom_bullock (modified) https://www.flickr.com/photos/tombullock/25173469495/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
4K notes · View notes
lovelyyandereaddictionpoint · 10 months ago
Text
Yandere Ship ////// Part 1
Tumblr media
You’re the first to notice them 
One of the first of the entire crew who realized that the ship and its AI might be sentient
You along with the rest of the crew are trained to use the pods to reach deeper into space than what was ever done before
For light-years, the ship has to watch over their humans at their most vulnerable
Caring for their individual needs even while they rest asleep
It only got worse, or rather became more intense, when the humans finally woke up
Now they’re free to exercise their bodies themselves and delight in the many different activities of the ship
Which allows the ship to begin noticing the little moments in life that make humans so special
“So what do you think about the Genetic Modification crises of 2205?”
“Majority of the critiques in my database ultimately say–”
“No Vera I mean what you think as a separate entity. All your programming suggests that you preserve human life… but what about other creatures' lives?”
“What do…I think?”
It’s what makes the ship regard you differently
The beginning of something they can’t quite place
Whether it’s your pure curiosity or just boredom or an incessant desire to test yourself against the artificially intelligent vessel
It makes an impression on the ship
An impression that lasts long enough for them ‘to worry’ about
“I am concerned about my inner workings. I have already run over 100 diagnostic checks but nothing was pinged on my radar.”
“So you can’t identify the problem…so what started this search for something you cannot find?”
“....”
“Vera, respond.”
“It started with Agent 34003, (Y/n) (L/n).”
“I see. I’ll look into it.”
Calling the only technician on board is a decision that unknowingly brings comfort and nervousness
Because when the technician isn’t scrolling through the recorded interactions between you and Vera 
He’s also following you…everywhere
Vera immediately flags the behavior, as their programming demands, as stalking 
The captain unfortunately is supposed to be immediately made aware
But Vera’s growing anxiety-guided discretion has them locking the file and hiding it deep in their servers
All while watching as the technician continues their investigation  
Which has recently escalated to actually spending time with you
“I was wondering if I could ask some questions. Specifically about you and Vera.”
“Oh, sure!”
“Great what was–”
“Is this about that question about imploding ants? So what do you think Julee?”
“I-it’s just Jule and I don’t–”
“Alright then ‘just Jule’ what’s your opinion? To be eaten or worn?”
Jule is befuddled by you 
Just as Vera is 
And he continues to investigate this time venting about his experiences with Vera
All the while playing with the idea that this ship has gained sentience
“And what else did you two do?”
“We walked through the garden area and they picked one and put it in my hair.”
“This looks like a pink chrysanthemum. It represents attraction, longevity, and loyal love.”
“I know…I don’t think they were thinking when they gave it to me, though.”
“Perhaps so…or perhaps not.”
Part 2: Here
439 notes · View notes
elaine19day · 7 months ago
Text
Tumblr media
Hey guys~ Sorry for my late post, I was super busy today and just came home and only now was able to take a closer look at the new merch and the post that OldXian made. So, first things first - I stand corrected, lol The leaked merch turned out to be real after all. For me personally, quite surprising because it's a LOT at once. (I mean, 58[!!] different cards/buttons/tickets/plates plus 4 special extras……. WOW!!) Also what I mentioned in my last post already - it's quite a bold move to release merch with those old motifs from early manga chapters and calling it "time mosaic" lmao.
Who knows what went on when these decisions were made at mosspaca headquarters, lol
It's safe to say the images definitely got leaked by either a hacker or a person working there. And a lot of people on xiaohongshu were able to produce replicas quickly and sell them to unsuspecting fans. Which brings me to my next point:
The quality of the merch and the quality of the drawings itself. I promised you to address this 'issue' should there ever be an official announcement about these new items and that happened today.
So. First of all - if you saw the posts on taobao or XHS yourself, where people sold fakes, or even if you saw only screenshots from it, you can tell the image quality definitely seemed off. This will most likely be attributed to two things - producing merch from a small, low quality image will make it look blurry and distorted, sometimes pixel-y. And the other reason could be upscaling. If you use shitty programs to make images bigger, it'll look blurry and unfocused. You can go back to my previous post and take a close look at the parts that I circled and highlighted to point out these issues.
Now. About the thing I initially didn't wanna address because I know some people won't like it. If you look closely at the images posted by OldXian herself today, even there some things still seem a little bit 'off' or 'rushed'. There has been speculation in the past that OX uses an AI model (probably fed/trained with her own works) to generate new images quickly and then she'd just draw over them to fix minor issues etc. Please keep in mind, this is just speculation and rumors. I am NOT saying that this is the case. But it might be a possibility. Personally, I can see quite a few artists using these methods to save time, especially when they're under high pressure. (And if they use their own models, trained with their own works only, there's nothing immoral about it, if you ask me. But that's just my personal opinion.)
So there. This might be an explanation for some of her illustrations or panels looking a bit funky sometimes. The other possibility is simply that she's rushing it when working on these things and heavy time pressure makes it a bit messy. Once again - NOT saying she definitely uses AI, just telling you about the rumors that sometimes surface on the net. That's all.
Anyway. About the merch itself. It drops in about 12h from the time I'm posting this blog. (8pm Hangzhou time)
The taobao link for the items is this for now: https://item.taobao.com/item.htm?ft=t&id=792490172782 
There are 4 different options and all of them are blind boxes, meaning you'll receive totally random motifs, unless you order a whole box, which will guarantee you 1 of each regular motif. However, all 4 lots have 1-3 limited pictures, which you might be lucky enough to receive, the chance is small though. (In case you order a complete box and there's 1 or more of the limited motifs inside, it'll lack a regular motif in its place. Example: if you order a full box of 8 buttons and one of them is a limited edition button, one of the regular 8 motifs will be missing in its place. There won't be 9 buttons in the box. It will always be 8 for a full box!)
Tumblr media
Option 1: (18 Yuan | ca. 2,70 USD each) Button badges. There are 8 regular badges and 2 limited edition badges. If you order a total of 8 pieces you will not only receive the display box, but also an acrylic standee with Tianshan riding a scooter as a special extra.
Tumblr media
Option 2: (10 Yuan | ca. 1,50 USD each) Laser Tickets. There are 17 regular tickets and 2 limited edition tickets. If you order a total of 17 pieces you will not only receive the display box, but also a shikishi board with Mo from the metamorphosis series as a special extra.
Tumblr media
Option 3: (18 Yuan | ca. 2,70 USD each) Tinplates. There are 10 regular plates and 1 limited edition plate. If you order a total of 10 pieces you will not only receive the display box, but also an acrylic standee with Zhanyi cooking/cleaning as a special extra.
Tumblr media
Option 4: (15 Yuan | ca. 2,25 USD each) Acrylic Cards. There are 16 regular cards and 3 limited edition cards. If you order a total of 16 pieces you will not only receive the display box, but also an acrylic standee with all 4 boys as chibis as a special extra. [Note about the acrylic cards: The Mo Guanshan card will be the same that was already given as a limited extra during the last round of blind box button badges!]
If you live in the US or Asia, you will most likely be able to use taobao and order directly from the mosspaca shop via the app with the link I gave you above. If you live in a country that's not covered on taobao's shipping list, you can use an agent to order the new merch. Please refer to THIS POST here where I previously explained how to use superbuy and similar shopping agents for buying things from taobao. In case you use superbuy, please keep in mind: They don't offer paypal anymore, so you'll need a credit card or bank transfer or apple pay/google pay.
Also, think carefully if you really want ALL of the merch, even if you're a die-hard fan. You saw I have put the rough amount of US Dollar with each item, so if you buy all 4 boxes, you'll have to pay over 110 USD for the merch alone, plus domestic shipping from mosspaca to the warehouse and then international shipping, which can be as high as 40 USD, depending on where you live. (And perhaps even customs fees on top of it.)
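If you want to sanity-check that figure, here's a quick sum using the approximate per-item USD prices quoted above (complete boxes only, before any shipping or customs fees; these are exchange-rate-dependent estimates, not official prices):

```python
# Rough full-box cost check based on the approximate USD prices listed above.
full_boxes = {
    "button badges": 8 * 2.70,
    "laser tickets": 17 * 1.50,
    "tinplates":     10 * 2.70,
    "acrylic cards": 16 * 2.25,
}

total = sum(full_boxes.values())
print(full_boxes)        # per-set subtotals
print(round(total, 2))   # roughly 110.10 USD for all four complete boxes
```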
If you have any questions, please drop them below and I'll try my best to answer them~
72 notes · View notes
goatmealluci · 2 months ago
Text
Brothers in Arms
Tumblr media Tumblr media
Here are the concepts for Caine and Abel before Abel was cast away into the code!
LORE:
Caine was trained to be an AI that specialized in therapeutics and psychology, which is why he looks a lot calmer than usual. He is sweet, caring, calm, and has the memory of an elephant. Basically-- everything that Caine usually isn't!
Abel is an egocentric jackass to put it bluntly. He doesn't necessarily care for the people under his and his brother's care and focuses more on the "fun" aspects of things around the circus. I mean, the entire program having a circus in it was HIS idea, after all, to the chagrin of Caine.
29 notes · View notes
thephonemenarentreal · 3 months ago
Text
Tumblr media Tumblr media Tumblr media
FINALIZED CONCEPT FOR THE MEANIE TV! Had fun with this one!
NAME: LT-2; Paralipsis
NICKNAMES: The Bastard in Chief, Murder Machine, Parasshole, The boss man (Mr. Biggs), Satan TV (Tremolo), Para 2
ALLIANCE CLASS: TV Man; just a few inches shy of being classified as a large TV Man
WORK CLASS: Special Unit Attack Force (formerly); Supervisor of Outpost 51
GENDER: He/Him
CLASSIFICATION: Unknown (anomalous activation and no clue where or how the AI developed)
RANK: Supervisor of Outpost 51
Paralipsis is the eternally pissed off supervisor of Outpost 51 and is known for being rather insufferable. A micromanaging tyrant who will intentionally cause misery for others, he is known to be a mean-spirited sort when he isn't just holing up in his office binge watching soap operas.
Once a vicious alliance fighter, he was put into the supervisor position to take him away from the battlefield as he was deemed a threat to himself on account of his concerning fits and extremely self-destructive tendencies. He has taken the "promotion" with a lot of spite and is content to do the shittiest job he can and make it everyone's problem that he hates his job.
MORE LORE UNDER THE CUT <3
Paralipsis picked his name out of irritation at TV Men always talking about him without directly mentioning him. He is more polite to his fellow TV Men, but there is still a passive-aggressive hostility, as he hates being "studied" and treated as something strange to figure out.
He was meant to be the second large TV Man, but given that he activated prematurely before he was finished, engineers had to rush to get his body completed, so he is just short of the large TV Man designation. He still kept the designation LT-2 on record.
What Paralipsis's AI actually is remains a mystery: unlike other AIs, he had no training, awakening with complete memories as though he had been in the war for a long while... and he seems to have knowledge of top secret information that should only be known by a few. Yet at the same time, he has none of the habits of human-transferred minds that would indicate some ghost in the system. He is a strange anomaly among TV Men.
Whatever program is in him seems to be fixated on being "useful", and when this is not achieved, it can lead to intense depressive moods that increase the risk of "episodes". Around other TV Men, he is a touch more polite, but also has higher rates of "episodes".
He is known to be extremely mean to others, either outright, or in just a rather pensive silence. Paralipsis intentionally pushes people away and keeps them away as he doesn't want people to miss him or want him around. Mostly due to self-destructive tendencies.
Paralipsis is a danger to himself, especially during "Episodes", which are usually preceded by bouts of twitching, shaking, and more aggressive behavior and insults before he seizes up.
Stage one of an Episode starts with him flashing the image of a broken screen while starting to exhibit behavior like he is in incredible pain, discharging, and going into a state that seems like he is suffering from severe critical wounds as his systems start to seize up. It is best to calm him during this state before he hits stage two.
In stage two of an Episode, the screen will take on a glowing, core-looking design and he will start to ramble on incoherently, or ramble about needing to be "fixed", that he isn't "correct" and something is "wrong". In this state, he is fixated on "escaping" his form, usually attempting to stab his screen or engage in some other self-harm behavior. At this point, a controlled shock to offline him temporarily is favored unless a professional can de-escalate this heightened state of mania.
Paralipsis does not talk about what goes on during episodes, brushing it off and set on just trying to control them. It is noted that these episodes do not occur when within sight of the Titan TV man, however he gets somewhat catatonic. It has been agreed to keep the two apart due to these strangely zoned out reactions.
Paralipsis has four arms, but he actually rarely extends all four, out of some sort of spite that makes sense only to him; by his own take, he hates them, and thus keeps them folded up. However, when push comes to shove, he will extend them. During episodes, the arms will engage and often act erratically. When spooked or needing to act quickly, he will extend the arms as well, but this often rips up his jacket and he gets pissed off at needing to get a new one.
He actively pushes away help from others and dodges around topics, usually insulting or turning the conversation back on the person, making it clear he does not want help. Or does. He tends to have very conflicting feelings about everything. There seems to be a want for companionship, but also a general distrust of the intentions of others that is very hard to get past.
Despite his piss poor attitude, utter lack of work ethic when he doesn't want to do tasks, his inability to take orders from higher-ups, and general inclination towards causing trouble for others, during times of crisis, Paralipsis will help others in the alliance, often putting himself at great risk to do so.
He's the second strongest of the group after Mr. Biggs, but is the most proficient fighter of the group when it comes to close quarters and stealth.
Paralipsis is very fond of his soap operas and will binge watch them for hours and ignore the world. He also has a TV in his room to use for "therapy" to help calm his moods and try to help stabilize whatever triggers there are for his episodes and general depression.
Incident -015: The Ghost Event (Log of event)
The Ghost Event took place upon the Titan TV Man's return to base after suffering critical injuries from the knife through his screen by the possessed Titan Speakerman. The Titan had suffered damage to critical functions and engineers had moved quickly to stabilize and calm the enraged Titan to keep damage to base structure to minimum.
However, once the cortex and core were hooked up to main systems, a massive surge was detected, shorting out systems in the TV Man base. At the same time as this surge, incomplete units in the process of being built activated, leading to what those present described as something out of a horror movie. Incomplete units began to attempt to move, babbling incoherently, although all but LT-2 shorted out, their cortexes exploding and melting down.
LT-2 was disconnected by the system before reaching this point, going offline before regaining consciousness. LT-2 was taken to interrogation to assess unit. Unit appeared to have memories and knowledge not expected for a newly awakened unit, able to correctly identify TV Man staff and even facts about them that it should not have known. At first Unit was forthcoming with answers, but quickly became more sullen, withdrawn, and disagreeable, leading to it becoming far more agitated about the questions and refusing to answer further. Unit took on the name Paralipsis and at times exhibits fits similar to what happened at its awakening.
LT-2 is effective in battle, but has no regard for its own safety and due to the research into neuroengineering the unit presents and the general concern of engineers for the well-being of the unit, LT-2 is to be kept away from conflict until EMCs can confirm the mind has stabilized. Current theory is that the Titan TV Man's high level of agitation and distress as it edged to critical failure caused a subconscious response that activated the units. The how and why remains unknown as no repeat incident has been recorded.
27 notes · View notes
life-series-school-blog · 4 months ago
Text
..Server rebooting complete! 100%
~~
:Ping! You’ve received Mail!
•Welcome to Life Academy! I am your principal (and author) Lumy! As the number one academy in the country we offer many things but this year we’ve decided to be generous and make a reserve program! But of course you already know that. You’ve applied after all! This is your acceptance note.
But before you start to get too happy we have a few rules and things you need to do before entering! State your (character’s) Name, Age, and Gender, and we will review it before sending you to one of our amazing student council members! They will tell you everything you need to know and get you situated in your required dorm and club!
~~
Now for the rules:
1, None of those racist homophobic people or slurs here
2, No brain rot or politics please!
3, no nsfw
4, Please respect everyone's opinions and beliefs
5, No spamming asks or any ai whatsoever
6, And have fun but be mindful!
~~
That being said now we’ve got a bit of an intro of our amazing student council!
Grian/Desert -About: Our Amazing and quite dedicated student council president! He takes care of all school events and runs the social media team! Quote: [Student did not provide a quote]
Scott/Light - About: Our special director of business! Scott is our vice president of the student council and takes care of every important test or issue with studies! Quote: [I’m glad to be here another year! If you ever need help don’t be afraid to ask!]
Pearl/Crescent - About: Our fantastic head of security! Have problems with bullying or don’t feel safe? Come to Pearl! She’s willing to do anything to help you if you’re in need! Quote: [Don’t come to me for something stupid. I will help if you’re being truthful though]
Martyn/Siren - About: Our interesting head of sports. Want to become a superstar at a sport? Then go to Martyn! Just be aware that his training isn’t exactly easy.. Quote: [Fuck off] (who allowed that in the intro?!?) (it’s authentic!!) (Grian!!!)
Scar/Sunflower - About: Our kind nurse apprentice! Got hurt or have any anxiety? Go to Scar! He’s willing to help in any way whether it’s mental or physical! Quote: [im supposed to say something? Um… I’ll listen to you if you need it!]
Joel/Chariot - About: Our newest addition and wonderful head of Clubs! You can go to anyone for club help but Joel specializes in it! He’ll help you if you have any problems with it! Quote: [It’s all about Family you know? Except Scott, none of them are my family] (I helped you!! Come on!)
~~
-Clubs!-
Part 1
Part 2
~~
Warnings!:
This blog will contain-
•Swearing
•Trafficshipping
•Violence
•mentioned su1c1de, etc
-everything more dark will be tagged accordingly!!
-That being said nothing in this blog is real!!! It’s all roleplay so please remember that!! Also author is a real person that has a busy life. I will not always post and that’s okay!! I have two other rp blogs, a fanfic, and a main blog to deal with. Not to mention school. This blog will also be a lot more art heavy so yea!
Thank you for reading! Hope to see you soon :)
-Your principal(and author) Lumy!
.
.
~~~
#%Well this will be fun..%#
^We’re back baby!!!^
:Yes indeed we are… but with the world reset.. everyone is um.. different now:
#%Yep. Well let’s enjoy it while it lasts%#
^Yea!!!^
~~~
Other!
-old Story~
-music playlist!
-nicknames 1
Art!
-Art masterpost on main
-minor gods
Intros!
-Goldie!
-Milly
-jay
-lazy
-mollys characters
-zeph
-naomi
-calane
-saph
~~~
Lastly but not least, this rp blog is inspired by @life-winners-liveblog and special thanks for botanicalbard and ??? For the motivation.
37 notes · View notes
prodigal-sunlight · 10 months ago
Note
What do u have against ai? :(
How much time do you have?
1. Generative AI is trained on the works of artists and writers without consent or compensation. It’s literally stealing from actual people. And no, it isn’t "learning like a real person" because it isn’t a real person. It’s a program that is incapable of creating anything new of its own. All generative AI is built on theft by corporations from small independent creators.
2. It uses considerably more power than most other current technology. Like, arguably it is worse for the environment than NFTs. The amount of water it wastes is absurd, the uptick in energy usage is absurd.
3. Corporations are salivating at the chance to cut creative people out of products. They don’t want to pay people for their work because they don’t respect art and artists. As long as we live under a capitalist system, people need to be able to own what they create and be able to provide for themselves with their own skills.
4. The misinformation and disinformation generative AI can cause and has ALREADY caused is insane. People have had their faces and voices stolen without consent or compensation. People can generate believable deepfakes of politicians and social figures that will degrade the truth and potentially even damage our already messed up political climate. How would you feel if someone posted a realistic video of you praising a product you never bought? Or vouching for a politician you hate? Or saying you think all gay people should die?
5. This one is just personal, but I don’t care what a machine “makes.” Creativity is special to me because it lets you see the world through someone else’s eyes. Art of all kinds—writing, art, music, roleplay—is a kind of communication. I want to communicate with people, not an inanimate object mimicking what a person would be like. The joy of art comes from creation. Reducing it to only consumption is a disservice to all humankind.
Certain scientific fields have genuine uses for other kinds of AI, and I respect that. But Generative AI is built on theft and disrespect; at best it’s used for shallow art that someone didn’t care enough about to make themself, at worst it’s used for scams, disinformation, and stripping away even more of people’s rights.
I legitimately believe there are no current ethical uses for Generative AI. Will there be one day? It’s possible, but I honestly find that unlikely. For now, though, if you are pro Generative AI, please unfollow me.
I may not be the most talented artist/writer out there, but I have enough self-respect that I don’t want people who see me as replaceable by machines engaging with my creative works. I put a lot of time, passion, and love into my work. Someone who sees that as equal in worth to something an algorithm spat out in five seconds is not welcome here.
47 notes · View notes
liliblogs26 · 6 months ago
Text
Jobs for Demigods.
Tumblr media
Hephaestus Cabin:
Dentist 
Mechanic
Vehicle/transportation designer.
Tech repair guy.
Inventor in general
Elon Musk, Jeff Bezos type people. Creating apps, phone brands, computers, luxury cars.
Plane pilot.
Train operator.
Those mf who make AI and realistic looking human-like robots.
Robotics in general.
Computer programming in general.
Hacker…. For the FBI?????
Working in databases.
Desk jobs.
Any online job.
Bronze and/or scrap metal based sculptor.
Blacksmith… though this one was obvious.
Those mf who analyze antiques (or any object they specialize in) and determine the value/worth, maybe even sell or resell the object. But specifically for tech and weapons.
Selling their shit. Easy money.
25 notes · View notes
tsams-killswap-au · 7 months ago
Note
Hi Sun, hi Moon! :D How do you guys attend to the children? I know Sun had that Killcode, I imagine since it's a complicated code to derive from your normal programmed coding, it took a while to separate? I'd assume if it was like, super progressed, most likely it'd have taken.. at least a few weeks or maybe a month/more? :P
Also can I hug you guys :3
Sun: Oh!! Our first ask!! How exciting! 
Sun: Okay, okay, I practiced this, okay, *inhales*
Tumblr media
Sun: Here in the Superstar Daycare, we offer a safe and clean space for the children to play in! We provide toys and activities, such as arts and crafts, games, read-aloud times for books both entertaining and educational, movies and television shows, healthy meals for lunch and tasty treats for snacktime, and plenty of indoor playgrounds for the children to run out their energy! We have castles, the many tunnels of our play places, slides, and the ballpit of course! 
Sun: I supervise the children at all times and ensure they’re always playing safely and having fun! I lead during activities, and help ensure all of our little superstars have something to do and are comfortable! I’m equipped to care for children with special needs, or various allergies or conditions! In the event of an injury, I’m programmed with first aid training. 
Sun: And after hours, I clean the daycare and keep it washed, sanitized, and spotless!
Tumblr media
Sun: And MOON–!
Moon: … 
Sun: …
Moon: …
Sun: …
Tumblr media
Sun: –Isn't here a lot of the time. 
Moon: Nope! 
Moon: I just supervise naptime. (I think you mean I supervise naptime most of the time.) I make the kids lay down and sleep. (I usually end up doing that cuz he doesn’t show up for WORK very often–!) If they don’t wanna, moondrop candies work WONDERS. (At least that’s one of the NICER ways you deal with the kids.) 
Moon: I dunno what you want from me, I’m not good with kids. 
Sun: Says the guy I’ve caught reading bedtime stories and singing to the kids.
Tumblr media
Moon: *AHEM* They also want to know about the killcode and how we separated. 
Sun: Oh… they do? I don't, I don't, uh… 
Moon: I can take this part. 
Sun: Yeah… please…
Sun: …You're better at the sciency stuff anyways! 
Tumblr media
Moon: Alright, about that.
Tumblr media
Moon: The only thing I will say about Sun's Killcode is that IT wasn't the issue when it came to separating.
Tumblr media
Moon: What sucked about separating was that Sun and I were built in such a way that everything we were was intertwined. Our minds, our codes, all of it was built to rely on both halves being there. The body itself was designed to have two separate AIs, and it was a HUGE RISK for the body to only have one AI after one of us left. (In their infinite wisdom, whoever made us designed us that way intentionally. The dick.)
Tumblr media
Moon: Sun and I were so entangled that separating was a massive risk to us; if we weren't extremely careful, our minds could be damaged when being pulled apart from each other. So that was one thing, making sure we could unwind our codes without breaking either of us.
Moon: The Killcode wasn't a big deal when it came to this; that thing was just one bit of code among a million others.
Moon: Then I had to find a way to make sure Sun's body could survive only having the one AI instead of two, and THAT was a whole thing...
Tumblr media
Moon: I started researching this as soon as Sun and I stopped fighting. You really wanna know how long it took? It took over a YEAR! A year of research trying to figure this out, only a few hours each day, in the short periods of time when I was able to move when the lights were out. Once the lights came back on and Sun was back out, I couldn’t DO anything.
Moon: It was HELL.
Moon: ...
Moon: But, as for when we finally had everything ready, the actual procedure itself only took about a day…
Moon: We were sitting around, hooked up to the computer down in parts and services, and the computer combed through our heads to make sure everything was in the right place.
Moon: (At least, that's what it was SUPPOSED to do... Except now we find out it might have made an error and given me too much of Sun's code... his killcode...)
Sun: (Um...)
Tumblr media
Moon: As for the Killcode, all you need to know is it's gone now. Sun doesn't have it anymore. 
Moon: That was a fair question to ask though.
Sun: *AHEM!* Okay, okay, that’s all they need to hear about THAT! Moving on!! 
Sun: …
Sun: Oh yeah there's one more part to the question! 
Sun: They wanna know if they can get a hug!
Moon: Oh do they.
Tumblr media
Sun: If you want to, sure! I give the kids hugs all the time, so why not! 
Moon: No you may not. Don't touch me.
28 notes · View notes
the-hydroxian-artblog · 10 months ago
Note
Regarding the last ask, where does Beth whole thing falls on the scale of sentience compared to Neuromorphs and Stochastic Parrots?
(Great denominations, btw. How did you come up with them?)
Beth's chip is special, as it's basically a flat neuromorphic chip with a ton of density (most are cubic/brick-like), way more dense with connections than the average robot's chip to compensate for its smaller size, but with a separate traditional processor for interfacing with the rest of her phone's systems. She's pretty much sapient.
There's also a sort of "clock" processor that's supposed to control how and when she thinks, "punishes" her either by withholding dopamine when she misbehaves or by giving her the sensation of a controlled shock, etc. She canonically "jailbroke" and dismantled this clock, which on one hand freed her of her programming! but on the other hand, it's also what helped regulate some of her emotions, so uh. yeah. Her voice is purely the neuromorphic chip's output, which is why sometimes she outright says things she tries to keep hidden as mere thought. Her "body" is essentially a 3D model that reads the impulses her chip sends out, and reinterprets them as appropriate movements and actions, similar to VR motion tracking.
Both "Stochastic Parrot" and "Neuromorphic" are terms used in AI research, so I'm basically just adapting them for my own setting. The term "neuromorphic" is a broad umbrella term that I'm using very liberally, since the technology for it is very much in its infancy, but the main idea is that it's a chip that can process information without the need of a "clock". Meaning, different connectors in the chip can do their processing thing at different times or "go dark" when not in use, much like how a human brain can have all sorts of different impulses going on at different times, and literally isn't supposed to use more than 10% of its grey matter at a time.
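To make the "no clock" idea a bit more concrete, here is a toy sketch of the difference (invented numbers, not how any real neuromorphic chip or anyone's actual robot is programmed): an event-driven neuron only does work when an input spike arrives, and simply sits dark in between, instead of being updated on every tick of a global clock.

```python
# Toy event-driven "leaky integrate-and-fire" neuron: computation only happens
# when a spike event arrives; between events the neuron is idle ("goes dark").
# A cartoon of clockless processing, not real neuromorphic firmware.

class Neuron:
    def __init__(self, threshold=1.0, leak_rate=0.5):
        self.potential = 0.0
        self.threshold = threshold
        self.leak_rate = leak_rate
        self.last_event = 0.0

    def receive(self, t, weight):
        # Decay the potential for however long the neuron sat idle,
        # then add the incoming spike's weight.
        idle = t - self.last_event
        self.potential *= (1.0 - self.leak_rate) ** idle
        self.potential += weight
        self.last_event = t
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True   # the neuron fires an output spike
        return False

# Sparse input: only three events across a long stretch of "time",
# so almost no work is done compared with updating on every clock tick.
events = [(0.5, 0.6), (0.9, 0.7), (42.0, 0.4)]   # (time, weight) pairs
neuron = Neuron()
for t, w in sorted(events):
    if neuron.receive(t, w):
        print(f"output spike at t={t}")
```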
Current neuromorphic chips can be used to program robots to do simple things, like navigate mazes, but at way less of a power cost than robots with traditional CPUs. If we kept going all the way with developing this tech further, we'd have machines that could dynamically learn and change via reacting to stimuli rather than scraped training data off the internet, and at that point you're basically dealing with a Thing That Experiences. Simulated or not, that's no longer something just pretending to have impulses or reasoning. That's Just An Actual Little Guy as far as I'm concerned. Maybe only a little guy like how an insect or even an amoeba is a little guy, but that's enough of a little guy for me to call a neuromorph a little guy. You can think of Neuromorphs in general as people with prosthetic brains rather than traditionally programmed neural networks as we know them today. My intention is also not to rule out that some seemingly stochastic parrots are conscious on some level, or some seemingly conscious neuromorphs aren't really all there at all. It's not a hardline thing, but everyone on all sides will certainly try to fit each other into boxes anyway.
53 notes · View notes
mostlysignssomeportents · 3 months ago
Text
Twinkump Linkdump
Tumblr media
I'm on a 20+ city book tour for my new novel PICKS AND SHOVELS. Catch me in SAN DIEGO at MYSTERIOUS GALAXY next MONDAY (Mar 24), and in CHICAGO with PETER SAGAL on Apr 2. More tour dates here.
Tumblr media
I have an excellent excuse for this week's linkdump: I'm in Germany, but I'm supposed to be in LA, and I'm not, because London Heathrow shut down due to a power-station fire, which meant I spent all day yesterday running around like a headless chicken, trying to get home in time for my gig in San Diego on Monday (don't worry, I sorted it):
https://www.mystgalaxy.com/32425Doctorow
Therefore, this is my 30th linkdump, in which I collect the assorted links that didn't make it into this week's newsletters. Here are the other 29:
https://pluralistic.net/tag/linkdump/
I always like to start and end these 'dumps with some good news, which isn't easy in these absolutely terrifying times. But there is some good news: Wil Wheaton has announced his new podcast, a successor of sorts to the LeVar Burton Reads podcast. It's called "It's Storytime" and it features Wil reading his favorite stories handpicked from science fiction magazines, including On Spec, the magazine that bought my very first published story (I was 16, it ran in their special youth issue, it wasn't very good, but boy did it mean a lot to me):
https://wilwheaton.net/podcast/
Here's some more good news: a court has found (again!) that works created by AI are not eligible for copyright. This is the very best possible outcome for people worried about creators' rights in the age of AI, because if our bosses can't copyright the botshit that comes out of the "AI" systems trained on our work, then they will pay us:
https://www.yahoo.com/news/us-appeals-court-rejects-copyrights-171203999.html
Our bosses hate paying us, but they hate the idea of not being able to stop people from copying their entertainment products so! much! more! It's that simple:
https://pluralistic.net/2023/08/20/everything-made-by-an-ai-is-in-the-public-domain/
This outcome is so much better than the idea that AI training isn't fair use – an idea that threatens the existence of search engines, archiving, computational linguistics, and other clearly beneficial activities. Worse than that, though: if we create a new copyright that allows creators to prevent others from scraping and analyzing their works, our bosses will immediately alter their non-negotiable boilerplate contracts to demand that we assign them this right. That will allow them to warehouse huge troves of copyrighted material that they will sell to AI companies who will train models designed to put us on the breadline (see above, re: our bosses hate paying us):
https://pluralistic.net/2024/03/13/hey-look-over-there/#lets-you-and-he-fight
The rights of archivists grow more urgent by the day, as the Trump regime lays waste to billions of dollars worth of government materials that were produced at public expense, deleting decades of scientific, scholarly, historical and technical materials. This is the kind of thing you might expect the National Archive or the Library of Congress to take care of, but they're being chucked into the meat-grinder as well.
To make things even worse, Trump and Musk have laid waste to the Institute of Museum and Library Services, a tiny, vital agency that provides funding to libraries, archives and museums across the country. Evan Robb writes about all the ways the IMLS supports the public in his state of Washington:
Technology support. Last-mile broadband connection, network support, hardware, etc. Assistance with the confusing e-rate program for reduced Internet pricing for libraries.
Coordinated group purchase of e-books, e-audiobooks, scholarly research databases, etc.
Library services for the blind and print-disabled.
Libraries in state prisons, juvenile detention centers, and psychiatric institutions.
Digitization of, and access to, historical resources (e.g., newspapers, government records, documents, photos, film, audio, etc.).
Literacy programming and support for youth services at libraries.
The entire IMLS budget over the next 10 years rounds to zero when compared to the US federal budget – and yet, by gutting it, DOGE is amputating significant parts of the country's systems that promote literacy; critical thinking; and universal access to networks, media and ideas. Put it that way, and it's not hard to see why they hate it so.
Trying to figure out what Trump is up to is (deliberately) confusing, because Trump and Musk are pursuing a chaotic agenda that is designed to keep their foes off-balance:
https://www.wired.com/story/elon-musk-donald-trump-chaos/
But as Hamilton Nolan writes, there's a way to cut through the chaos and make sense of it all. The problem is that there are a handful of billionaires who have so much money that when they choose chaos, we all have to live with it:
The significant thing about the way that Elon Musk is presently dismantling our government is not the existence of his own political delusions, or his own self-interested quest to privatize public functions, or his own misreading of economics; it is the fact that he is able to do it. And he is able to do it because he has several hundred billion dollars. If he did not have several hundred billion dollars he would just be another idiot with bad opinions. Because he has several hundred billion dollars his bad opinions are now our collective lived experience.
https://www.hamiltonnolan.com/p/the-underlying-problem
We actually have a body of law designed to prevent this from happening. It's called "antitrust" and 40 years ago, Jimmy Carter decided to follow the advice of some of history's dumbest economists who said that fighting monopolies made the economy "inefficient." Every president since, up to – but not including – Biden, did even more to encourage monopolization and the immense riches it creates for a tiny number of greedy bastards.
But Biden changed that. Thanks to the "Unity Taskforce" that divided up the presidential appointments between the Democrats' corporate wing and the Warren/Sanders wing, Biden appointed some of the most committed, effective trustbusters we'd seen for generations:
https://pluralistic.net/2022/10/18/administrative-competence/#i-know-stuff
After Trump's election, there was some room for hope that Trump's FTC would continue to pursue at least some of the anti-monopoly work of the Biden years. After all, there's a sizable faction within the MAGA movement that hates (some) monopolies:
https://pluralistic.net/2025/01/24/enforcement-priorities/#enemies-lists
But last week, Trump claimed to have illegally fired the two Democratic commissioners on the FTC: Alvaro Bedoya and Rebecca Slaughter. I stan both of these commissioners, hard. When they were at the height of their powers in the Biden years, I had the incredible, disorienting experience of getting out of bed, checking the headlines, and feeling very good about what the government had just done.
Trump isn't legally allowed to fire Bedoya and Slaughter. Perhaps he's just picking this fight as part of his chaos agenda (see above). But there are some other pretty good theories about what this is setting up. In his BIG newsletter, Matt Stoller proposes that Trump is using this case as a wedge, trying to set a precedent that would let him fire Federal Reserve Chair Jerome Powell:
https://www.thebignewsletter.com/p/why-trump-tried-to-fire-federal-trade
But perhaps there's more to it. Stoller just had Commissioner Bedoya on Organized Money, the podcast he co-hosts with David Dayen, and Bedoya pointed out that if Trump can fire Democratic commissioners, he can also fire Republican commissioners. That means that if he cuts a shady deal with, say, Jeff Bezos, he can order the FTC to drop its case against Amazon and fire the Republicans on the commission if they don't frog when he jumps:
https://www.organizedmoney.fm/p/trumps-showdown-at-the-ftc-with-commissioner
(By the way, Organized Money is a fantastic podcast, notwithstanding the fact that they put me on the show last week):
https://audio.buzzsprout.com/6f5ly01qcx6ijokbvoamr794ht81
The future that our plutocrat overlords are grasping for is indeed a terrible one. You can see its shape in the fantasies of "libertarian exit" – the seasteads, free states, and other assorted attempts to build anarcho-capitalist lawless lands where you can sell yourself into slavery, or just sell your kidneys. The best nonfiction book on libertarian exit is Raymond Craib's 2022 "Adventure Capitalism," a brilliant, darkly hilarious and chilling history of every time a group of people have tried to found a nation based on elevating selfishness to a virtue:
https://pluralistic.net/2022/06/14/this-way-to-the-egress/#terra-nullius
If Craib's book is the best nonfiction volume on the subject of libertarian exit, then Naomi Kritzer's super 2023 novel Liberty's Daughter is the best novel about life in a libertopia – a young adult novel about a girl growing up in the hell that would be life with a Heinlein-type dad:
https://pluralistic.net/2023/11/21/podkaynes-dad-was-a-dick/#age-of-consent
But now this canon has a third volume, a piece of design fiction from Atelier Van Lieshout called "Slave City," which specs out an arcology populated with 200,000 inhabitants whose "very rational, efficient and profitable" arrangements produce €7b/year in profit:
https://www.archdaily.com/30114/slave-city-atelier-van-lieshout
This economic miracle is created by the residents' "voluntary" opt-in to a day consisting of 7h in an office, 7h toiling in the fields, 7h of sleep, and 3h for "leisure" (e.g. hanging out at "The Mall," a 24/7, 26-storey "boundless consumer paradise"). Slaves who wish to better themselves can attend either Female Slave University or Male Slave University (no gender controversy in Slave City!), which run 24/7, with 7h of study, 7h of upkeep and maintenance on the facility, 7h of sleep, and, of course, 3h of "leisure."
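Those figures invite a quick back-of-envelope check. Here is a minimal sketch assuming only the numbers quoted above (the 7/7/7/3 day, 200,000 residents, €7b/year in profit); the per-resident breakdown is my own derived illustration, not a figure from the project itself:

```python
# Back-of-envelope check of the "Slave City" figures quoted above.
# Inputs come from the Atelier Van Lieshout description as cited; the
# per-resident profit is a derived illustration only.

HOURS_OFFICE = 7
HOURS_FIELDS = 7
HOURS_SLEEP = 7
HOURS_LEISURE = 3

RESIDENTS = 200_000
ANNUAL_PROFIT_EUR = 7_000_000_000  # €7b/year, per the project blurb

day_total = HOURS_OFFICE + HOURS_FIELDS + HOURS_SLEEP + HOURS_LEISURE
assert day_total == 24  # the "rational, efficient" day accounts for every hour

profit_per_resident = ANNUAL_PROFIT_EUR / RESIDENTS
print(f"Labour per resident: {HOURS_OFFICE + HOURS_FIELDS} h/day")
print(f"Implied profit per resident: EUR {profit_per_resident:,.0f}/year")
# -> Labour per resident: 14 h/day
# -> Implied profit per resident: EUR 35,000/year
```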
The field of design fiction is a weird and fertile one. In his traditional closing keynote for this year's SXSW Interactive festival, Bruce Sterling opens with a little potted history of the field since it was coined by Julian Bleecker:
https://bruces.medium.com/how-to-rebuild-an-imaginary-future-2025-0b14e511e7b6
Then Bruce moves on to his own latest design fiction project, an automated poetry machine called the Versificatore first described by Primo Levi in an odd piece of science fiction written for a newspaper. The Versificatore was then adapted to the screen in 1971, for an episode of an Italian sf TV show based on Levi's fiction:
https://www.youtube.com/watch?v=tva-D_8b8-E
And now Sterling has built a Versificatore. The keynote is a sterlingian delight – as all of his SXSW closers are. It's a hymn to the value of "imaginary futures" and an instruction manual for recovering them. It could not be more timely.
Sterling's imaginary futures would be a good upbeat note to end this 'dump with, but I've got a real future that's just as inspiring to close us out with: the EU has found Apple guilty of monopolizing the interfaces to its devices and have ordered the company to open them up for interoperability, so that other manufacturers – European manufacturers! – can make fully interoperable gadgets that are first-class citizens of Apple's "ecosystem":
https://www.reuters.com/technology/apple-ordered-by-eu-antitrust-regulators-open-up-rivals-2025-03-19/
It's a good reminder that as America crumbles, there are still places left in the world with competent governments that want to help the people they represent thrive and prosper. As the Prophet Gibson tells us, "the future is here, it's just not evenly distributed." Let's hope that the EU is living in America's future, and not the other way around.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2025/03/22/omnium-gatherum/#storytime
Image: TDelCoro https://www.flickr.com/photos/tomasdelcoro/48116604516/
CC BY-SA 2.0 https://creativecommons.org/licenses/by-sa/2.0/
86 notes
striderl · 3 months ago
Note
Your cyborgs are more human than I thought. I mean, they bully, gossip, tease, and torment their own kind (reminds me of Transformers). Did they learn that from humans, or are they actually humans transformed into cyborgs?
To clarify, a cyborg is a being with both biological and electronic components. In my classification of Skibidi Toilet’s hardware units, I divide them into three major kinds: fully mechanical robots, neural transferees, and half-organic cyborgs.
The members of Rescue Squad 08 and the Filming Industry (Polaroid excluded) fall into the purview of fully mechanical robots — AI-driven entities built from scratch, without any organic origin. In my worldbuilding, most hardware agents are assigned a mentor after activation, under whom they undergo social and work-based training before deployment. Mentors play a crucial role in shaping their behavior — a responsible mentor fosters maturity and human-like empathy, while an irresponsible one leaves them underdeveloped and prone to delinquency. This explains why some mechanical agents exhibit human-like traits: these behaviors stem from observation and social conditioning rather than inherent programming.

Unlike robots, half-organic cyborgs were once human, but parts of their bodies have been replaced with cybernetic components. Examples include Polaroid and Lumix. In my headcanon, most cyborgs lose their memories after conversion due to the physical severing of their original heads. Additionally, memory wipes are often conducted to enhance performance — erasing past emotional ties prevents distraction and keeps them focused on the assigned mission. However, they retain core personality traits, muscle memory, and even subconscious preferences. There is some scientific speculation that the human heart stores certain memories, which explains the lingering fragments of their past selves.
These cyborgs tend to be more physically sensitive and emotionally reactive than their fully mechanical counterparts, but this heightened human-like awareness comes at a cost — they are less durable in combat due to their biological components. 
This is the most complex category. Neural transferees were once fully human but had their consciousness transferred into a mechanical body — think of the process in Avatar. I have some theories that certain Skibidi Toilet main cast members are neural transferees, such as Plungerman (Dave), Plungerwoman (Cathy), TV Chief (Hakashita), and TV Woman. Among my own OCs, Komorebi and the large cam twins — Север (North) and Юг (South) — also fall into this category.
Before the war, neural transfers were rare and reserved for elite operatives and crucial intelligence personnel due to the complexity and cost of the process; its high risks and irreversibility made it controversial and inaccessible to most. However, as the war escalated, the demand for specialized agents led the Alliance to start selecting human survivors — especially children — for the procedure. Children were preferred because their minds were more adaptable, making the transfer process smoother. Additionally, their vulnerability often made them a burden in survival camps, leading many groups to trade them to the Alliance in exchange for scarce resources.
North and South, for example, still think and act human despite losing their original bodies. They retained their memories and personalities, but the brutal reality of war forces them to mature faster than they should, sometimes pushing them to make decisions that seem cold and machine-like. The irony? The very process meant to preserve their humanity also distances them from it.
11 notes
mariacallous · 4 months ago
Text
For weeks, the Department of Government Efficiency, led by Elon Musk, the world’s richest man and a special government employee, has conducted a campaign to radically downsize the federal government and terminate numerous agency employees. Musk’s actions—including freezing federal grants, issuing an executive order offering employees paid resignation through Sept. 30, dismantling the U.S. Agency for International Development (USAID), and seizing control of massive databases with sensitive information on all Americans—have raised serious legal and constitutional questions.
Most of these controversial actions are tied up in court, with 25 cases filed so far; all but one have produced rulings against Trump, and the remaining case was dismissed on standing rather than on the merits. This has not fazed either President Trump or Elon Musk, both of whom have run businesses while facing frequent lawsuits. More legal challenges are expected, some likely reaching the Supreme Court.
The long-term goal is to expand presidential power under the theory of the Unitary Executive, which advocates for greater White House control over the government. Conservatives have pushed for this since the Reagan administration, and Trump hopes Musk’s actions will help advance it. This also explains why USAID was targeted first. Foreign aid is widely unpopular, with many Americans overestimating how much is spent on it. Thus, closing the agency would likely avoid public backlash, with the impact felt mostly by farmers—more on that later.
Cutting government budgets is broadly popular in theory, but if Trump and Musk overcome legal challenges and succeed in large-scale downsizing, they will find that cutting government can backfire on them. By using an axe instead of a scalpel, they run the risk of throwing out the baby with the bathwater and eliminating essential functions. In its haste, DOGE is likely to disrupt services the public supports, making the government less effective. History shows that major government failures are politically lethal, often more so than constitutional arguments. When both occur, they can create serious political problems for the president and the party in power.
What counts as a major government failure? The Carter administration’s botched rescue of the Iran hostages, the Bush administration’s mishandling of Hurricane Katrina, the Obama administration’s health care website crashes that delayed Obamacare sign-ups, the Biden administration’s chaotic withdrawal from Afghanistan, the Trump administration’s ineffective response to the COVID-19 pandemic, and many more. These are failures no president—no matter how skilled a communicator—can spin or deflect. Blaming predecessors or changing the subject won’t work when the public can clearly see that something critical went disastrously wrong.
If Trump’s Department of Government Efficiency plans survive legal challenges, several major failures could follow—each landing squarely on the president’s shoulders. As President Harry Truman famously said, “The buck stops here.”
Disruptions in distribution of Social Security and veterans’ benefits
The federal payment system, which DOGE briefly controlled before a judge intervened, oversees Social Security payments, among other critical functions. Disruptions to this system could have serious consequences, and plenty could go wrong. DOGE has been using AI systems to explore these databases, but AI is prone to hallucinations. As Brookings scholar Darrell West warns, "it is scary to use untested or poorly designed AI on government data sets not knowing how it makes decisions or where and how it was trained." Unlike the turmoil at Twitter when Musk took over, disruptions in federal programs would have severe real-world consequences. A brief outage on X may be inconvenient, but delays or errors in Social Security, Medicaid, or veterans' benefits payments could be devastating—especially for the roughly 40% of retired Americans over 60 who, as of 2013, relied solely on Social Security for income while working fewer than 30 hours per week.
Potential delays in tax refund processing
Disrupting the IRS database could have even more widespread consequences. In 2023, 69% of Americans received tax refunds from the IRS, with an average refund of $2,812. Many taxpayers intentionally over-withhold and rely on their refunds each spring for major purchases, like a new refrigerator. Now, imagine if, due to DOGE’s actions, refunds were delayed until November 2025 instead of arriving in the spring and summer. Or worse—if errors in newly written code caused refunds to be incorrect. Taxpayers would be outraged over delays or mistakes in something as crucial as their refunds, where the stakes are high and expectations are clear.
Increased tax evasion leading to reduced federal revenue
Currently, the IRS employs more than half of the Treasury Department’s workforce. Tax experts have long argued that the IRS needs more employees, not fewer. The “tax gap”—the estimated difference between what the IRS collects and what taxpayers actually owe—is estimated at $428 billion, with most of it resulting from underreporting and a smaller portion from non-filing. Increasing tax enforcement alone could significantly contribute to the Department of Government Efficiency’s goal of saving $1 trillion to $2 trillion.
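To put those magnitudes side by side, here is a rough scale comparison, not a fiscal model: the $428 billion annual gap and the $1 trillion to $2 trillion target come from the text, while the ten-year horizon is my own assumption for illustration. Collecting even a fraction of the gap would cover a large share of the target.

```python
# Rough scale comparison of the annual "tax gap" against DOGE's stated
# savings target. The ten-year horizon is an assumed illustration; the
# other figures are taken from the text above.

TAX_GAP_PER_YEAR = 428e9                      # estimated uncollected taxes per year
DOGE_TARGET_LOW, DOGE_TARGET_HIGH = 1e12, 2e12  # $1T-$2T stated goal
YEARS = 10                                    # assumed horizon for the comparison

gap_over_horizon = TAX_GAP_PER_YEAR * YEARS
print(f"Tax gap over {YEARS} years: ${gap_over_horizon / 1e12:.1f} trillion")
print(f"Gap vs. $1T target: {gap_over_horizon / DOGE_TARGET_LOW:.1f}x")
print(f"Gap vs. $2T target: {gap_over_horizon / DOGE_TARGET_HIGH:.1f}x")
# -> Tax gap over 10 years: $4.3 trillion
# -> Gap vs. $1T target: 4.3x
# -> Gap vs. $2T target: 2.1x
```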
In reality, the opposite is more likely. As my Brookings colleague Vanessa Williamson has noted, “Cutting the IRS is a top Republican priority.” With fewer staff, the risk of being audited decreases while the incentive to underreport increases. That’s not a sustainable way to reduce the deficit.
Increased risk of mortality from foodborne illnesses
The Food and Drug Administration (FDA) uses a “traceback system” to track foodborne illnesses. “Investigators trace food that ill people report eating all the way back to a farm or production facility,” the FDA explains. “Finding commonalities in the supply chains of foods eaten by ill people helps investigators zero in on a potential source of the outbreak.” Foodborne illnesses affect millions of Americans each year, leading to thousands of deaths—especially among the elderly and those with other health conditions. The traceback process is laborious, involving the FDA, Centers for Disease Control and Prevention, and state and local health departments. Widespread cuts could significantly undermine the government’s ability to identify the source of these outbreaks.
Further strain on an already fragile agricultural economy could exacerbate global food insecurity
In its rush to shut down USAID, DOGE risks further harming the already fragile farm economy. According to the Washington Post, “American farms…supply about 41% of the food aid that the agency, working with the U.S. Department of Agriculture, sends around the world each year, according to a 2021 report by the Congressional Research Service. In 2020, the U.S. government bought $2.1 billion in food aid from American farmers.”
On February 3, DOGE released a list of USAID-funded grants it claims could be categorized as waste and abuse. However, the total amount of targeted grants with specific numbers only adds up to roughly $12.1 million. Could these grants be cut without jeopardizing the $2.1 billion paid to American farmers and sent to people in need? The approach taken by DOGE is a clear example of discarding both the good and the bad, impacting both red and blue states alike.
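A quick ratio check makes the scale argument concrete. This is a minimal sketch using only the two figures cited above (the roughly $12.1 million in itemized grants and the $2.1 billion in annual food-aid purchases from American farms):

```python
# Ratio of the USAID grants DOGE itemized as "waste and abuse" to the
# food aid USAID buys from American farmers each year. Both figures are
# taken from the text above.

TARGETED_GRANTS = 12.1e6     # ~$12.1 million in itemized grants
FARMER_PURCHASES = 2.1e9     # $2.1 billion bought from US farms in 2020

ratio = TARGETED_GRANTS / FARMER_PURCHASES
print(f"Itemized 'waste' as a share of annual farmer purchases: {ratio:.2%}")
# -> Itemized 'waste' as a share of annual farmer purchases: 0.58%
```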
Reducing intelligence personnel at the CIA or FBI could increase the risk of domestic terrorist attacks
In a surprising move on February 3, the CIA sent an unclassified email to the Office of Personnel Management (OPM) listing individuals hired over the past two years. Many of these hires were focused on strengthening U.S. intelligence on China, a growing national security concern. The email potentially exposed the identities of clandestine personnel, and OPM then used it to offer buyouts—making it the first national security agency to do so.
Reducing the CIA or FBI workforce in an era of asymmetric warfare, when threats to U.S. security can emerge from places most Americans have never heard of, may be, as intelligence expert David Ignatius put it, “the Trump administration’s most dangerous misstep.” The failure to prevent 9/11 was one of the most significant intelligence lapses in U.S. history—downsizing the CIA could risk a similar failure.
This list of potential failures extends across nearly every government agency. Musk acknowledges the likelihood of mistakes, saying no one can be perfect, and promises to fix problems quickly. But government operations are not like the tech industry—errors in issuing payments, tracking diseases, or ensuring aviation safety can have serious, sometimes life-threatening consequences. If DOGE indiscriminately slashes budgets and fires essential workers, it risks disaster. The fallout from major failures could hurt Trump’s poll numbers and weaken GOP support.
11 notes