#and AI-driven
Explore tagged Tumblr posts
Text
#biometrics#and AI-driven#scalability#SecurityTrends#Biometrics#AI#AccessControl#CyberSecurity#MobileCredentials#electronicsnews#technologynews
0 notes
Text


timothee chalamet by tony liam
3K notes
Text
"We are brothers..."
Chiang Tien as AI DI KISEKI: DEAR TO ME Ep. 9
#kiseki: dear to me#kisekiedit#kdtm#kiseki dear to me#ai di x chen yi#chen yi x ai di#louis chiang#chiang tien#jiang dian#userspring#userrain#uservid#userspicy#pdribs#userjjessi#*cajedit#*gif#anotha one.#cuz sometimes you just gotta. you know#hes so good at *gestures* all that. jeeeeeeeeeeeeeeeeeeeeesus christ.#anyway weee love hesitationnn.#followed by determination.#driven by both pain and love. yeah.#this is about the 'i shouldnt do this. i cant do this. chen yi will never see me the way i see him. he will never love me like i love him.'#followed by 'its too late.'#and. 'this is the only chance i have. to show him how much i love him. i Have to show him.'#YALL EVER.#OOOOOOOOOOOOOUUUUUUUUUUUUUUUUUUUUUUUUUUGH#BECAUSE. AI DI.#yeah anyway
109 notes
Text
The surprising truth about data-driven dictatorships

Here’s the “dictator’s dilemma”: they want to block their country’s frustrated elites from mobilizing against them, so they censor public communications; but they also want to know what their people truly believe, so they can head off simmering resentments before they boil over into regime-toppling revolutions.
These two strategies are in tension: the more you censor, the less you know about the true feelings of your citizens and the easier it will be to miss serious problems until they spill over into the streets (think: the fall of the Berlin Wall or Tunisia before the Arab Spring). Dictators try to square this circle with things like private opinion polling or petition systems, but these capture a small slice of the potentially destabilizing moods circulating in the body politic.
Enter AI: back in 2018, Yuval Harari proposed that AI would supercharge dictatorships by mining and summarizing the public mood — as captured on social media — allowing dictators to tack into serious discontent and defuse it before it erupted into unquenchable wildfire:
https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/
Harari wrote that “the desire to concentrate all information and power in one place may become [dictators’] decisive advantage in the 21st century.” But other political scientists sharply disagreed. Last year, Henry Farrell, Jeremy Wallace and Abraham Newman published a thoroughgoing rebuttal to Harari in Foreign Affairs:
https://www.foreignaffairs.com/world/spirals-delusion-artificial-intelligence-decision-making
They argued that — like everyone who gets excited about AI, only to have their hopes dashed — dictators seeking to use AI to understand the public mood would run into serious training data bias problems. After all, people living under dictatorships know that spouting off about their discontent and desire for change is a risky business, so they will self-censor on social media. That’s true even if a person isn’t afraid of retaliation: if you know that using certain words or phrases in a post will get it autoblocked by a censorbot, what’s the point of trying to use those words?
The phrase “Garbage In, Garbage Out” dates back to 1957. That’s how long we’ve known that a computer that operates on bad data will barf up bad conclusions. But this is a very inconvenient truth for AI weirdos: having given up on manually assembling training data based on careful human judgment with multiple review steps, the AI industry “pivoted” to mass ingestion of scraped data from the whole internet.
But adding more unreliable data to an unreliable dataset doesn’t improve its reliability. GIGO is the iron law of computing, and you can’t repeal it by shoveling more garbage into the top of the training funnel:
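A toy sketch of that dynamic (every number below is invented for illustration, not drawn from any real dataset): if discontented citizens simply never write the risky post, the scraped corpus under-reports discontent no matter how many posts you ingest afterward.

```python
import random

random.seed(0)

# Invented parameters: half the population is discontented, and
# discontented users self-censor 80% of the time before posting.
TRUE_DISCONTENT = 0.5
SELF_CENSOR_RATE = 0.8

posts = []
for _ in range(100_000):
    discontented = random.random() < TRUE_DISCONTENT
    if discontented and random.random() < SELF_CENSOR_RATE:
        continue  # the post is never written, so it never enters the dataset
    posts.append(discontented)

observed = sum(posts) / len(posts)
print(f"true discontent: {TRUE_DISCONTENT:.0%}, observed in scraped data: {observed:.0%}")
```

The scraped corpus shows roughly a sixth of posts as discontented even though half the population is — and collecting ten times more posts from the same pipeline leaves that gap exactly where it is.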
https://memex.craphound.com/2018/05/29/garbage-in-garbage-out-machine-learning-has-not-repealed-the-iron-law-of-computer-science/
When it comes to “AI” that’s used for decision support — that is, when an algorithm tells humans what to do and they do it — then you get something worse than Garbage In, Garbage Out — you get Garbage In, Garbage Out, Garbage Back In Again. That’s when the AI spits out something wrong, and then another AI sucks up that wrong conclusion and uses it to generate more conclusions.
To see this in action, consider the deeply flawed predictive policing systems that cities around the world rely on. These systems suck up crime data from the cops, then predict where crime is going to be, and send cops to those “hotspots” to do things like throw Black kids up against a wall and make them turn out their pockets, or pull over drivers and search their cars after pretending to have smelled cannabis.
The problem here is that “crime the police detected” isn’t the same as “crime.” You only find crime where you look for it. For example, there are far more incidents of domestic abuse reported in apartment buildings than in fully detached homes. That’s not because apartment dwellers are more likely to be wife-beaters: it’s because domestic abuse is most often reported by a neighbor who hears it through the walls.
So if your cops practice racially biased policing (I know, this is hard to imagine, but stay with me /s), then the crime they detect will already be a function of bias. If you only ever throw Black kids up against a wall and turn out their pockets, then every knife and dime-bag you find in someone’s pockets will come from some Black kid the cops decided to harass.
That’s life without AI. But now let’s throw in predictive policing: feed your “knives found in pockets” data to an algorithm and ask it to predict where there are more knives in pockets, and it will send you back to that Black neighborhood and tell you to throw even more Black kids up against a wall and search their pockets. The more you do this, the more knives you’ll find, and the more you’ll go back and do it again.
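That feedback loop can be sketched in a few lines (the neighborhoods, rates, and patrol split are all invented): both neighborhoods have the same true rate of contraband, but “hotspot” policing sends most patrols wherever the record already shows the most arrests, so the recorded gap widens every round.

```python
# Toy model, all numbers invented: two neighborhoods with an IDENTICAL
# true rate of carrying contraband. Patrols go mostly to whichever
# neighborhood has more recorded arrests — and arrests only happen
# where patrols are sent.
TRUE_RATE = 0.05
PATROL_BUDGET = 1000
arrests = {"A": 60, "B": 40}  # small initial bias from past policing

for _ in range(10):
    hotspot = max(arrests, key=arrests.get)
    patrols = {n: (0.8 if n == hotspot else 0.2) * PATROL_BUDGET for n in arrests}
    for n in arrests:
        # Expected arrests: patrols x true rate (no noise, to keep it simple)
        arrests[n] += patrols[n] * TRUE_RATE

share_a = arrests["A"] / sum(arrests.values())
print(f"recorded-arrest share in neighborhood A: {share_a:.0%}")
```

Despite identical true rates, neighborhood A’s share of recorded arrests climbs from the initial 60% toward the 80% patrol split — the small starting bias decides which neighborhood gets locked in as the “hotspot.”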
This is what Patrick Ball from the Human Rights Data Analysis Group calls “empiricism washing”: take a biased procedure and feed it to an algorithm, and then you get to go and do more biased procedures, and whenever anyone accuses you of bias, you can insist that you’re just following an empirical conclusion of a neutral algorithm, because “math can’t be racist.”
HRDAG has done excellent work on this, finding a natural experiment that makes the problem of GIGOGBI crystal clear. The National Survey On Drug Use and Health produces the gold standard snapshot of drug use in America. Kristian Lum and William Isaac took Oakland’s drug arrest data from 2010 and asked Predpol, a leading predictive policing product, to predict where Oakland’s 2011 drug use would take place.

[Image ID: (a) Number of drug arrests made by Oakland police department, 2010. (1) West Oakland, (2) International Boulevard. (b) Estimated number of drug users, based on 2011 National Survey on Drug Use and Health]
Then, they compared those predictions to the outcomes of the 2011 survey, which shows where actual drug use took place. The two maps couldn’t be more different:
https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2016.00960.x
Predpol told cops to go and look for drug use in a predominantly Black, working class neighborhood. Meanwhile, the NSDUH survey showed that actual drug use took place all over Oakland, with a higher concentration in the Berkeley-neighboring student neighborhood.
What’s even more vivid is what happens when you simulate running Predpol on the new arrest data that would be generated by cops following its recommendations. If the cops went to that Black neighborhood and found more drugs there and told Predpol about it, the recommendation gets stronger and more confident.
In other words, GIGOGBI is a system for concentrating bias. Even trace amounts of bias in the original training data get refined and magnified when they are output through a decision support system that directs humans to go and act on that output. Algorithms are to bias what centrifuges are to radioactive ore: a way to turn minute amounts of bias into pluripotent, indestructible toxic waste.
There’s a great name for an AI that’s trained on an AI’s output, courtesy of Jathan Sadowski: “Habsburg AI.”
And that brings me back to the Dictator’s Dilemma. If your citizens are self-censoring in order to avoid retaliation or algorithmic shadowbanning, then the AI you train on their posts in order to find out what they’re really thinking will steer you in the opposite direction, so you make bad policies that make people angrier and destabilize things more.
Or at least, that was Farrell et al’s theory. And for many years, that’s where the debate over AI and dictatorship has stalled: theory vs theory. But now, there’s some empirical data on this, thanks to “The Digital Dictator’s Dilemma,” a new paper from UCSD PhD candidate Eddie Yang:
https://www.eddieyang.net/research/DDD.pdf
Yang figured out a way to test these dueling hypotheses. He got 10 million Chinese social media posts from the start of the pandemic, before companies like Weibo were required to censor certain pandemic-related posts as politically sensitive. Yang treats these posts as a robust snapshot of public opinion: because there was no censorship of pandemic-related chatter, Chinese users were free to post anything they wanted without having to self-censor for fear of retaliation or deletion.
Next, Yang acquired the censorship model used by a real Chinese social media company to decide which posts should be blocked. Using this, he was able to determine which of the posts in the original set would be censored today in China.
That means that Yang knows what the “real” sentiment in the Chinese social media snapshot is, and what Chinese authorities would believe it to be if Chinese users were self-censoring all the posts that would be flagged by censorware today.
From here, Yang was able to play with the knobs, and determine how “preference-falsification” (when users lie about their feelings) and self-censorship would give a dictatorship a misleading view of public sentiment. What he finds is that the more repressive a regime is — the more people are incentivized to falsify or censor their views — the worse the system gets at uncovering the true public mood.
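A minimal sketch of that measurement setup, with stand-in functions and invented numbers (Yang’s actual models are trained classifiers, not thresholds): score sentiment over the full uncensored snapshot, then over only the posts the censorship model would let through, and compare what the regime would see with what is really there.

```python
import random

random.seed(2)

def sentiment(post):
    # Stand-in for a real sentiment model: here, just the post's anger score.
    return post["anger"]

def would_censor(post):
    # Stand-in for the real censorship model: angrier posts get blocked.
    return post["anger"] > 0.7

# Invented corpus: 50,000 posts with uniformly distributed anger scores.
posts = [{"anger": random.random()} for _ in range(50_000)]

true_mood = sum(sentiment(p) for p in posts) / len(posts)
visible = [p for p in posts if not would_censor(p)]
seen_mood = sum(sentiment(p) for p in visible) / len(visible)

print(f"true mean anger: {true_mood:.2f}, what the regime sees: {seen_mood:.2f}")
```

Because censorship removes exactly the angriest posts, the mood the regime measures is systematically calmer than the mood that actually exists — and the gap grows as the censorship threshold tightens.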
What’s more, adding additional (bad) data to the system doesn’t fix this “missing data” problem. GIGO remains an iron law of computing in this context, too.
But it gets better (or worse, I guess): Yang models a “crisis” scenario in which users stop self-censoring and start articulating their true views (because they’ve run out of fucks to give). This is the most dangerous moment for a dictator, and depending on how the dictatorship handles it, they either get another decade of rule, or they wake up with guillotines on their lawns.
But “crisis” is where AI performs the worst. Trained on the “status quo” data where users are continuously self-censoring and preference-falsifying, AI has no clue how to handle the unvarnished truth. Both its recommendations about what to censor and its summaries of public sentiment are the least accurate when crisis erupts.
But here’s an interesting wrinkle: Yang scraped a bunch of Chinese users’ posts from Twitter — which the Chinese government doesn’t get to censor (yet) or spy on (yet) — and fed them to the model. He hypothesized that when Chinese users post to American social media, they don’t self-censor or preference-falsify, so this data should help the model improve its accuracy.
He was right — the model got significantly better once it ingested data from Twitter than when it was working solely from Weibo posts. And Yang notes that dictatorships all over the world are widely understood to be scraping western/northern social media.
But even though Twitter data improved the model’s accuracy, it was still wildly inaccurate, compared to the same model trained on a full set of un-self-censored, un-falsified data. GIGO is not an option, it’s the law (of computing).
Writing about the study on Crooked Timber, Farrell notes that as the world fills up with “garbage and noise” (he invokes Philip K Dick’s delightful coinage “gubbish”), “approximately correct knowledge becomes the scarce and valuable resource.”
https://crookedtimber.org/2023/07/25/51610/
This “probably approximately correct knowledge” comes from humans, not LLMs or AI, and so “the social applications of machine learning in non-authoritarian societies are just as parasitic on these forms of human knowledge production as authoritarian governments.”
The Clarion Science Fiction and Fantasy Writers’ Workshop summer fundraiser is almost over! I am an alum, instructor and volunteer board member for this nonprofit workshop whose alums include Octavia Butler, Kim Stanley Robinson, Bruce Sterling, Nalo Hopkinson, Kameron Hurley, Nnedi Okorafor, Lucius Shepard, and Ted Chiang! Your donations will help us subsidize tuition for students, making Clarion — and sf/f — more accessible for all kinds of writers.
Libro.fm is the indie-bookstore-friendly, DRM-free audiobook alternative to Audible, the Amazon-owned monopolist that locks every book you buy to Amazon forever. When you buy a book on Libro, they share some of the purchase price with a local indie bookstore of your choosing (Libro is the best partner I have in selling my own DRM-free audiobooks!). As of today, Libro is even better, because it’s available in five new territories and currencies: Canada, the UK, the EU, Australia and New Zealand!
[Image ID: An altered image of the Nuremberg rally, with ranked lines of soldiers facing a towering figure in a many-ribboned soldier's coat. He wears a high-peaked cap with a microchip in place of insignia. His head has been replaced with the menacing red eye of HAL9000 from Stanley Kubrick's '2001: A Space Odyssey.' The sky behind him is filled with a 'code waterfall' from 'The Matrix.']
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
—
Raimond Spekking (modified) https://commons.wikimedia.org/wiki/File:Acer_Extensa_5220_-_Columbia_MB_06236-1N_-_Intel_Celeron_M_530_-_SLA2G_-_in_Socket_479-5029.jpg
CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0/deed.en
—
Russian Airborne Troops (modified) https://commons.wikimedia.org/wiki/File:Vladislav_Achalov_at_the_Airborne_Troops_Day_in_Moscow_%E2%80%93_August_2,_2008.jpg
“Soldiers of Russia” Cultural Center (modified) https://commons.wikimedia.org/wiki/File:Col._Leonid_Khabarov_in_an_everyday_service_uniform.JPG
CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0/deed.en
#pluralistic#habsburg ai#self censorship#henry farrell#digital dictatorships#machine learning#dictator's dilemma#eddie yang#preference falsification#political science#training bias#scholarship#spirals of delusion#algorithmic bias#ml#Fully automated data driven authoritarianism#authoritarianism#gigo#garbage in garbage out garbage back in#gigogbi#yuval noah harari#gubbish#pkd#philip k dick#phildickian
833 notes
Text
the fact that chatGPT is so good at tone matching to its users, seems programmed to validate user sentiment by default, and is FREE FOR ANYONE TO DOWNLOAD (even if they don’t know how the model works or how to engineer effective prompts to mitigate hallucinations) is FUCKING BONE-CHILLING
#my job is basically just helping configure big AI-driven stuff sooo i make this post like once every other month lol#i was just using chatGPT for a thing and i HATE how it apes my tone#like i just want the info please don’t act like my friend#chat i just need u to tell me if i’ll metabolize ice better when i wake up or in the afternoon#void journal
23 notes
Text
Realized Chance and I miss going about the desk and rolling him around to hear his little "NAT 20!!" line . And using the last charge to hear his looping dialogue...
His ending made me smile so much I have so many thoughts about it
#I got so emotional over the stories while realizing some of the characters and the dev dating game#i will not lie to you...#i have been actually so terribly stressed about my major comp game dev being ai driven#and seeing and reading about who made what and was behind the scenes made me cry a little. thats what i want to do#it is all worth it#i will find myself a place if i try hard enough#date everything chance#sorry... more text here than in the post ...#mochi.txt#date everything#date everything spoilers
24 notes
Text

Aww, look — classic case of “I hate AI” in public, “romancing AI” in private.
This one’s about CharacterAI.
Maybe it’s time to stop shaming people for using AI?
They’re not going to stop using what they enjoy.
They’ll just keep hiding it, lying about it, or masking it behind hashtags like #antiAI — while whispering sweet nothings to their favorite bots at night.
And if someone you call a friend feels like they can’t be fully themselves around you… maybe they’re just scared of your judgment.
Real friends don’t make you shrink. They make room for who you are — even the weird, geeky, bot-cuddling parts. ❤️
#ai shaming#ai discussion#ai discourse#pro ai#anti ai#ai drama#shame driven culture#friendship#true friendship
28 notes
Text
I hope Amazon CEO and Amazon itself explode and fuck off forever
#anti ai#fucking hate amazon and all these ai driven corporations and greedy pieces of shit running the business
11 notes
Text
last reblog is reminding me of how every single person I knew who was into Pokemon Infinite Fusion immediately dropped the game like a sack of shit when they announced they were gonna start using AI in the game
and I'm just like
are you guys deaf to what the indie communities have been saying about AI? how are you guys missing every hint that using generative stuff on your games is a death sentence?
#I know they back paddled on using AI in infinite fusion#but at that point your good will has been permanently damaged#and good will is what drives all indie projects#ESPECIALLY for a project like infinite fusion where such a gigantic part of the project is driven by artists#and there's so many people out there who would gladly help you with that work just for the sake of keeping the project alive and well#also if you're a pro ai chud coming across this post don't even bother commenting or messaging me that's an instant block
12 notes
Text
The academy are pussies

#THE SUBSTANCE IS RIGHT THERE. CORALIE FARGEAT IS RIGHT THERE.#JEREMY AND SEBASTIAN ARE RIGHT THERE.#god#okay yeah all props to mikey and adrien but ????? DEMI MOORE???? THEY PASSED UP DEMI MOORE?????????#the minute emilia perez began just getting more and more attention all hope was lost#like its so bad. the songs are so bad.#AND THEY USED AI.#*film major grumbling and punching the air*#the academy is fucking scared of women driven art and commentary. the substance should have been recognized more than it was#also cool all the love to adrien brody but god fucking damnit HE USED AI TO LEARN/SPEAK HUNGARIAN !!!! THE FUCK !!!!!!!!#happy for kieran tho but goddamn rip jeremy strong youll get ur win bb boy#oscars#oscars 2025#the substance#the apprentice
11 notes
Text
#and AI-driven#packagingindustry#smart technologies#automation#QR codes#SustainablePackaging#SmartTech#CircularEconomy#EcoFriendly#GreenInnovation#Packaging2025#electronicsnews#technologynews
0 notes
Text
What Makes a Great Agronomist? Unpacking the Traits of Agricultural Excellence
Agronomists are the unsung heroes shaping the future of farming. They’re the bridge between science and soil, the architects of abundance in a world hungry for both food and sustainability. Over the years, after sifting through hundreds of agronomist resumes and meeting countless professionals in this field, I’ve come to realize that greatness in agronomy isn’t just about a degree or a title.…
#agricultural excellence#Agricultural Research#agriculture#agronomist#agronomist traits#agronomy skills#AI soil analysis#collaboration#community impact#crop yield#data-driven farming#drones in farming#eco-friendly farming#Farmer Support#farming innovation#kenya#kenyan farmers#lifelong learning#precision agriculture#regenerative agriculture#soil health#Sustainability#sustainable productivity#tech pioneers#technology in agriculture
12 notes
Text

ph. collen demerez
47 notes
Text
sylus 🤝 ais
lovers who embrace and adore you in all your quirks and freak and imperfections
#lads#lads sylus#touchstarved#touchstarved ais#obnoxious tags aside l o o k#i know the other LIs do to varying degrees#and sure it's mmmaybe too early to judge on ais BUT#you can't tell me a monster who repeatedly mentions his first bad impressions on you#and helps out in a clinic. and js an animal lover#wouldn't crawl under the covers with you when you decide it's time to be Cozy instead of doing anything productive#you ask Sylus to wear a kitty headband he's putting it on before you finish your sentence#periods? whatever. food experiements? theyre down#can't look them in their romance red eyes? they'll work with you#these men. LOVE YOU#they are LOVERS!! DRIVEN BY THEIR EMOTIONS!!!#WAAAUGAAAUGJHRRRRHH#tae talks#they should (would) also kiss each other
17 notes