#tescreal
The studios fought like hell for the right to fire their writers and replace them with chatbots, but that doesn’t mean that the chatbots could do the writers’ jobs.
Think of the bosses who fired their human switchboard operators and replaced them with automated systems that didn’t solve callers’ problems, but rather, merely satisficed them: rather than satisfying callers, they merely sufficed.
Studio bosses didn’t think that AI scriptwriters would produce the next Citizen Kane. Instead, they were betting that once an AI could produce a screenplay that wasn’t completely unwatchable, the financial markets would put pressure on every studio to switch to a slurry of satisficing crap, and that we, the obedient “consumers,” would shrug and accept it.
Despite their mustache-twirling and patrician chiding, the real reason the studios are excited about AI is the same as every stock analyst and CEO who’s considering buying an AI enterprise license: they want to fire workers and reallocate their salaries to their shareholders.
-How the Writers Guild sunk AI's ship: No one's gonna buy enterprise AI licenses if they can't fire their workers
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
6K notes
machine-saint · 9 months
i think part of the reason i find the talk about TESCREAL strange is articles like this one that imply that like... homo sapiens ceasing to exist and our descendants being digital life would be something per se bad, regardless of how it occurs, and as someone who's a big fan of Greg Egan novels i just can't relate
also, like, part of the author's argument that posthumans might not be "better" is the statement
But posthumans would have their own flaws and shortcomings. Perhaps being five times "smarter" than us would mean they'd be five times better at doing evil. Maybe developing the technological means to indefinitely extend posthuman lifespans would mean that political prisoners could be tortured relentlessly for literally millions of years. Who knows what unspeakable horrors might haunt the posthuman world?
which seems like a very silly argument to me. like, this is an argument against education and life-saving medicine as a whole!
23 notes
alanshemper · 11 days
5 notes
nickjb · 6 months
"It'd be amusing if these guys didn't have a combined net worth somewhere in the region of half a trillion euros and the desire to change the human universe, along with a load of unexamined prejudices and a bunch of half-baked politics they absorbed from the predominantly American SF stories they read in their teens."
9 notes
nando161mando · 7 months
"This week on YNP! we talk to @xriskology about:
Transhumanism
Extropianism
Singularitarianism
Cosmism
Rationalism
Effective Altruism and
Longtermism"
via @slackbastard
0 notes
martinjost · 9 months
Metaphors are intellectual transformers
"The quotation marks eventually disappeared, until in 1996 the psychologist Robert Epstein observed that it had become literally impossible to speak about intelligent human behavior without digitalist metaphors, or about computers without anthropomorphizing metaphors.
Metaphors are intellectual transformers through which the current of thought always flows in both directions."
— Bovermann, Philipp (2023): Unser Wille geschehe. Süddeutsche Zeitung, August 4, 2023.
1 note
eightyonekilograms · 7 months
This is related to @triviallytrue's recent post(s) about tech worker unionization, but distinct enough from it that I figured it was worth putting in its own post:
If you're not in tech and get all your impressions of tech workers from people who are Extremely Online, you might not appreciate that an overwhelming majority of "tech workers" are just ordinary white-collar workers, with all that implies about their politics. Yes, tech has some die-hard libertarians, a handful of unhinged reactionaries, and a smattering of revolutionary communists, but the modal tech worker has the generic normie liberal politics you'd expect of a high-education, high-income PMC member.
Same goes for their life and interests: I've now worked full-time in tech for about twelve years, and I promise you that to a first approximation, zero of my coworkers have ever heard of Peter Thiel, radical life extension, TESCREAL or anything people put under that umbrella, and so on. They're aware of Elon Musk and vaguely annoyed at him, but don't think about him very often. My coworkers mostly have spouses and children and houses in the suburbs, and when the weekend is over they come back on Monday morning and talk about their hiking trip or the concert they saw.
214 notes
jadagul · 11 months
Apparently the hot new acronym for hateable people is TESCREAL?
254 notes
intimate-mirror · 9 months
Isn't it crazy how the two most powerful religions in the united states, integralist evangelicalism and TESCREAL, are both weird splinter sects of judaism?
60 notes
shieldfoss · 10 months
So, TESCREAL
Is that what they call themselves, or is this yet another Republican exercise in putting all their enemies in 1 (one) box and putting a label on the box?
40 notes
not-terezi-pyrope · 5 months
Don't mind me, just getting incredibly mad about Timnit Gebru's "TESCREAL" talk again.
You know, I will agree with her, there is a real problem with the upper class capitalist elite using ideas like Effective Altruism and Longtermism to make warped judgements that justify the centralization of power. There is a problem of overvaluing concerns like AI existential risk over the non-hypothetical problems that require more resources in the world today. There is a problem with medical paradigms that fetishize intelligence and physical ability in a way that echoes 20th century eugenicist rhetoric.
But Gebru's talk/paper, which has sickeningly become a go-to leftist touchpoint for discussing tech, slanderously conflates whole philosophical movements into a "eugenics conspiracy," a framing so myopically flattening that you have her arguing that things like the concept of "being rational" are modern eugenics. Forget transhumanism as radical self-determination and self-modification, increasing human happiness by overcoming our biology; TESCREALs just want to make themselves superior (modern curative medical science is excluded from this logic, being tangible instead of speculative and thus too obviously good). Forget the fight to reduce scarcity; the TESCREALs' true agenda is to exploit minorities to enrich corporations! Forget trying to do good in the world; didn't you hear that Sam Bankman-Fried called himself an EA and yet was a bad guy? And safety in AI research? Nonsense, this is just part of the TESCREAL mythology of the AI godhead!
Gebru takes real problems in a bunch of fields and the culture surrounding them - problems that people are trying to address, including nominally her! - and declares a conspiracy in which these problems are the flattened essence of these movements, essentially giving up on trying to improve matters. It's an argument supported by loose aesthetic associations and anecdotal cherry-picking, by taking tech CEOs at their word because they have the largest platform instead of contemplating that perhaps they have uniquely distorted understandings of the concepts they invoke, and by a sneering condescension toward anyone placed in the "tech bro" box through aesthetic similarity.
I hate it, I hate it, I hate it, I hate it.
15 notes
AI ethics vs AI "safety"
They are hemorrhaging a river of cash, but that river’s source is an ocean-sized reservoir of even more cash.
To keep that reservoir full, the AI industry needs to convince fresh rounds of “investors” to give them hundreds of billions of dollars on the promise of a multi-trillion-dollar payoff.
That’s where the “AI Safety” story comes in. You know, the tech bros who run around with flashlights under their chins, intoning “ayyyyyy eyeeeee,” and warning us that their plausible sentence generators are only days away from becoming conscious and converting us all into paperclips.
It’s pure criti-hype: “Our technology is so powerful that it endangers the human race, which is why you should both invest in it and use it to replace all of your workers.”
This form of criticism is entirely distinct from the legitimate realm of “AI ethics,” whose emphasis is on how bad AI is at the things that will supposedly generate those promised trillions. Things like bias, low-quality training data, training data attacks, data ordering attacks, adversarial examples, the endless stream of confident lies, and the high degree of supervision they necessitate.
Add to that the exploitative labor pipeline, the environmental damage, and the public safety risks, and a very different critique emerges: one that's grounded in AI's shortcomings, not the supposed risks arising from its incredible power.
-How the Writers Guild sunk AI's ship: No one's gonna buy enterprise AI licenses if they can't fire their workers
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
910 notes
machine-saint · 6 months
i think i've made this post before but:
the big reason i have a hard time taking the "tescreal is a cult that's taking over silicon valley!!!" rhetoric seriously is that the set of ideologies tescreal points to haven't really had any... significant effect on the world. people meme about how "techbros" are terrified of roko's basilisk and paperclipping but if you look at what's actually happening you see that AI development is only accelerating. cryonics is still a joke niche field nobody takes seriously as far as i can tell. musk talks about going to mars but hasn't even gotten to the moon. everything that has happened is also completely compatible with and explained by the generic libertarian techno-optimism that's dominated the tech industry for the past 30 years or so.
i also find a lot of writing by e.g. Torres, who i think is one of the big anti-tescreal writers, suspect, as noted here.
edit: also i just saw timnit gebru fav a post comparing an EU commission tweeting out "mitigating the risk of extinction from global AI should be a global priority" to the fucking fourteen words. and this is one of the other prominent figures in the field! this is giving me significant negative confidence in your analysis!
11 notes
alanshemper · 11 months
2 notes
abr · 9 months
«Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism and Longtermism»: TESCREAL is an acronym uniting these 7 formulas (...). The term was born as a critique of the dystopian world of the Valley (...). Specifically, Extropianism is its most cryptic "current" (...). Entropy, as is well known, is a physical quantity that describes a system's degree of disorder and that increases on its own; by contrast, the term "extropy" (...) is meant to indicate "the opposite direction of movement". One passes from chaos to order and then beyond, ever higher (...). In 1988 the two young "pioneers" of TESCREAL defined five fundamental principles for Extropianism: "boundless expansion, self-transformation, dynamic optimism, intelligent technology and spontaneous order". Something playful at first, but decades later, with the development of ChatGPT and other artificial intelligences, that ideology genuinely risks becoming a religion spreading from Silicon Valley to the rest of the globe. (...) Nick Bostrom, one of the era's leading Extropian-leaning scientists, wrote that transhumanism was not a religion, «although it assumes some of the functions for which people have traditionally used religions». (...) Today the two most important AI companies in the world - Google and OpenAI - are led by "Extropianist priests", billionaires who believe the limits of biology must be overcome (...). A religious ideology that would supplant the human in order to "surpass" it: as Il Foglio wrote a few months ago on the subject of TESCREAL (...).
via https://www.ilsussidiario.net/news/estropianesimo-che-cose-lideologia-tescreal-silicon-valley-religione-nata-per-caso-ora-inquieta-con-lai/2574423/
Anyone immersed in the ever-faster spiraling vortex of frontier development inevitably feels the need to frame what they are doing, and where it is heading, within something holistic, meta-scientific -> "religious".
Anyone sitting back in the consuming periphery, by contrast, believes they no longer need religion - and unknowingly consumes the most comical surrogates of superstition: worse than horoscopes is "have faith in science", the contemporary Ipse dixit.
26 notes
phaeton-flier · 10 months
(Responding here as the original post has gotten too long)
@ideologyhobocop
I think the main problem with groupings like TESCREAL is that they're vague linkages of overlaps and sometimes friendships and sometimes similar social circles. If you want to call that "recognizing social trends", then OK, but what does that actually predict? As soon as you need to get useful information out of that grouping, you either need to fall back to it being a jumped-up hookup graph or start being very wrong about what actual individuals believe.
"Transhumanism" alone covers a wide variety of people who believe and desire a wide variety of things. The commie trans girl who wants cool robot legs, the writer who wants to look younger, and Peter Thiel might all be describable as "Transhumanists", but if you start saying one has the political beliefs of the others you've gone off the map!
Similarly, sure, multibillionaires spending money on space travel over third-world inoculations is bad; is all space travel bad, or is NASA a different enough organization that maybe we should just talk about each separately instead of grouping them together under the same umbrella of criticism?
And, for a more specific point, we got started when you referred to people not wanting to die as doing so "out of a stark, narcissistic terror of the world continuing on without you, [who] have a pitifully weak understanding of anything outside [their] head" when apparently what you actually meant was "It's bad that billionaires' wealth isn't going towards fighting disease". If you treated "life-extension" and "wealthy people" as two different groups that overlap, you wouldn't be in danger of that mistake.
Doing useful analysis requires more than just showing that group A overlaps with group B at a more-than-chance rate; it requires showing that the linkage is important, and that it also implies things about B that are true of A.
20 notes