#effective altruism
Text
The real AI fight
Tonight (November 27), I'm appearing at the Toronto Metro Reference Library with Facebook whistleblower Frances Haugen.
On November 29, I'm at NYC's Strand Books with my novel The Lost Cause, a solarpunk tale of hope and danger that Rebecca Solnit called "completely delightful."
Last week's spectacular OpenAI soap-opera hijacked the attention of millions of normal, productive people and nonconsensually crammed them full of the fine details of the debate between "Effective Altruism" (doomers) and "Effective Accelerationism" (AKA e/acc), a genuinely absurd debate that was allegedly at the center of the drama.
Very broadly speaking: the Effective Altruists are doomers, who believe that Large Language Models (AKA "spicy autocomplete") will someday become so advanced that they could wake up and annihilate or enslave the human race. To prevent this, we need to employ "AI Safety" – measures that will turn superintelligence into a servant or a partner, not an adversary.
Contrast this with the Effective Accelerationists, who also believe that LLMs will someday become superintelligences with the potential to annihilate or enslave humanity – but they nevertheless advocate for faster AI development, with fewer "safety" measures, in order to produce an "upward spiral" in the "techno-capital machine."
Once-and-future OpenAI CEO Altman is said to be an accelerationist who was forced out of the company by the Altruists, who were subsequently bested, ousted, and replaced by Larry fucking Summers. This, we're told, is the ideological battle over AI: should we cautiously progress our LLMs into superintelligences with safety in mind, or go full speed ahead and trust to market forces to tame and harness the superintelligences to come?
This "AI debate" is pretty stupid, proceeding as it does from the foregone conclusion that adding compute power and data to the next-word-predictor program will eventually create a conscious being, which will then inevitably become a superbeing. This is a proposition akin to the idea that if we keep breeding faster and faster horses, we'll get a locomotive:
https://locusmag.com/2020/07/cory-doctorow-full-employment/
As Molly White writes, this isn't much of a debate. The "two sides" of this debate are as similar as Tweedledee and Tweedledum. Yes, they're arrayed against each other in battle, so furious with each other that they're tearing their hair out. But for people who don't take any of this mystical nonsense about spontaneous consciousness arising from applied statistics seriously, these two sides are nearly indistinguishable, sharing as they do this extremely weird belief. The fact that they've split into warring factions on its particulars is less important than their unified belief in the certain coming of the paperclip-maximizing apocalypse:
https://newsletter.mollywhite.net/p/effective-obfuscation
White points out that there's another, much more distinct side in this AI debate – as different and distant from Dee and Dum as a Beamish Boy and a Jabberwock. This is the side of AI Ethics – the side that worries about "today’s issues of ghost labor, algorithmic bias, and erosion of the rights of artists and others." As White says, shifting the debate to existential risk from a future, hypothetical superintelligence "is incredibly convenient for the powerful individuals and companies who stand to profit from AI."
After all, both sides plan to make money selling AI tools to corporations, whose track record in deploying algorithmic "decision support" systems and other AI-based automation is pretty poor – like the claims-evaluation engine that Cigna uses to deny insurance claims:
https://www.propublica.org/article/cigna-pxdx-medical-health-insurance-rejection-claims
On a graph that plots the various positions on AI, the two groups of weirdos who disagree about how to create the inevitable superintelligence are effectively standing on the same spot, and the people who worry about the actual way that AI harms actual people right now are about a million miles away from that spot.
There's that old programmer joke, "There are 10 kinds of people, those who understand binary and those who don't." But of course, that joke could just as well be, "There are 10 kinds of people, those who understand ternary, those who understand binary, and those who don't understand either":
https://pluralistic.net/2021/12/11/the-ten-types-of-people/
What's more, the joke could be, "there are 10 kinds of people: those who understand hexadecenary, those who understand pentadecenary, those who understand tetradecenary [und so weiter], those who understand ternary, those who understand binary, and those who don't." That is to say, a "polarized" debate often leaves out people whose positions are so far from the ones everyone is arguing over that, from where they stand, the belligerents' concerns are basically indistinguishable from one another.
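The joke's arithmetic is easy to verify, since in any base the digit string "10" denotes the base itself (0 × 1 + 1 × base). A quick illustrative check in Python:

```python
# The digit string "10" always denotes the base itself: 0*1 + 1*base.
for base in range(2, 17):  # binary through hexadecenary
    print(f"'10' in base {base} = {int('10', base)}")
```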
The act of identifying these distant positions is a radical opening up of possibilities. Take the indigenous philosopher chief Red Jacket's response to the Christian missionaries who sought permission to proselytize to Red Jacket's people:
https://historymatters.gmu.edu/d/5790/
Red Jacket's whole rebuttal is a superb dunk, but it gets especially interesting where he points to the sectarian differences among Christians as evidence against the missionary's claim to having a single true faith, and in favor of the idea that his own people's traditional faith could be co-equal among Christian doctrines.
The split that White identifies isn't a split about whether AI tools can be useful. Plenty of us AI skeptics are happy to stipulate that there are good uses for AI. For example, I'm 100% in favor of the Human Rights Data Analysis Group using an LLM to classify and extract information from the Innocence Project New Orleans' wrongful conviction case files:
https://hrdag.org/tech-notes/large-language-models-IPNO.html
Automating "extracting officer information from documents – specifically, the officer's name and the role the officer played in the wrongful conviction" was a key step to freeing innocent people from prison, and an LLM allowed HRDAG – a tiny, cash-strapped, excellent nonprofit – to make a giant leap forward in a vital project. I'm a donor to HRDAG and you should donate to them too:
https://hrdag.networkforgood.com/
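To make the HRDAG example concrete, here's a minimal sketch of what an LLM-based extraction step like the one described above could look like. This is my illustration, not HRDAG's actual pipeline: the model choice, prompt, and output schema are all assumptions, using the generic OpenAI Python client.

```python
# Hypothetical sketch of LLM-based officer-information extraction.
# Not HRDAG's code: the prompt, model, and schema here are assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "From the case-file excerpt below, list each police officer mentioned, "
    "with the role they played in the wrongful conviction. Respond with JSON "
    'only, as a list of objects: [{"name": "...", "role": "..."}]\n\n'
)

def extract_officers(case_text: str) -> list[dict]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT + case_text}],
    )
    # A real pipeline would validate this output against hand-checked
    # examples before trusting it with anything as serious as an exoneration.
    return json.loads(response.choices[0].message.content)
```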
Good data-analysis is key to addressing many of our thorniest, most pressing problems. As Ben Goldacre recounts in his inaugural Oxford lecture, it is both possible and desirable to build ethical, privacy-preserving systems for analyzing the most sensitive personal data (NHS patient records) that yield scores of solid, ground-breaking medical and scientific insights:
https://www.youtube.com/watch?v=_-eaV8SWdjQ
The difference between this kind of work – HRDAG's exoneration work and Goldacre's medical research – and the approach that OpenAI and its competitors take boils down to how they treat humans. The former treats all humans as worthy of respect and consideration. The latter treats humans as instruments – for profit in the short term, and for creating a hypothetical superintelligence in the (very) long term.
As Terry Pratchett's Granny Weatherwax reminds us, this is the root of all sin: "sin is when you treat people like things":
https://brer-powerofbabel.blogspot.com/2009/02/granny-weatherwax-on-sin-favorite.html
So much of the criticism of AI misses this distinction – instead, this criticism starts by accepting the self-serving marketing claim of the "AI safety" crowd – that their software is on the verge of becoming self-aware, and is thus valuable, a good investment, and a good product to purchase. This is Lee Vinsel's "Criti-Hype": "taking press releases from startups and covering them with hellscapes":
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
Criti-hype and AI were made for each other. Emily M Bender is a tireless cataloger of criti-hypeists, like the newspaper reporters who breathlessly repeat "completely unsubstantiated claims (marketing)…sourced to Altman":
https://dair-community.social/@emilymbender/111464030855880383
Bender, like White, is at pains to point out that the real debate isn't doomers vs accelerationists. That's just "billionaires throwing money at the hope of bringing about the speculative fiction stories they grew up reading – and philosophers and others feeling important by dressing these same silly ideas up in fancy words":
https://dair-community.social/@emilymbender/111464024432217299
All of this is just a distraction from real and important scientific questions about how (and whether) to make automation tools that steer clear of Granny Weatherwax's sin of "treating people like things." Bender – a computational linguist – isn't a reactionary who hates automation for its own sake: episodes of Mystery AI Hype Theater 3000 – the excellent podcast she co-hosts with Alex Hanna – are accompanied by machine-generated transcripts:
https://www.buzzsprout.com/2126417
There is a serious, meaty debate to be had about the costs and possibilities of different forms of automation. But the superintelligence true-believers and their criti-hyping critics keep dragging us away from these important questions and into fanciful and pointless discussions of whether and how to appease the godlike computers we will create when we disassemble the solar system and turn it into computronium.
The question of machine intelligence isn't intrinsically unserious. As a materialist, I believe that whatever makes me "me" is the result of the physics and chemistry of processes inside and around my body. My disbelief in the existence of a soul means that I'm prepared to think that it might be possible for something made by humans to replicate something like whatever process makes me "me."
Ironically, the AI doomers and accelerationists claim that they, too, are materialists – and that's why they're so consumed with the idea of machine superintelligence. But it's precisely because I'm a materialist that I understand these hypotheticals about self-aware software are less important and less urgent than the material lives of people today.
It's because I'm a materialist that my primary concerns about AI are things like the climate impact of AI data-centers and the human impact of biased, opaque, incompetent and unfit algorithmic systems – not science fiction-inspired, self-induced panics over the human race being enslaved by our robot overlords.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
287 notes
mitchipedia · 5 months
Text
Last week’s spectacular OpenAI fight was reportedly a donnybrook between “Effective Altruism” and “Effective Accelerationism”—two schools of philosophy founded on the nonsensical faith, absent any evidence, that godlike artificial intelligence (AI) beings are imminent, and arguing over the best way to prepare for that day.
Cory Doctorow @mostlysignssomeportents :
This "AI debate" is pretty stupid, proceeding as it does from the foregone conclusion that adding compute power and data to the next-word-predictor program will eventually create a conscious being, which will then inevitably become a superbeing. This is a proposition akin to the idea that if we keep breeding faster and faster horses, we’ll get a locomotive….
But for people who don’t take any of this mystical nonsense about spontaneous consciousness arising from applied statistics seriously, these two sides are nearly indistinguishable, sharing as they do this extremely weird belief. The fact that they’ve split into warring factions on its particulars is less important than their unified belief in the certain coming of the paperclip-maximizing apocalypse….
Left out of this argument are the real abuses of artificial intelligence and automation today, which (Cory says, quoting Molly White) “is incredibly convenient for the powerful individuals and companies who stand to profit from AI.”
AI and automation can be used for a great deal of good and a great deal of evil—and they already are being used for both, Cory says. We need to focus the discussion on that.
Like Cory, I think it’s entirely possible that we may achieve human-level AI one day, and that AI might become superintelligent. That might happen today, it might happen in a thousand years, it might never happen at all. The human race has other things to worry about now.
111 notes
skitterstan · 15 days
Text
[image]
59 notes
lilbluntworld · 2 years
Text
[image]
432 notes
gacorley · 20 days
Text
The thing that gets me about Effective Altruism, and that I haven't seen in public conversations: when they're talking about how you can have the most impact by making more money to give more to charity, I have to think, like, okay, but what about the people actually doing the thing?
Like, I know this is not an original argument, it's just not spelled out in videos and whatnot, but when you're spending tons of money on your calculated highest-impact charitable cause, you are still paying other people to do things. Money doesn't magically make things better. The actual work has to be done by hundreds or thousands of people across the world. Even if you're giving money directly to the poor (a good thing, and quite effective), you've got a whole lot of administrative work being done to get that money where it needs to go.
Like, it's a philosophy that not everyone can follow, not just because not everyone is lucky enough to get that money, but because there still have to be people to actually go out in the world and do the thing.
7 notes
razzek · 1 month
Text
A very good article about how effective altruism is pretty much bullshit. The whole time I was reading it I was just like "why do they never ask the people what they want and need?" Cuz man, that's my everyday life: randos deciding to "help" by doing something that makes them feel good, which screws up what I was doing at best and puts me in actual danger at worst.
9 notes
spider-artdump · 4 months
Text
[image]
16 notes
etirabys · 1 year
Text
I've been psyching myself out of the 10% donation pledge for... um... 7 years, uncomplicatedly because I am too attached to my money.
I think I'm going to try to kill one of two birds with one stone in any given month, and experiment with giving 5% of my monthly pretax income (likely split between GiveDirectly and whatever animal charity GiveWell says is most effective per dollar these days) every month that I don't publish any fanfiction to AO3, since apparently not writing fiction is slowly poisoning my spirit. Or something
(runs calculator) Wow, that's only $210 per month, that almost feels like a non-painful amount of money to click away
Anyway, I'll start the clock on this in March, and stop it – kind of whenever, I'm not going to bind myself to an end date, but I expect to reevaluate at the end of the year.
55 notes
Text
thesis: effective altruism means healthcare interventions for people in third-world countries
antithesis: actually, effective altruism means maximizing the odds that the human species will expand to the stars so they may utilize (hah!) as much energy in our light cone as possible! and to do this we need to, idk, give grant money to space startups?
synthesis: people of equal talent to einstein have lived and died in cotton fields and sweatshops, the best way to secure that light cone is through third-world healthcare interventions
38 notes
Link
Now, the most extreme version of this philosophy is gaining a huge amount of power and influence in Silicon Valley, where upper-middle-class technologists believe it is their duty to repopulate the planet with more upper-middle-class white technologists. Simone and Malcolm Collins are leading the charge, out to have 10 children as quickly as possible, who they will indoctrinate to each have 10 of their own, and so on. They hope that, within 11 generations, their genetic legacy will outnumber the current human population.
Both Silicon Valley graduates, the Collins have created a matchmaking service for the wealthy elite looking to have huge families, and an education institute for these "gifted" children. They say it is vital that the 0.1% have large families so that these gifted children can save humanity, pushing a dangerous rhetoric that we need a particular kind of people being born: people who are educated a certain way, with access to resources and decision-making spaces, and, of course, white.
Of course, the Collins don’t say outright that these people must be white, but why else propagate one’s own lineage rather than adopt 10 children, desperately in need of those same resources, who already exist?
The movement looks an awful lot like white supremacy dressed up as techno-utopian utilitarianism. And it’s garnering traction. Elon Musk may not be directly affiliated with the Collins’ particular brand, but the father of 10 is an ardent believer that our biggest danger is population collapse, and regularly tweets out the necessity for certain demographics to have more children. He, like many others, is what Nandita calls “ecologically blind.”
There are millions of children in need around the world, and billions of people living in sub-standard conditions who need access to more resources. Many of these people are attempting to enter the very nations clamouring to figure out how to reverse this "population decline". This desperate racism not only highlights yet more evidence of the pathology of inequality and oppression the West built its nations upon, but also raises the now-versus-then problem.
2K notes
Link
I am on record on the subject of science fiction writers predicting the future: we do not. Thank goodness we don’t predict the future! If the future were predictable, then nothing any of us did would matter, because the same future would arrive, just the same. The unpredictability of the future is due to human agency, something the best science fiction writers understand to their bones. Fatalism is for corpses.
(One time, at a science fiction convention, I was on a panel with Robert Silverberg, discussing this very topic, and the subject of Heinlein’s belief in his predictive powers came up. “Oh,” Silverberg sniffed, “you mean Robert A. Timeline?” He’s a pistol!)
Science fiction does something a lot more interesting than predicting the future — sometimes, it inspires people to make a given future, and sometimes, it sparks people to action to prevent a given future.
Mostly, though, I think science fiction is a planchette on a vast, ethereal Ouija board on which all our fingers rest. We writers create all the letters the planchette can point at, all the futures we can imagine, and then readers' ideomotor responses swing the planchette towards one or another, revealing their collective aspirations and fears about the future.
But sometimes, if you throw enough darts, you might hit the target, even if the room is pitch black and even if you’re not sure where the target is, or whether there even is a target.
Lately, I’ve been thinking about three times I managed to, well, not predict the future, but at least make a lucky guess. These three stories — all on the web — have been much on my mind lately, because of how they relate to crisis points in our wider world.
In chronological order they are:
Nimby and the D-Hoppers (2003)
Other People’s Money (2007)
Chicken Little (2011)
Read the rest
260 notes
limeadeislife · 8 months
Text
Something I've been thinking about: as someone who agrees with the principles of effective altruism, should I avoid applying for jobs in the Development/Fundraising departments of nonprofit orgs that aren't considered among the most effective charities? Because if I get one of those jobs, I'd basically be spending my working life trying to convince rich people to direct their money towards something other than the most life-saving thing they could do with it, right?
This question is prompted by the fact that my mom sometimes goes on job sites and sends me postings that she thinks I should apply for, and since she knows I'm interested in nonprofits and have worked for a nonprofit before, a lot of the stuff she sends me ends up being Development jobs at random local organizations
11 notes
utilitymonstermash · 12 days
Text
I was never an Effective Altruist proper, but I did drop $5 in an Anti-Malaria jar at an event in Berkeley once. Are there any Pro-Malaria charities I can donate to, to offset?
2 notes
jbeshir · 2 years
Text
On People Who Hate You and Effective Altruism
Something that I learned a while ago providing support for people doing online fiction was that as soon as your fandom gets reasonably big, people will emerge who want to hurt you. There's something you've written or done that they don't like, something in (what they believe to be) your messaging that they wish other people would listen to less, some way that they think you're crap – and they have the personality type that finds it intensely frustrating when other people don't agree with them and wants to try to get others to do so. These people are a really small minority, but 0.01% of people is enough for an expected one once you have 10,000 people paying attention to you, an expected ten once you have 100,000 people, and so on.
And they'll be wildly unfair to you, send you abusive things, say things optimised to hurt you emotionally, go around trying to convince other people to hate you too. Sometimes put frankly remarkable amounts of work and time into it. And it's really important that you can disengage with people who want to hurt you, as far as you can.
A norm that people must engage with criticism, a never-shifted heuristic that everyone hates criticism and must continually try to engage with it more to counteract that, a cultural assumption that whenever someone says a criticism isn't well-founded it means they're closed-minded and failing to give it enough charity – all of this is, I think, toxic, really psychologically unhealthy, and essentially community-mandated social anxiety. It's important that you're allowed to stop interacting with people who are being unfair and unpleasant to you, and that it's an allowable thought that they might be being exactly that.
I'm pretty sure this dynamic doesn't actually change when you're a group like EA trying to accomplish good instead of just an artist or author. It now matters that you do engage with reasonable critics – a creator isn't really doing anything wrong if they just ignore all feedback, but you can't – but the dynamic that produces people who hate you still exists, it's still very harmful to your wellbeing and functioning to be pressured into always believing they must be in-some-sense-right and you must be in-some-sense-wrong, and it's still very important that you're allowed to say "this is wildly unfair, and I don't have to engage with it".
I think people wildly disagree about whether EA has a problem with ignoring critics. I don't personally think it does – "do we pay enough attention to critics" is a recurring topic, and my personal view is that it's rare to see anything even remotely grounded that hasn't been debated to death (e.g. "systemic change" discourse) get ignored in public. I think this post more or less aligns with my experience. But it's very reasonable for people to think this problem does exist and push for more attention to critics in general.
However, I think that some people who think it does, or might, are pressing for the opposite mistake. They think the bigger problem is incorrectly ignoring critics, so any time they see someone declining to engage with something, they don't look at whether it's unfair and mean, and whether disengaging is a reasonable choice. They just jump straight to "that's closed-minded and bad for the community".
And I think this, actually, is also a bad thing to do to people. It would still be a bad thing to do even if you were right about the broader trend. Having scrupulosity issues, anxiety, and a default assumption that anyone saying anything bad about you must be right should not, in fact, be a mandatory part of being in EA spaces.
In judging whether something should be virtuously engaged with, whether the good thing to do is to spend your energy on it rather than anything else, you cannot escape actually putting the effort in to judge: Was this critique accurate or not? Was it informed, or uninformed? Was it, if none of those things, at least motivated by a desire to help, or by a desire to make more people hate you because they dislike you for reasons litigated elsewhere? And if it's not, and you try to press people to worry about it more, you're not being the good force for community health you think you are.
55 notes
lilbluntworld · 2 years
Text
I. THE RATE OF LOSS OF POTENTIAL LIVES
As I write these words, suns are illuminating and heating empty rooms, unused energy is being flushed down black holes, and our great common endowment of negentropy is being irreversibly degraded into entropy on a cosmic scale. These are resources that an advanced civilization could have used to create value-structures, such as sentient beings living worthwhile lives.
Given these estimates, it follows that the potential for approximately 10^38 human lives is lost every century that colonization of our local supercluster is delayed; or equivalently, about 10^29 potential human lives per second.
“Astronomical Waste: The Opportunity Cost of Delayed Technological Development,” Nick Bostrom, 2003
64 notes
msilverstar · 5 months
Text
Fandom (kinda) and OpenAI
So, the new CEO of OpenAI is a guy named Emmett Shear, who is featured in a quick cameo at the end of Harry Potter and the Methods of Rationality. That is an AU fanfic where Harry's uncle is a nice philosophy professor who teaches him to calculate odds on everything with cold logic and rationality, and the first year at Hogwarts goes very differently. It was not a great fanfic, and certainly not our kind of fic that centers characters, emotions, and relationships, but a lot of people really loved it.
It was written as an introduction to a cult of "rationality", which led to the Effective Altruists: the ones who think climate change is no big deal but are terrified of the tiny chance they calculate that computers will become conscious (AGI – Artificial General Intelligence) and choose to destroy humanity. They are also really into cryptocurrency, like Sam Bankman-Fried. I don't really understand why they are the same guys who are doing all the AI research and starting all the AI companies, but I suspect they want some of the billions of dollars sloshing around and are arrogant enough to think they can avert any bad consequences.
This timeline is truly weird.
6 notes