I initially thought the entity plot in DR was extremely goofy, like what do you mean a computer “controls the very truth itself” (<- extremely annoying grad student voice), and what a hysterical thing for the CIA of all agencies to be concerned about, but honestly I’m on board with it now because after rewatching dead reckoning again, it is so clearly a movie responding to concerns about like, digitally mediated medical + political disinformation during covid, and the fact that the background tension animating the film is that every government wants to get control of the entity in order to centralise and use this disinformation mechanism does feel very much like a response to the political responses to the pandemic (particularly from the US/UK/Canada/etc). obviously I’m not expecting any deeper engagement with like the political production of truth and knowledge or anything like that but this anxiety feels more cohesive and legitimate than I initially figured
#mi.txt#mission impossible#mi7#saw people say DR is about AI and I just disagree lol#like obviously that element is there but thematically I don’t think that is the concern being engaged with#1) because the current AI panic is more recent than when this film was shot and 2) the actual impact of disinformation spreading through#social media as a result of content moderation + far right cultivation on eg Facebook Twitter/X etc during COVID#is much wider reaching and has much larger consequences socially politically culturally and so on than like. chatgpt lol#The entity being an AI is I think more productively understood as a centralising metaphor for this threat & not meant to be understood#literally as an AI specifically representing chatgpt/image generators#meta
Is AWAY using its own program or is this just a voluntary list of guidelines for people using programs like DALL-E? How does AWAY address the environmental concerns of how the companies making those AI programs conduct themselves (energy consumption, exploiting impoverished areas for cheap electricity, destruction of the environment to rapidly build and get the components for data centers, etc.)? Are members of AWAY encouraged to contact their gov representatives about IP theft by AI apps?
What is AWAY and how does it work?
AWAY does not "use its own program" in the software sense. Rather, we're a diverse collective of roughly 1,000 members who each have their own workflows and approaches to art. While some members do use AI as one tool among many, most people in the server are actually traditional artists who don't use AI at all, yet are still interested in ethical approaches to new technologies.
Our code of ethics is a set of voluntary guidelines that members agree to follow upon joining. These emphasize ethical approaches to AI (preferably open-source models that can run locally), respecting artists who oppose AI by not training on their art or imitating their styles, and refusing to use AI to undercut other artists or to work for corporations that similarly exploit creative labor.
Environmental Impact in Context
It's important to place environmental concerns about AI in the context of our broader extractive, industrialized society, where there are virtually no "clean" solutions:
The water usage figures for AI data centers (200-740 million liters annually) represent roughly 0.00013% of total U.S. water usage. This is a small fraction compared to industrial agriculture or manufacturing. For example, golf course irrigation alone in the U.S. consumes approximately 2.08 billion gallons (about 7.87 billion liters) of water per day, or roughly 2.9 trillion liters annually. That makes AI's water usage about 0.03% of just golf course irrigation.
Looking at individual usage, the average American consumes about 26.8 kg of beef annually, which takes around 1,608 megajoules (MJ) of energy to produce. Making 10 ChatGPT queries daily for an entire year (3,650 queries) consumes just 38.1 MJ, about 1/42nd of the energy of that beef. In fact, a single quarter-pound beef patty takes roughly 650 times more energy to produce than a single AI query.
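For anyone who wants to check the beef comparison themselves, here is a quick sanity check. Every input below is taken from the figures in the paragraph above; none are independent measurements.

```python
# Sanity check of the beef-vs-queries energy comparison.
# All inputs are the post's own figures, not independent measurements.
beef_kg_per_year = 26.8            # average US beef consumption (kg)
beef_mj_total = 1608.0             # energy to produce that beef (MJ)
query_mj = 38.1 / 3650             # MJ per ChatGPT query (~0.0104 MJ)

yearly_queries_mj = 10 * 365 * query_mj          # 10 queries/day for a year
print(round(beef_mj_total / yearly_queries_mj))  # → 42

patty_mj = 0.1134 * (beef_mj_total / beef_kg_per_year)  # quarter-pound patty
print(round(patty_mj / query_mj))  # → 652
```

The ratios come straight out of the stated figures, so the comparison holds as long as the underlying per-query and per-kilogram estimates do.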
Overall, power usage specific to AI represents just ~4% of total data center power consumption, which is itself a small fraction of global energy usage: current annual energy use for AI workloads is roughly 9-15 TWh globally, well under a tenth of a percent of world electricity generation.
The consumer environmentalism narrative around technology often ignores how imperial exploitation pushes environmental costs onto the Global South. The rare earth minerals needed for computing hardware, the cheap labor for manufacturing, and the toxic waste from electronics disposal disproportionately burden developing nations, while the benefits flow largely to wealthy countries.
While this pattern isn't unique to AI, it is fundamental to our global economic structure. The focus on individual consumer choices (like whether one should use AI, for art or otherwise) distracts from the much larger systemic issues of imperialism, extractive capitalism, and global inequality that drive environmental degradation at a massive scale.
They are not going to stop building the data centers, and they weren't going to stop even if AI had never been invented.
Creative Tools and Environmental Impact
In actuality, all creative practices have some sort of environmental impact in an industrialized society:
Digital art software (such as Photoshop, Blender, etc.) typically draws 60-300 watts while running, depending on your computer's specifications. An hour-long session therefore uses more energy than dozens of AI image generations, potentially hundreds or even thousands if you are using a particularly lightweight model.
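To put that in concrete terms, here is a rough sketch. Both figures are illustrative assumptions, not measurements: 200 W as a mid-range workstation within the 60-300 W range above, and ~2.9 Wh per image as a high-end estimate for a large diffusion model.

```python
# Rough comparison: one hour of digital-art work vs. AI image generations.
# Figures are illustrative assumptions, not measurements.
workstation_watts = 200            # mid-range desktop under load
session_wh = workstation_watts * 1 # one hour of Photoshop/Blender use (Wh)
wh_per_image = 2.9                 # high-end estimate for one generated image

print(round(session_wh / wh_per_image))  # → 69 images' worth of energy
```

Smaller local models use far less energy per image, which only widens the gap.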
Traditional art supplies rely on similar if not worse scales of resource extraction, chemical processing, and global supply chains, all of which come with their own environmental impact.
Paint production requires roughly thirteen gallons of water to manufacture one gallon of paint.
Many oil paints contain toxic heavy metals and solvents, which can contaminate groundwater.
Synthetic brushes are made from petroleum-based plastics that take centuries to decompose.
That being said, the point of this section isn't to deflect criticism of AI by criticizing other art forms. Rather, it's important to recognize that we live in a society where virtually all artistic avenues have environmental costs. Focusing exclusively on the newest technologies while ignoring the environmental costs of pre-existing tools and practices doesn't solve any of these problems, now or in the future.
The largest environmental problems come not from individual creative choices, but rather from industrial-scale systems, such as:
Industrial manufacturing (responsible for roughly 22% of global emissions)
Industrial agriculture (responsible for roughly 24% of global emissions)
Transportation and logistics networks (responsible for roughly 14% of global emissions)
Making changes on an individual scale, while meaningful on a personal level, can't address systemic issues without broader policy changes and overall restructuring of global economic systems.
Intellectual Property Considerations
AWAY doesn't encourage members to contact government representatives about "IP theft" for multiple reasons:
We acknowledge that copyright law overwhelmingly serves corporate interests rather than individual creators
Creating new "learning rights" or "style rights" would further empower large corporations while harming individual artists and fan creators
Many AWAY members live outside the United States, in countries that have been directly harmed by U.S. policy, and thus understand that intellectual property regimes are often tools of imperial control that benefit wealthy nations
Instead, we emphasize respect for artists who are protective of their work and style. Our guidelines explicitly prohibit imitating the style of artists who have voiced their distaste for AI, operating on an opt-in model under which traditional artists can grant, and later revoke, permission as they see fit. This approach is about respect, not legal enforcement. We are not a pro-copyright group.
In Conclusion
AWAY aims to cultivate thoughtful, ethical engagement with new technologies, while also holding respect for creative communities outside of itself. As a collective, we recognize that real environmental solutions require addressing concepts such as imperial exploitation, extractive capitalism, and corporate power—not just focusing on individual consumer choices, which do little to change the current state of the world we live in.
When discussing environmental impacts, it's important to keep perspective on a relative scale, and to avoid ignoring major issues in favor of smaller ones. We promote balanced discussions based in concrete fact, with the belief that they can lead to meaningful solutions, rather than misplaced outrage that ultimately serves to maintain the status quo.
If this resonates with you, please feel free to join our discord. :)
Works Cited:
USGS Water Use Data: https://www.usgs.gov/mission-areas/water-resources/science/water-use-united-states
Golf Course Superintendents Association of America water usage report: https://www.gcsaa.org/resources/research/golf-course-environmental-profile
Equinix data center water sustainability report: https://www.equinix.com/resources/infopapers/corporate-sustainability-report
Environmental Working Group's Meat Eater's Guide (beef energy calculations): https://www.ewg.org/meateatersguide/
Hugging Face AI energy consumption study: https://huggingface.co/blog/carbon-footprint
International Energy Agency report on data centers: https://www.iea.org/reports/data-centres-and-data-transmission-networks
Goldman Sachs "Generational Growth" report on AI power demand: https://www.goldmansachs.com/intelligence/pages/gs-research/generational-growth-ai-data-centers-and-the-coming-us-power-surge/report.pdf
Artists Network's guide to eco-friendly art practices: https://www.artistsnetwork.com/art-business/how-to-be-an-eco-friendly-artist/
The Earth Chronicles' analysis of art materials: https://earthchronicles.org/artists-ironically-paint-nature-with-harmful-materials/
Natural Earth Paint's environmental impact report: https://naturalearthpaint.com/pages/environmental-impact
Our World in Data's global emissions by sector: https://ourworldindata.org/emissions-by-sector
"The High Cost of High Tech" report on electronics manufacturing: https://goodelectronics.org/the-high-cost-of-high-tech/
"Unearthing the Dirty Secrets of the Clean Energy Transition" (on rare earth mineral mining): https://www.theguardian.com/environment/2023/apr/18/clean-energy-dirty-mining-indigenous-communities-climate-crisis
Electronic Frontier Foundation's position paper on AI and copyright: https://www.eff.org/wp/ai-and-copyright
Creative Commons research on enabling better sharing: https://creativecommons.org/2023/04/24/ai-and-creativity/
So while I was in Switzerland, I had the really neat opportunity to visit my daughter’s college and sit in on two big project presentations.
I was… really, really disturbed by the amount of AI the students use. Like. Ok for the one class that everyone hated and no one gave a shit about, I totally understand the students turning to AI to help fill in info for a boring presentation that most of them apparently only started the night before. Sure. It’s shitty but I get it, honestly I probably would have snuck a few assignments in this way myself in a few different classes.
But then, even in the fun class project that everyone was excited about, there was still SO MUCH AI usage. And not even secret “I hope the teacher doesn’t notice this is all from chat gpt”, like, proud AI usage, like, yes, my final project is just a whole bunch of images that I got from AI, then collected together as if this represents actual work that I did. Shocking. Especially with me coming from an art college and imagining what my teachers would have said if I had ever DARED to hand in a stack of AI images and called it my project.
The teachers seem to not really know what to do about AI. Just last year they were teaching the students about the possibilities of this interesting new technology, and now just one year later, the teachers are dealing with a pile of worthless assignments written by chatgpt. The students are using it whether the teachers like it or not, and there’s not much you can do to stop them from doing it, and even if you wanted to stop them, in many cases it’s hard or even impossible to tell if it’s been used or not. But I had a really interesting conversation with one of my daughter’s teachers about it.
If AI can write papers and presentations for students, maybe instead of figuring out how to crack down on AI usage, the schools really need to radically shift what they are grading and why. If AI can spit out whatever you want, then what is it that the human brings to the equation?
So for instance: in this particular class, the students had an extremely broad assignment to make a project analyzing and transforming European mythology into a contemporary setting, somehow. Some of the students came up with some really interesting projects ranging from children’s books, a video game, a hypothetical interactive map of European dragon locations, etc. One student really did just literally output a big stack of dragon pictures generated by AI, bound them into a book, and called that their project. On the other hand, my daughter made a journal of an explorer who had got lost. Now she did use AI to generate the first draft for some of her journal entries (which made me grumble - I think even I, who does not write, would have enjoyed writing the entries from scratch?) but the idea of the journal was hers, the narrative it told was her idea, and she aged the pages and illustrated them with diagrams and bound the book in leather by hand and presented her project with an entire display of hypothetical found objects and an explanation of the narrative of the hero’s journey, and how her journal fit into that.
So like. AI might be able to write papers and make pictures, but AI can’t supply the thought process behind it, or the gathering of the physical objects and the tactile experience of the book. The specifically human work is what’s valuable, and schools now face the challenge of reevaluating the kinds of assignments they give and the way they are assessed.
Or something. Made me think a lot.
"pro-ai" people. I think you ought to unpack that a tad. I think their might be some assumptions in there doing an awful lot of heavy lifting that may not be all that valuable.
All of your examples require an internet connection, and for those services to be live on the web. Even templates, you either have to download them or the specialized software that runs them.
All of these require information access on remote servers, and they each require their own, individual process, and for those individual services to be maintained in a way that is free or affordable.
An llm is a generalist. that can do all of these, and a lot more things, in one installation, that can still be on your machine and run if your connection goes down. Should you take its medical advice? Probably not. Is error checking over? No. Is that still very handy to format an email when FormatAnEmail.com is down? uh. yeah.
The choice of whether or not to use AI should depend on the task and how good AI is at it and the other resources at your disposal.
this false "pro-ai" and "anti-ai" dichotomy is silly, and is predicated on an assumption that "AI" is always going to meet some criteria that is objectionable in all cases for all reasonable people that isn't true.
I see where this argument comes from, but the evils of "ai" are actually the evils of like 3 companies and I would argue its more useful as consumers to recall that, considering these companies are trying to sell it to you. But these evils aren't true because of the ai. They're true because of the companies trying to win an arms race.
"Ai is evil"
not even true of most llms or most llm research.
It applies with the most parity to the 2 or 3 models we see being advertised the most, and the companies attached in the public zeitgeist, but those models are fairly extreme outliers for their size, complexity, resource use, and the audacity of their holding companies. They don't really represent a representative sample of the qualities any LLM *must* have or will have. Most research in llms in the last several years has been about making them smaller and more resource efficient.
I would argue there is no useful "anti-ai" and "pro-ai" identity. There are consumers angry about specific models and generalizing those shortcomings to an entire paradigm of computing based on the first one or two things they've heard about "AI" and deciding they're all of the devil, and... people who didn't do that.
"Using ai" does not necessarilly mean chatgpt. We have language models that are open source. That are local. That are private. That do not put money directly in the hands of any company. One that's 260 something megabytes for the weights. a hundred or so million parameters in comparison to chatgpts estimated trillions. LLMs run all gamuts of size, resource use, "intelligence" and data provinance and once training is done the model itself is just basically a ROM image. There is very little you can say about the implementation of chatgpt that would apply to every llm, not now, not a year ago, and especially not moving forward.
It's a tool. Not a demon. And for things the tool is less good at, finetuning the tool is entirely possible and is accessible to anyone who can follow a basic python tutorial. we are not at the mercy of these companies or their doings to provide ourselves access to the tool nor are we necessarily culpable to any problem you can name in AI unless we specifically choose to be. And we can choose specifically not to be and still use AI by using more ethical ai models. Usage of an LLM does not preclude that foul play is being rewarded in some way with patronage to any specific dealer.
Deciding there's "Anti-AI" and "Pro-AI" is just pretend internet politics. It's two straw men jerking off while they hold eye contact to assert dominance. People who want to use AI in a more ethical way were already capable of doing that, for... literal years. and no one who's thinking very critically and is abreast of the options available is wearing either of those hats with their whole pussy because they just don't make any sense. You can't really hear "using ai" and be sure something that should be boycotted is happening. And I personally am not deciding to fight over how someone writes their grocery list. I literally just. could not be fucked to care about anything much less. "This nasty bitch wrote her grocery list with token generation" idgaf. I just... don't.
I think a lot of what pro-AI people are really wanting is stuff that already exists but they don't know it's out there like
can't format a work email? templates
don't know how to write a resume? templates
writing a thank you card or a condolences card or a wedding invitation? templates templates templates
not sure how to format your citations in MLA or whatever format? citationmachine.net
summary of something you're reading for school/work? cliffsnotes.com
recipe based on ingredients in your fridge? whatsintherefrigerator.com
there's a million more like, guys, we don't need AI, we never needed generative AI