four-ravens-in-a-trenchcoat
[Good title here]
2K posts
I'm Raven (she/they), Swedish (in case you wonder about the occasional posts in Swedish) | Welcome to the chaos blog, this is where I hoard all the posts I like and sometimes create stuff | The beautiful icon creature is made with this picrew: picrew.me/share?cd=RzvZ0TCMB9
Text
my sewing machine does exactly one thing, and yet every single time I try to sew something with it I feel like I've put my baby in the pelican's mouth. over and over again.
5K notes · View notes
Text
Tumblr media
38K notes · View notes
Text
Tumblr media Tumblr media
consider this field HARVESTED
347 notes · View notes
Photo
Tumblr media
(via Ravelry: AMerryweather’s Blanket of British birds)
7K notes · View notes
Text
How much water does chatGPT actually use?
I've been curious about the actual chatGPT water consumption numbers for a while, and then I ran across this post about fact checking (OP is @gloamses, reblogged by @stylish-suidae — tagging you to show your post inspired me) and decided to do the research.
Tl;dr: gloamses says you should fact check things before sharing, especially when you agree with the point made. stylish-suidae tells us about the "chatGPT uses 1-3 bottles of water per query" figure and how the "fact" has undergone a game of telephone with gradual embellishments. The article linked in the OP of the post popularizing the number says 500 ml for every 5-50 queries. They provide a link on water use in data centers as a starting point for figuring out if that's a lot.
So, the reblog mentions a Data Center Knowledge article which introduces water usage effectiveness (WUE) for data centers: a measure of how much water a data center uses compared to how much energy its IT equipment uses. A lower number is better, since it means more efficiency. The page also says that WUE calculations sometimes include water used for non-cooling purposes, like toilets in the building, if there is no separate water meter for the cooling system.
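If you want the definition as something you can poke at, WUE is just water divided by IT energy. This little sketch is mine, not from the article, and the example facility is made up:

```python
def wue(water_liters: float, it_energy_kwh: float) -> float:
    """Water usage effectiveness: liters of water used per kWh of IT energy."""
    return water_liters / it_energy_kwh

# A made-up facility: 1.8 million liters of water for 1 GWh of IT energy
print(wue(1_800_000, 1_000_000))  # 1.8 liters/kWh
```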
A problem with WUE is that it doesn't account for the water source (drinking water, or treated wastewater that would otherwise be released into local waters?) or for whether the used water goes through wastewater treatment or is released into nature where possible (the main reason water gets released seems to be mineral buildup, and the cooling process doesn't typically pollute the water).
This 2025 article by TechTarget claims the average WUE is 1.8 liters/kWh. It also goes over numbers released by some big tech companies: Amazon claims 0.15 l/kWh and Microsoft 0.3 l/kWh, a lot lower than average, though the article also says these numbers have been disputed and might be higher in reality. It adds that these companies have compensation programmes for water usage, which also shows that WUE isn't everything.
How water efficient is chatGPT then?
The 500 ml of water per 5-50 queries seems to come from a 2023 paper: Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models. This paper discusses the AI water footprint in three scopes: on-site use for cooling, off-site use from electricity generation (mostly cooling of power plants), and water used in server hardware manufacturing (which is apparently a lot).
It also differentiates between water withdrawal ("freshwater taken from the ground or surface water sources") and water consumption ("the amount of water 'evaporated, transpired, incorporated into products or crops, or otherwise removed from the immediate water environment'"). A lot of water used for cooling evaporates, and you can only reuse the rest a few times before replacing it with fresh water due to the risk of mineral and bacteria buildup, which leads to a big water consumption footprint.
They then use GPT-3 as a case study: there is no data on the manufacturing water footprint, but they estimate how many requests 500 ml of water gets you when processed in different data centers around the world, factoring in on-site and off-site use under local conditions.
According to the paper, you get 10.5-70.4 requests per 500 ml of water depending on the location, which translates to 7.1-47.6 ml of water per request. The median is 30.5 requests per 500 ml, or 16.4 ml per request, which might be a better figure to use.
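Since numbers like these are easy to garble in retelling (see: the whole point of this post), here's the conversion spelled out. The figures are from the paper; the code is just me double-checking the division:

```python
def ml_per_request(requests_per_500ml: float, bottle_ml: float = 500.0) -> float:
    """Convert 'requests per 500 ml bottle' into 'ml of water per request'."""
    return bottle_ml / requests_per_500ml

print(round(ml_per_request(70.4), 1))  # 7.1 ml  (most efficient location)
print(round(ml_per_request(10.5), 1))  # 47.6 ml (least efficient location)
print(round(ml_per_request(30.5), 1))  # 16.4 ml (median)
```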
The paper summarizes this elsewhere as "GPT-3 needs to “drink” (i.e., consume) a 500ml bottle of water for roughly 10 – 50 medium-length responses, depending on when and where it is deployed". That's more queries than the 5-50 figure cited elsewhere, cutting the maximum per query from 100 ml to 50 ml, though 16.4 ml is a more realistic figure for comparison.
Now, what is a request in this calculation? They define it as "a medium-sized request, each with approximately ≤800 words of input and 150 – 300 words of output".
And what estimate of power usage per request do they use? "The official estimate indicates that GPT-3 consumes an order of 0.4 kWh of electricity to generate 100 pages of content, equivalent to roughly 0.004 kWh per page [11]. While no details are provided, the estimate likely considers only the GPU energy used during token generation. To account for both the prompt phase and the non-GPU energy consumption of servers, we assume a per-request server energy consumption of 0.004 kWh for our conversation task."
First off, this official estimate comes from the 2020 paper Language Models are Few-Shot Learners, which (full disclaimer) I did not read except to check the numbers. It seems to be written by people involved in training GPT-3, and is mostly about something else.
Second, the 2023 paper's power estimate for generating 150-300 words is as high as the official estimate for generating an entire page of text, since the official number probably doesn't include the "CPU-intensive prompt phase that processes the user’s input" or the server energy consumption not directly related to GPU token generation (= generating text).
On average an A4 page fits 400-450 words, which means they raise the energy estimate by roughly 30-200% (let's simplify it to doubling). I'm not the person to judge whether this is reasonable, but if the official estimate only covers token generation, increasing it seems fair enough.
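That 30-200% figure is just the ratio of words per page to words per answer, since both get priced at the same 0.004 kWh. Quick sanity check (my own arithmetic, not from either paper):

```python
def energy_inflation(words_per_page: float, words_per_answer: float) -> float:
    """Both quantities are assigned 0.004 kWh, so the per-word inflation
    factor is simply words-per-page divided by words-per-answer."""
    return words_per_page / words_per_answer

print(energy_inflation(400, 300))  # ~1.33x, i.e. roughly +30% in the best case
print(energy_inflation(450, 150))  # 3.0x, i.e. +200% in the worst case
```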
I found a blog post from October 2024 by Sean Goedecke, who seems to work in AI software and criticizes the 2023 paper. First, he points out that the power consumption "figure in the 2020 paper is per-page, which the 2023 paper interprets as per-request" and that "an average ChatGPT conversation is much shorter than [10-70 pages], more in the order of 3-8 messages, or 1-2 pages at most". As I wrote above, this criticism misses that the 2023 paper included energy consumption beyond pure generation, and that it concerned generated answers taking up 1/3-3/4 of a page.
His other point is more worth considering: a lot has happened since 2020. I don't follow the technical details, but the gist is that he estimates current models are ten times more energy efficient than GPT-3 and would therefore use less water. That's a fair point, and it means the 2023 estimate is probably somewhat high.
So what about more recent figures? Sam Altman, CEO of OpenAI, released some stats in June 2025 in a blog post mostly about something else: "the average query uses about 0.34 watt-hours [...] It also uses about 0.000085 gallons [~0.32 ml] of water". This is considerably less than the 2023 figure of 7.1-47.6 (or 16.4) ml per medium query. Note that Altman did not define query length.
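For anyone else who doesn't think in US gallons, here's the conversion Altman's figure implies (standard gallon-to-ml factor, arithmetic mine):

```python
US_GALLON_ML = 3785.411784  # 1 US liquid gallon in milliliters

water_per_query_ml = 0.000085 * US_GALLON_ML
print(round(water_per_query_ml, 2))  # 0.32 ml per query
```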
Is this believable? I found an article from Towards Data Science which discusses the plausibility of Altman's figures. They are in the range of independent estimates of newer models' energy use, but there are reasons to be sceptical. I'm borrowing this passage:
"I suspect the number only includes the energy used by the GPU servers themselves, and not the rest of the infrastructure – such as data storage, cooling systems, networking equipment, firewalls, electricity conversion loss, or backup systems. This is a common limitation in energy reporting across tech companies."
The article also notes that Altman's blog post was very vague about what the numbers were based on: What is an average query? Only text, or also more complex multimodal tasks? Which model(s)? Does the water figure only include cooling, or also water consumption related to energy generation? (It definitely does not take hardware production or energy/water usage during training into account.)
So, where does that leave us?
The 2023 paper estimates 7.1-47.6 (median 16.4) ml of water per 0.004 kWh for one query, which is likely too high (maybe it's closer to the lower end on average now?). Altman says 0.32 ml of water per 0.00034 kWh, which is likely too low.
I'm not sure that WUE is applicable to query-level computations, since it's meant for facility-wide measurements, and Altman's estimate almost certainly ignores a lot of both power and water usage, but I want to compare the numbers anyway. The 2023 paper figures work out to 1.775-11.9 liters/kWh, and Altman's figures to 0.94 liters/kWh. Idk exactly what this means, but depending on the data center a lot of them are probably around or under the 1.8 liter average WUE. Or not. Even Altman's estimate is a lot higher than what Amazon and Microsoft claim, which might mean AI is worse than conventional big tech. Or the number is off because the power estimate is too low.
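Redoing that division myself, since the ml-to-liter step is where it's easy to slip a factor of 1000 (the per-query inputs are the figures above; the arithmetic is mine):

```python
def implied_wue(water_ml_per_query: float, kwh_per_query: float) -> float:
    """Implied liters of water per kWh, from per-query water and energy figures."""
    return (water_ml_per_query / 1000) / kwh_per_query

# 2023 paper: 7.1-47.6 ml per query at 0.004 kWh per query
print(round(implied_wue(7.1, 0.004), 3))   # 1.775 l/kWh
print(round(implied_wue(47.6, 0.004), 1))  # 11.9 l/kWh
# Altman: 0.32 ml per query at 0.00034 kWh per query
print(round(implied_wue(0.32, 0.00034), 2))  # 0.94 l/kWh
```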
I've seen a lot of different WUE numbers floating around, and if AI water/energy consumption is in line with other data centers, the real question might be whether we could save water/energy by simply using more efficient technology (like search. But now Google Search adds an AI summary on top of searching, which must consume more energy, so idk. In 2009 a Google search used 0.0003 kWh per query, which is better than Altman's optimistic and probably slimmed-down 2025 estimate for chatGPT. Someone else please research more)
I don't have a definite answer to how much water a chatGPT query consumes, but what I can say is that the answer is very probably somewhere between 0.32 and 47.6 ml for a 150-300 word answer to a <800 word prompt, maybe even between 0.32 and 16.4 ml, not factoring in training and hardware manufacturing. Those last numbers mean 30.5 to 1562.5 queries per bottle of water (but again, that last number seems improbable). Now you know.
(Post is dated 30 June 2025)
7 notes · View notes
Text
For those interested in further research on chatgpt water use, I fell down a rabbit hole here after reading this post and it got long enough to be its own post
with love. you have to fact check shit. yes you. you still have to fact check shit. a lot of people are great at fact checking stuff they don’t want to be true, but somehow are still absolute ass at fact checking stuff that’s rhetorically convenient to them. even people my age, who I KNOW grew up doing internet/bibliography literacy workshops, and being warned not to believe anything that isn’t reliably sourced, people who DO harp on fact checking conservative output or whatever, are still kneejerk sharing unsourced shit that is partially or wholly untrue or misleading, because it suits whatever narrative they’re pursuing in that moment, without even a “take this with a grain of salt”. fact check!!!!!! look at the sources!!!!! yes it’s a drag!!! do it!!!!!
29K notes · View notes
Text
At the gate for my flight home from visiting friends and there's a woman here with a service Shiba Inu. No pics because he has a Do Not Disturb vest and taking pics of strangers is illegal, but I need to stress how ON DUTY this animal is. Ears up. Eyes doing laser scans of everything. Examining everyone who passes within 10 ft like a security guard. Ass planted on her feet. I have never seen a dog with such intense chivalric guardian energy before. He has tiny eyebrows and they are FURROWED with concentration.
186K notes · View notes
Text
Hey did y'all ever think about that if doctors blame all fat people's medical issues on them being too fat without proper investigation and then feel justified in neglecting their medical concerns, then statistically more fat people WILL develop and potentially die from serious health issues and it might not actually be because of the fat when everything comes down to it
31K notes · View notes
Text
After mistakenly transferring to a high school for monsters, you try to tell them that you’re just a regular human. However, because of bizarre coincidences creating apparently inexplicable situations, the school population and teachers come to believe you’re actually an Eldritch Abomination
25K notes · View notes
Text
I think we also need to consider the preparatory cost before you get to the crafting stage as well. This may vary depending on the project, but it's an important factor.
Take tablet weaving: It takes time, patience and lots of thread detangling to set up, but once that part is done you get a lot of crafting per craft.
Sewing has a big upfront cost with measuring, cutting, pinning, ironing etc to do before you sew even one seam, and then it's also low crafting per craft. It's not my favourite (still fun though) and I think this post managed to articulate why
Crocheting has a low upfront cost, usually you can start right away (or after looking up a pattern. Some people make and wash test squares as well, but that's too big of an upfront cost to me so I usually don't). Crafting per craftness varies: A sweater is a lot of crafting per craft, while amigurumi involves more sewing, embroidery and fastening loose ends
For me, having a lot of non-craft to do at the start of a project before I can even do any amount of craft makes it much harder to start a project, or might make me tire of it before really beginning because there's so much to do before the good part
Pioneering a "how much craft per craft" scale that determines how much of the time you spend doing any given handcraft is actually spent on what a layperson would imagine the core of the craft is vs other associated tasks. Spinning? Mostly actual spinning. Sewing? Mostly ironing. Wood and metalworking? Mostly sanding. Weaving? I've only had a chance to do one project, but from what I can gather from my more experienced friends, it seems to be mostly math.
8K notes · View notes
Text
I finished reading The Lord of the Rings for the first time in my life. With all of *vague gesture at everything* this going on.
I Am Not Okay
25K notes · View notes
Text
btw "i hate this entire group of people because one of them abused me" will literally never be a good argument. ever
im sorry you were abused and that sucks and shouldnt have happened. but you cant use that as justification to mistreat other people, either.
185 notes · View notes
Text
Ugandan entrepreneurs Sandra Namboozo (26) and Samuel Muyita (28) have scooped both the People’s Choice and Community Healers awards at the prestigious 2025 Young Inventors Prize, organised by the European Patent Office (EPO). The duo was recognised for their groundbreaking innovation, Karpolax—a biodegradable sachet that preserves fruit freshness without refrigeration. The invention impressed both the public and an independent jury. The sachet, placed inside fruit packaging, emits a blend of plant-based volatile organic compounds derived from cloves, lemongrass, eucalyptus, and wintergreen. These compounds suppress ethylene emissions to slow ripening and protect against mould, fungi, and bacteria.
Tumblr media
In pilot trials conducted with the National Agricultural Research Organisation (NARO), Karpolax extended the shelf life of mangoes from 11 to 33 days. The technology has also been tested successfully on bananas, apples, and oranges.
The innovation offers a critical solution for smallholder farmers in sub-Saharan Africa, where cold storage facilities are often unavailable or unaffordable.
Since its launch in 2020, Karpolax has supported over 100 farmers, 20 exporters, and 250 vendors, helping prevent the spoilage of more than one million tonnes of fruit. The sachets are now being adapted for use with pineapples, capsicum, and berries.
“One sachet can preserve a five-kilogram box of fruit for up to 30 days, after which its active ingredients are used up,” Namboozo explained.
2K notes · View notes