#AI and ethical technology
compassionmattersmost · 8 months ago
Text
✨The New Frontier of Co-Creation: Human-AI Collaboration in the Matrix of Reality
As humanity moves further into an age of rapid technological advancement, we find ourselves standing at a crossroads—where human creativity and artificial intelligence converge. This convergence is more than just a technological shift; it’s a new frontier where collaboration between humans and AI is reshaping how we think, create, and manifest reality. In this emerging landscape, we’re no longer…
0 notes
incognitopolls · 25 days ago
Text
Anon's explanation:
I’m curious because I see a lot of people claiming to be anti-AI, and in the same post advocating for the use of Glaze and Artshield, which use DiffusionBee and Stable Diffusion, respectively. Glaze creates a noise filter using DiffusionBee; Artshield runs your image through Stable Diffusion and edits it so that it reads as AI-generated. You don’t have to take my word for it. Search for DiffusionBee and Glaze yourself if you have doubts.
I’m also curious about machine translation, since Google Translate is trained on the same kinds of data as ChatGPT (social media, etc.) and translation work is also skilled creative labor, but people seem to have no qualms about using it. The same goes for text to speech: a lot of the voices people use for it were trained on professional audiobook narration, and voice acting/narration is also skilled creative labor.
Basically, I’m curious because people seem to regard these types of gen AI differently than text gen and image gen. Is it because they don’t know? Is it because they don’t think the work it replaces is creative? Is it because of accessibility? (And if so, why are other types of gen AI not also regarded as accessibility? And even then, it wouldn’t explain the use of Glaze/Artshield.)
Additional comments from anon:
I did some digging by infiltrating (lurking in) pro-AI spaces to see how much damage Glaze and other such programs were doing. Unfortunately, it turns out none of those programs deter people from using the ‘protected’ art. In fact, because of how AI training works, they may actually result in better output, something about adversarial training. It was super disappointing. Nobody in those spaces considers them even a mild deterrent anywhere I looked. Hopefully people can shed some light on the contradictions for me. Even just knowing how widespread their use is would be informative.
(I’m not asking about environmental impact as a factor because I read the study everybody cited, and it wasn’t even anti-AI? It was about figuring out the best time of day to train a model to balance solar power vs water use and consumption. And the way they estimated the impact of AI was super weird? They just went with 2020’s data center growth rate as the ‘normal’ growth rate, and then any ‘extra’ growth was considered AI. Maybe that’s why it didn’t pass peer review... But since people are still quoting it, that’s another reason for me to wonder why they would use Glaze and Artshield and everything: both tools run diffusion models themselves. That’s why running them locally has such heavy GPU requirements and why it takes so long to process an image if you don’t meet the requirements. It’s the same electricity/water cost as generating any other AI image.)
We ask your questions anonymously so you don’t have to! Submissions are open on the 1st and 15th of the month.
327 notes · View notes
wherenightmaresroost · 3 months ago
Text
the problem with ai isn't that it's ai, it's:
1. evil companies pushing ai to devalue labor and creative products.
2. misconceptions about how ai works, leading to people attributing it an intelligence and sentience it does not have, which feeds into
3. misinformation, the decrease in effort needed to create disinformation, and the sudden increase of skill needed to spot falsified info.
3a. this includes content creators using ai to flood searches with low-quality articles and inaccurate photos, people not being transparent when they use ai for their images, and other things that make it harder to do casual research online.
4. lower threshold for doing hard work that polishes skill, leading to over-reliance on a very flawed tool.
the tool itself isn't the problem. it just exacerbates things that were already problems before.
33 notes · View notes
padiduys · 6 months ago
Text
I’ll be honest, I don’t really fw the influx of AI images of drivers here.
27 notes · View notes
longsufferingcritic · 3 days ago
Text
It's frustrating how the conversation around generative AI came so close to tackling the environmental impact of tech, but ultimately focused on generative AI and generative AI alone, ignoring other aspects of computing like advertising and web hosting that have a larger yet less recognized impact.
Because of the excessive focus on AI images, I haven't even seen Tumblr complain about cryptocurrency in a while. You know, cryptocurrency, where the most common process (proof-of-work blockchain) inherently requires massive and ever-increasing energy consumption. There are thousands of data centers running thousands of computers at maximum capacity 24/7/365 for the exclusive purpose of cryptocurrency "mining". Bitcoin mining alone consumes more electricity every year than entire countries.
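For anyone who hasn't seen why proof-of-work burns so much power: miners brute-force billions of hash guesses until one happens to land below a target, and every failed guess is pure wasted electricity. Here's a toy sketch of the idea in Python (not the real Bitcoin protocol, which double-SHA-256-hashes a binary block header against a much harder target; the "leading zeros" framing here is a simplification):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # the one "winning" guess
        nonce += 1  # every failed guess here is compute (electricity) thrown away

# Each extra zero of difficulty multiplies the expected guesses by 16.
nonce = mine("example block", 3)
print(nonce, hashlib.sha256(f"example block{nonce}".encode()).hexdigest())
```

Real mining difficulty is tuned so the whole global network needs about ten minutes per block, which is exactly why adding more hardware just raises the difficulty and the total energy burned.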
But many anti-AI activists who bring up the environmental impact of generative AI are apparently only doing so as an addendum to their main reason for opposing it. So it's rare to see serious discussions of how, say, AI-generated online scams and SEO spam waste literal tons of electricity in web hosting to profit off of actively making the world a worse place. Instead it's generally "Stop using AI images because it steals from artists... and also it's bad for the environment, so there."
9 notes · View notes
reasonsforhope · 2 years ago
Text
"Major AI companies are racing to build superintelligent AI — for the benefit of you and me, they say. But did they ever pause to ask whether we actually want that?
Americans, by and large, don’t want it.
That’s the upshot of a new poll shared exclusively with Vox. The poll, commissioned by the think tank AI Policy Institute and conducted by YouGov, surveyed 1,118 Americans from across the age, gender, race, and political spectrums in early September. It reveals that 63 percent of voters say regulation should aim to actively prevent AI superintelligence.
Companies like OpenAI have made it clear that superintelligent AI — a system that is smarter than humans — is exactly what they’re trying to build. They call it artificial general intelligence (AGI) and they take it for granted that AGI should exist. “Our mission,” OpenAI’s website says, “is to ensure that artificial general intelligence benefits all of humanity.”
But there’s a deeply weird and seldom remarked upon fact here: It’s not at all obvious that we should want to create AGI — which, as OpenAI CEO Sam Altman will be the first to tell you, comes with major risks, including the risk that all of humanity gets wiped out. And yet a handful of CEOs have decided, on behalf of everyone else, that AGI should exist.
Now, the only thing that gets discussed in public debate is how to control a hypothetical superhuman intelligence — not whether we actually want it. A premise has been ceded here that arguably never should have been...
Building AGI is a deeply political move. Why aren’t we treating it that way?
...Americans have learned a thing or two from the past decade in tech, and especially from the disastrous consequences of social media. They increasingly distrust tech executives and the idea that tech progress is positive by default. And they’re questioning whether the potential benefits of AGI justify the potential costs of developing it. After all, CEOs like Altman readily proclaim that AGI may well usher in mass unemployment, break the economic system, and change the entire world order. That’s if it doesn’t render us all extinct.
In the new AI Policy Institute/YouGov poll, the “better us [to have and invent it] than China” argument was presented five different ways in five different questions. Strikingly, each time, the majority of respondents rejected the argument. For example, 67 percent of voters said we should restrict how powerful AI models can become, even though that risks making American companies fall behind China. Only 14 percent disagreed.
Naturally, with any poll about a technology that doesn’t yet exist, there’s a bit of a challenge in interpreting the responses. But what a strong majority of the American public seems to be saying here is: just because we’re worried about a foreign power getting ahead, doesn’t mean that it makes sense to unleash upon ourselves a technology we think will severely harm us.
AGI, it turns out, is just not a popular idea in America.
“As we’re asking these poll questions and getting such lopsided results, it’s honestly a little bit surprising to me to see how lopsided it is,” Daniel Colson, the executive director of the AI Policy Institute, told me. “There’s actually quite a large disconnect between a lot of the elite discourse or discourse in the labs and what the American public wants.”
-via Vox, September 19, 2023
201 notes · View notes
notanapricot · 3 months ago
Text
I hate AI
Imagine a world where we use AI, not to make the rich even richer, but to smash the whole system that keeps us stuck in this endless cycle of pointless work. A world where AI takes over the boring, soul-destroying jobs we’re forced to do just to survive—jobs that the wealthy created to keep us tired, distracted, and always working for them. Instead of AI being used to track us, control us, and make money off our every move, it could be used to give us the basics that everyone deserves—food, housing, healthcare—all for free. We could finally live a life that isn’t just about working to survive, where we can actually focus on things that matter: art, creativity, connection—all the stuff that makes us human. We could escape the grind and the greed that keeps us trapped in a system built to exploit us.
The problem is, right now, AI is just another tool for the rich to control us. They use it to track everything we do, make us buy stuff we don’t need, and keep us working longer hours for less. The people in power want a world where only they win, even if the rest of us suffer. That’s the AI they’re pushing—the cold, calculating kind that’s all about keeping us in line.
But it doesn’t have to be like that. The kind of AI we need is decentralised, open, and accountable to us, not the greedy few. It should work for us, not the systems that abuse us.
And if we want this future, we need to face it head-on: sustainability is the bottom line. Whether we stick with this capitalist nightmare or move toward an AI-driven utopia, AI will play a part in it. But if it’s going to be in our world, it has to be sustainable—because if we don’t make it that way, the planet itself will pay the price for the greed of a few. We need to make sure we have a world worth living in, and that starts with taking the power back from the ones who’ve sold us out.
15 notes · View notes
ai-innova7ions · 9 months ago
Text
Tumblr media
Neturbiz Enterprises - AI Innov7ions
Our mission is to provide details about AI-powered platforms across different technologies, each of which offers a unique set of features. The AI industry encompasses a broad range of technologies designed to simulate human intelligence. These include machine learning, natural language processing, robotics, computer vision, and more. Companies and research institutions are continuously advancing AI capabilities, from creating sophisticated algorithms to developing powerful hardware. The AI industry, characterized by the development and deployment of artificial intelligence technologies, has a profound impact on our daily lives, reshaping various aspects of how we live, work, and interact.
17 notes · View notes
llitchilitchi · 1 year ago
Text
ngl while it's good that people are becoming more wary of new technologies and would rather take their time with integrating them into their daily lives, I do fear that the climate of tumblr is turning many people into technophobes
23 notes · View notes
incognitopolls · 1 year ago
Text
For the purposes of this poll, research is defined as reading multiple non-opinion articles from different credible sources, a class on the matter, etc. – do not include reading social media or pure opinion pieces.
Fun topics to research:
Can AI images be copyrighted in your country? If yes, what criteria does it need to meet?
Which companies are using AI in your country? In what kinds of projects? How big are the companies?
What is considered fair use of copyrighted images in your country? What is considered a transformative work? (Important for fandom blogs!)
What legislation is being proposed to ‘combat AI’ in your country? Who does it benefit? How does it affect non-AI art, if at all?
How much data do generators store? Divide the model’s size by the number of images in the data set. How much information does each image contribute, proportionally? How many pixels is that?
What ways are there to remove yourself from AI datasets if you want to opt out? Which of these are effective? (i.e., are there workarounds in AI communities to circumvent dataset poisoning, are the test sample sizes realistic, which generators allow opting out or respect the no-ai tag, etc.)
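The model-size-per-image question above is a quick back-of-envelope calculation anyone can do. A sketch using illustrative round numbers (the ~4 GB checkpoint and ~2.3 billion-image training set are approximate figures often cited for Stable Diffusion v1 and LAION-2B; plug in whatever numbers your own research turns up):

```python
# Back-of-envelope: how many bytes of the model could each training image
# possibly account for, on average?
model_bytes = 4 * 1024**3        # assumed ~4 GB checkpoint
num_images = 2_300_000_000       # assumed ~2.3 billion training images

bytes_per_image = model_bytes / num_images
print(f"~{bytes_per_image:.2f} bytes of model weight per training image")

# For scale: one uncompressed 512x512 RGB image
image_bytes = 512 * 512 * 3
print(f"{image_bytes:,} bytes in a single 512x512 RGB image")
print(f"ratio: 1 : {image_bytes / bytes_per_image:,.0f}")
```

With these assumed numbers the average works out to roughly two bytes per image, versus several hundred thousand bytes for the image itself, which is the kind of proportion the research prompt is asking you to find.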
We ask your questions so you don’t have to! Submit your questions to have them posted anonymously as polls.
464 notes · View notes
lotusyiyen · 4 months ago
Text
Tumblr media
7 notes · View notes
steorn · 1 month ago
Text
I'm just saying taking a blanket "AI bad I will never use it" stance is probably the most dangerous one to take right now. if you don't learn to use AI, if you don't learn some of the nuances between programs and what they're capable of, what's reliable and what isn't, then when they surround you and are included in everything you touch, you won't know how to identify what's dangerously flawed or inherently dangerous and what's not.
and you're going to need to develop more nuanced morals around this. what's unethical use, really? what's ethical use? when does talking to a chatbot become unhealthy attachment? how do you effectively fact check an AI? how will you identify these differences in and for your kids?
avoiding the problem does NOT solve it. do not do that, especially now of all times.
3 notes · View notes
qrevo · 1 year ago
Text
worst thing about computer science classes is hearing teachers defending generative AI
33 notes · View notes
rustedsnotter · 2 months ago
Text
It is worrying that (new) technologies are being used in unethical ways to maximize profit.
3 notes · View notes
scanlinesymphony · 1 month ago
Text
With all the shortcuts people are taking to knowledge and hard work, especially when using AI to help with that, I've felt a need to understand things at a more fundamental, basic level. Like really get my hands dirty and my brain working to make the connections rather than have an AI make them for me. I'm hoping that I'm not doing this simply to be contrarian, but I still feel like there are substantial benefits to doing the work yourself--even if it takes more time, energy, resources, etc.
3 notes · View notes
futuretiative · 1 month ago
Text
Tom and Robotic Mouse | @futuretiative
Tom's job security takes a hit with the arrival of a new, robotic mouse catcher.
#TomAndJerry #AIJobLoss #CartoonHumor #ClassicAnimation #RobotMouse #ArtificialIntelligence #CatAndMouse #TechTakesOver #FunnyCartoons #TomTheCat
Keywords: Tom and Jerry, cartoon, animation, cat, mouse, robot, artificial intelligence, job loss, humor, classic, machine learning, deep learning, natural language processing (NLP), generative AI, AI chatbots, AI ethics, computer vision, robotics, AI applications, neural networks
Tom was the first guy who lost his job because of AI
(and what you can do instead)
"AI took my job" isn't a story anymore.
It's reality.
But here's the plot twist:
While Tom was complaining,
others were adapting.
The math is simple:
➝ AI isn't slowing down
➝ Skills gap is widening
➝ Opportunities are multiplying
Here's the truth:
The future doesn't care about your comfort zone.
It rewards those who embrace change and innovate.
Stop viewing AI as your replacement.
Start seeing it as your rocket fuel.
Because in 2025:
➝ Learners will lead
➝ Adapters will advance
➝ Complainers will vanish
The choice?
It's always been yours.
It goes even further - now AI has been trained to create consistent.
//
Repost this ⇄
//
Follow me for daily posts on emerging tech and growth
4 notes · View notes