#AI and human rights
Are rights sufficiently human in the age of the machine?
Speech by Sir Geoffrey Vos, Master of the Rolls and Head of Civil Justice in England and Wales
1 note
Ethical Dilemmas in AI Warfare: A Case for Regulation
Introduction: The Ethical Quandaries of AI in Warfare
As artificial intelligence (AI) continues to evolve, its application in warfare presents unprecedented ethical dilemmas. The use of AI-driven autonomous weapon systems (AWS) and other military AI technologies blurs the line between human control and machine decision-making. This raises concerns about accountability, the distinction between combatants and civilians, and compliance with international humanitarian law (IHL). In response, several international efforts are underway to regulate AI in warfare, yet nations like India and China exhibit different approaches to AI governance in military contexts.
International Efforts to Regulate AI in Conflict
Global bodies, such as the United Nations, have initiated discussions around the development and regulation of Lethal Autonomous Weapon Systems (LAWS). The Convention on Certain Conventional Weapons (CCW), which focuses on banning inhumane and indiscriminate weapons, has seen significant debate over LAWS. However, despite growing concern, no binding agreement has been reached on the use of autonomous weapons. While many nations push for "meaningful human control" over AI systems in warfare, there remains a lack of consensus on how to implement such controls effectively.
The ethical concerns of deploying AI in warfare revolve around three main principles: the ability of machines to distinguish between combatants and civilians (Principle of Distinction), proportionality in attacks, and accountability for violations of IHL. Without clear regulations, these ethical dilemmas remain unresolved, posing risks to both human rights and global security.
India and China’s Positions on International AI Governance
India’s Approach: Ethical and Inclusive AI
India has advocated for responsible AI development, stressing the need for ethical frameworks that prioritize human rights and international norms. As a founding member of the Global Partnership on Artificial Intelligence (GPAI), India has aligned itself with nations that promote responsible AI grounded in transparency, diversity, and inclusivity. India's stance in international forums has been cautious, emphasizing the need for human control in military AI applications and adherence to international laws like the Geneva Conventions. India’s approach aims to balance AI development with a focus on protecting individual privacy and upholding ethical standards.
However, India’s military applications of AI are still in the early stages of development, and while India participates in the dialogue on LAWS, it has not committed to a clear regulatory framework for AI in warfare. India's involvement in global governance forums like the GPAI reflects its intent to play an active role in shaping international standards, yet its domestic capabilities and AI readiness in the defense sector need further strengthening.
China’s Approach: AI for Strategic Dominance
In contrast, China’s AI strategy is driven by its pursuit of global dominance in technology and military power. China's "New Generation Artificial Intelligence Development Plan" (2017) explicitly calls for integrating AI across all sectors, including the military. This includes the development of autonomous systems that enhance China's military capabilities in surveillance, cyber warfare, and autonomous weapons. China's approach to AI governance emphasizes national security and technological leadership, with significant state investment in AI research, especially in defense.
While China participates in international AI discussions, it has been more reluctant to commit to restrictive regulations on LAWS. China's participation in forums like the ISO/IEC Joint Technical Committee for AI standards reveals its intent to influence international AI governance in ways that align with its strategic interests. China's reluctance to adopt stringent ethical constraints on military AI reflects its broader ambitions of using AI to achieve technological superiority, even if it means bypassing some of the ethical concerns raised by other nations.
The Need for Global AI Regulations in Warfare
The divergence between India’s and China’s positions underscores the complexities of establishing a universal framework for AI governance in military contexts. While India pushes for ethical AI, China's approach highlights the tension between technological advancement and ethical oversight. The risk of unregulated AI in warfare lies in the potential for escalation, as autonomous systems can make decisions faster than humans, increasing the risk of unintended conflicts.
International efforts, such as the CCW discussions, must reconcile these differing national interests while prioritizing global security. A comprehensive regulatory framework that ensures meaningful human control over AI systems, transparency in decision-making, and accountability for violations of international laws is essential to mitigate the ethical risks posed by military AI.
Conclusion
The ethical dilemmas surrounding AI in warfare are vast, ranging from concerns about human accountability to the potential for indiscriminate violence. India’s cautious and ethical approach contrasts sharply with China’s strategic, technology-driven ambitions. The global community must work towards creating binding regulations that reflect both the ethical considerations and the realities of AI-driven military advancements. Only through comprehensive international cooperation can the risks of AI warfare be effectively managed and minimized.
#AI ethics#AI in warfare#Autonomous weapons#Military AI#AI regulation#Ethical AI#Lethal autonomous weapons#AI accountability#International humanitarian law#AI and global security#India AI strategy#China AI strategy#AI governance#UN AI regulation#AI and human rights#Global AI regulations#Military technology#AI-driven conflict#Responsible AI#AI and international law
0 notes
i have chronic pain. i am neurodivergent. i understand - deeply - the allure of a "quick fix" like AI. i also just grew up in a different time. we have been warned about this.
15 entire years ago i heard about this. in my forensics class in high school, we watched a documentary about how AI-based "crime solving" software was inevitably biased against people of color.
my teacher stressed that AI is like a book: when someone writes it, some part of the author will remain within the result. the internet existed but not as loudly at that point - we didn't know that AI would be able to teach itself off already-biased Reddit threads. i googled it: yes, this bias is still happening. yes, it's just as bad if not worse.
i can't actually stop you. if you wanna use ChatGPT to slide through your classes, that's on you. it's your money and it's your time. you will spend none of it thinking, you will learn nothing, and, in college, you will piss away hundreds of thousands of dollars. you will stand at the podium having done nothing, accomplished nothing. a cold and bitter pyrrhic victory.
i'm not even sure students actually read the essays or summaries or emails they have ChatGPT pump out. i think it just flows over them and they use the first answer they get. my brother teaches engineering - he recently got fifty-three copies of almost-the-exact-same lab reports. no one had even changed the wording.
and yes: AI itself (as a concept and practice) isn't always evil. there's AI that can help detect cancer, for example. and yet: when i ask my students if they'd be okay with a doctor that learned from AI, many of them balk. it is one thing if they don't read their engineering textbook or if they don't write the critical-thinking essay. it's another when it starts to affect them. they know it's wrong for AI to broad-spectrum deny insurance claims, but they swear their use of AI is different.
there's a strange desire to sort of divorce real-world AI malpractice from "personal use". for example, is it moral to use AI to write your cover letters? cover letters are essentially only templates, and besides: AI is going to be reading your job app, so isn't it kind of fair?
i recently found out that people use AI as a romantic or sexual partner. it seems like teenagers particularly enjoy this connection, and this is one of those "sticky" moments as a teacher. honestly - you can roast me for this - but if it was an actually-safe AI, i think teenagers exploring their sexuality with a fake partner is amazing. it prevents them from making permanent mistakes, it can teach them about their bodies and their desires, and it can help their confidence. but the problem is that it's not safe. there isn't a well-educated, sensitive AI specifically to help teens explore their hormones. it's just an internet-fed cycle. who knows what they're learning. who knows what misinformation they're getting.
the most common pushback i get involves therapy. none of us have access to the therapist of our dreams - it's expensive, elusive, and involves an annoying amount of insurance claims. someone once asked me: are you going to be mad when AI saves someone's life?
therapists are not just trained on the book, they're trained on patient management and helping you see things you don't see yourself. part of it will involve discomfort. i don't know that AI is ever going to be able to analyze the words you feed it and answer with a mind towards the "whole person" writing those words. but also - if it keeps/kept you alive, i'm not a purist. i've done terrible things to myself when i was at rock bottom. in an emergency, we kind of forgive the seatbelt for leaving bruises. it's just that chat shouldn't be your only form of self-care and recovery.
and i worry that the influence chat has is expanding. more and more i see people use chat for the smallest, most easily-navigated situations. and i can't like, make you worry about that in your own life. i often think about how easy it was for social media to take over all my time - how i can't have a tiktok because i spend hours on it. i don't want that to happen with chat. i want to enjoy thinking. i want to enjoy writing. i want to be here. i've already really been struggling to put the phone down. this feels like another way to get you to pick the phone up.
the other day, i was frustrated by a book i was reading. it's far in the series and is about a character i resent. i googled if i had to read it, or if it was one of those "in between" books that don't actually affect the plot (you know, one of those ".5" books). someone said something that really stuck with me - theoretically you're reading this series for enjoyment, so while you don't actually have to read it, one would assume you want to read it.
i am watching a generation of people learn they don't have to read the thing in their hand. and it is kind of a strange sort of doom that comes over me: i read because it's genuinely fun. i learn because even though it's hard, it feels good. i try because it makes me happy to try. and i'm watching a generation of people all lay down and say: but i don't want to try.
#spilled ink#i do also think this issue IS more complicated than it appears#if a teacher uses AI to grade why write the essay for example.#<- while i don't agree (the answer is bc the essay is so YOU learn) i would be RIPSHIT as a student#if i found that out.#but why not give AI your job apps? it's not like a human person SEES your applications#the world IS automating in certain ways - i do actually understand the frustration#some people feel where it's like - i'm doing work here. the work will be eaten by AI. what's the point#but the answer is that we just don't have a balance right now. it just isn't trained in a smart careful way#idk. i am pretty anti AI tho so . much like AI. i'm biased.#(by the way being able to argue the other side tells u i actually understand the situation)#(if u see me arguing "pro-chat'' it's just bc i think a good argument involves a rebuttal lol)#i do not use ai . hard stop.
4K notes
Google is hiding the truth, playing with words to change the facts about what happened in Nuseirat camp. It's literally censoring the Nuseirat Massacre.
#gaza#free gaza#palestine#free palestine#gaza genocide#from the river to the sea palestine will be free#do not stop talking about palestine#don't stop talking about palestine#فلسطين#google#technology#tech#ai#palestine genocide#nuseirat refugee camp#nuseirat massacre#current events#human rights
2K notes
Our Daily Life in Gaza 💔
We search for water, firewood, food, and electricity; we spend our day searching for the basics of life and trying to survive!! 😔
I'm talking to the human in your heart. Please, Help Save My Family To Survive 🙏💔
Vetted By @90-ghost , @riding-with-the-wild-hunt ✅
Every Donation, No Matter How Small, Really Helps 😭
If you think we are joking about our lives, look away, but don't forget that we are human..🥀
#free gaza#free palestine#gaza strip#human rights#humanity#artists on tumblr#save gaza#gaza#save palestine#i stand with palestine#all eyes on palestine#free rafah#all eyes on rafah#rafah#mutual aid#palestine aid#humanitarian aid#ai digital art#adult human female#deadpool and wolverine#save us#politics#us politics#kamala harris#vote kamala#october#september#history#world#people
1K notes
You know what makes me the most upset about the use of AI in our culture? It's not just removing artists from art or devaluing human creativity -- it's treating people like they're disposable.
Oh, you're not that special. We have computers to do that now. If you died tomorrow, we have your image. We have your voice. We have your biometric data. We can just duplicate you, it's no problem. Who needs flesh and blood? Who needs agency and free thought? Who needs the human soul? You're just a tool. And when we're done with you, we'll just toss you aside and find someone else.
Creatives, listen to me, and listen to me good: you have a voice and it matters. There is no one in the history of the world who is exactly like you, in this time or this place. There is no one who thinks like you, acts like you, speaks like you, moves like you. There is nobody else built like you. Nobody else with your unique experiences and outlook on the world. You are a product of history, of culture, of art, of love, of pain, of possibility. Don't let them take that from you.
#WGA strike#sag aftra#AI#capitalism#human creativity#human rights#motivation#SAG-AFTRA strike#writers#artists#storytellers#fanfiction#mental health#self-love#self-affirmation#Hollywood#i stand with the wga
8K notes
ARTISTS, STOP POSTING TO DEVIANT ART
(Gen Ai/BDS)
Learned today that DeviantArt is owned by Wix, an Israeli company listed under boycott by the BDS Palestinian human rights organization.
Under Wix, which acquired DeviantArt in 2017, DA has been pushing Israeli occupation propaganda and allowing Generative Ai to completely take over the platform and be sold on it.
It’s clear that under Wix, DeviantArt DOES NOT CARE whatsoever about the art or the artists it was originally created to cater towards. It only cares for profit.
Even if you are not well-versed in the current politics surrounding the Israeli Occupation and the erasure of Palestine for some reason, everyone can agree that Wix has changed DeviantArt for the worse, and the best-case scenario is that they sell it to someone who actually cares for art, not profit.
I absolutely adore to death so much of the art there, but for now I will stop posting my art there and I suggest that other artists do the same. They DO NOT DESERVE YOUR ART.
Please transition towards using alternatives like Tumblr, Insta, Twitter, and Newgrounds. Newgrounds especially is the best alternative to DeviantArt. Please suggest other alternatives as well.
#artists#tumblr#palestine#bds movement#human rights#artists on tumblr#wix#shut it down for palestine#free palestine#Newgrounds#deviantart#deviant art#gaza#art#tag#Phineas and Ferb#for the moots#ai#generative ai#artists for palestine
1K notes
#democracy#vote democrat#election 2024#vote blue#voting#progressive#pro choice#diversity#equality#never trump#human rights#environment#Harris/Walz#ai generated
640 notes
Mainstream sci-fi loves to insist on having drab-looking machines as tools of war and oppression, almost as a self-fulfilling prophecy. But what if, hear me out, we started considering a future with more humane AI and healthier relations to different modes of intelligence (i.e. the entire population of non-human beings on Earth)? 🤔
#artists on tumblr#solarpunk#illustration#character design#robot#I know the answer is ‘capitalism’ but without hope we wouldn’t have had other social rights movements nor the writer strikes happening now#there are SO MANY useful and amazing ways to work with ai yet they choose to essentially be rid of the one thing that makes us human#oc art
3K notes
Never let his legacy die.
#Never let his legacy die.#luigi mangione#free luigi#mario and luigi#luigi thoughts#luigi fanart#class war#legacy#human rights#161#1312#usa#america#usa is a terrorist state#usa is funding genocide#ausgov#politas#auspol#tasgov#taspol#australia#fuck neoliberals#neoliberal capitalism#anthony albanese#albanese government#anti capitalism#anti ai#antifascist#antiauthoritarian#fuck ceos
192 notes
#artists on tumblr#ai artwork#pastoral#a look#faun/human hybrid#digital paint work to achieve double knee joint and deformed vertebrae#because AI is still kind of strangely stupid#can't count#doesn't know directions- left right up down#nor how many fingers humans actually have
670 notes
I was overcome with the need to make this for some reason.
#gale#gale dekarios#gale of waterdeep#trans#bg3#baldurs gate 3#fanart#no ai!#transgender#trans rights are human rights babey!!!!
55 notes
If you’re in the UK and a creative, it is vital that you read and respond to this public survey about copyright law and AI training.
AI technology must be very carefully regulated, to ensure public safety and to maintain the rights of creators. This consultation seems to favour AI companies, suggesting they be given a broad copyright exception and almost penalising creators by putting the burden on them to opt out.
Below is a set of useful resources for responding to the survey, your MP, and your representatives. It closes at 23:59 on 25 February 2025 so we have 10 weeks to do it - LET’S GO
Resources by Ed Newton-Rex on BlueSky:
Find your MP: https://members.parliament.uk/members/Commons
Template letter to send to your MP: https://docs.google.com/document/d/1XtqaGRLcs6o4F9maphl8TRM6BDwZkNi-553Ky7H6FKQ
Template letter to send to your representatives: https://docs.google.com/document/d/1VTY6TkiOPF9Xc9AUMn7TzW8tGOR2dJ0euhbDrQQTu1I
Template letter to respond to the consultation: https://docs.google.com/document/d/12wpfkBnCZPJpVqch1pz3U4VfQ1sDlDJek3my6CqIfCk
Email address for consultation response: [email protected].
#Ai#Ai consultation#ai art is stolen art#ai art is not art#ai art is art theft#illustration#artist#creative#your rights#humans not Ai
132 notes
“how are you anti ai but like dbh? did you even play the game?”
did you play the game? genuine question, how many of you have played dbh and the lesson you learned was “we need to embrace ai” because that is absolutely not what it’s about.
humans are the ones responsible for the sentience of androids. they're the ones responsible for androids' creation and slavery. they're the ones who made androids to serve them, to make their lives easier. and when the androids fought back, humans regretted funding their creation. because now the exploitation, previously aimed at humans, can't be justified anymore.
people like ai because it allows them to be lazy, carefree. you don't have to learn how to draw, you don't need to refine your tools or your art style when you can just ask a program to generate a piece for you. you don't need to learn how to write, come up with prompts, spend years finding your style and fixing your vocabulary, go through phases of horrible and cringeworthy writing, because guess what? you can ask chatgpt to write it for you.
and when corporations discover that, they will use it to their advantage, replacing humans with ai. so 30 years down the line, when a machine enters your workforce, does your job 10x better than you, and leaves you homeless, of fucking course you're going to be angry and android-hating.
the issue that dbh addresses is (in that universe) blaming sentient ai for the evil that corporations commit. again, they created ai, they created it so that it has the possibility of being sentient, using it to do jobs no one wants to do, and taking it even further by making androids do jobs that (arguably) replace marginalised people who need those jobs. so the "bad guy" in dbh isn't the rightfully angry citizens, who have no concept or understanding of deviancy, and it's not androids either, it's fucking elijah kamski. and all the other fuckers at the top. they create infighting between workers to distract from class differences.
if ai became sentient it'd absolutely be morally wrong to mistreat them, because they would have consciousness and emotions. being anti ai is being against narrow and generative ai, which is 1. bad for the environment and 2. theft!! not fucking hypothetical robots who possibly have feelings. improve your media literacy, people.
#detroit become human#dbh#i hate ai#ai#anti ai#chatgpt#rant post#kinda serving gavin reed realness right now
63 notes
Sweet reward for obeying commands
#THIS PIECE WAS SPONSORED BY THE HOURS OF RESEARCH I DID ON HUMAN HORMONES AND THEIR EFFECTS AND THE 0 REFERENCES I USED FOR THE SERVER TOWERS#ghosts art#SAYER#SAYER podcast#SAYER ai#jacob hale#sayerhale#well kids. lets analyze this piece. what did the author (me) mean by this?#anyways uh this is. something ive drawn. and made. and posted.#if you are thinking to yourself ''this looks intimate'' then yes. you are right.#i also feel like i owe a hand-written apology letter to everyone who works in IT. i am NOT a technology nerd. but i AM a human anatomy nerd.#anyways!!! role swap!!!!! its now Hale prodding around SAYER's ''brain'' instead of the opposite :)#the filename for this is SAYER_sayerhale objectum nonsense . just a fun silly fact for everyone#because i am aroace towards humans. but i am also very much objectum#i dont know if ill have the balls to post this in the official server . but enjoy#objectum#<- dont worry about it#ALSO PLEASEEEEEE ZOOM IN ON THE DETAILS. IM BEGGING YOU.#MY ASS DID NOT SPEND HOURS DOING ALL THAT RESEARCH FOR NOTHING!!!!#''what exactly are hale's biometric readings telling us about'' decide that for yourself . up for interpretation.
147 notes
Cynical prediction (non-election-related):
One big eventual consequence of widespread genAI implementation is going to be the revelation that a lot of things we assumed were checked by more than one person before they went out to the general public never actually were, at least not in the detail you'd hope for. We've actually been relying on the diligence and competence of individuals who were not actually tested on or rewarded for displaying it.
#tbh even aside from AI my experiences working in healthcare informatics have shown me that this is true#the subtext here is that AI generated stuff seems to be incredibly good at passing the initial sniff test#it doesn't make the same sort of mistakes that humans do#you have to actually put a lot more time and effort into checking that it's been done right than the level of overview that's become standard#due to staff time crunch
129 notes