#AI and human rights
dimitrisxenos · 5 months ago
Text
Are rights sufficiently human in the age of the machine?
Speech by Sir Geoffrey Vos, Master of the Rolls and Head of Civil Justice in England and Wales
code-of-conflict · 8 months ago
Text
Ethical Dilemmas in AI Warfare: A Case for Regulation
Introduction: The Ethical Quandaries of AI in Warfare
As artificial intelligence (AI) continues to evolve, its application in warfare presents unprecedented ethical dilemmas. The use of AI-driven autonomous weapon systems (AWS) and other military AI technologies blurs the line between human control and machine decision-making. This raises concerns about accountability, the distinction between combatants and civilians, and compliance with international humanitarian law (IHL). In response, several international efforts are underway to regulate AI in warfare, yet nations like India and China exhibit different approaches to AI governance in military contexts.
International Efforts to Regulate AI in Conflict
Global bodies, such as the United Nations, have initiated discussions around the development and regulation of Lethal Autonomous Weapon Systems (LAWS). The Convention on Certain Conventional Weapons (CCW), which focuses on banning inhumane and indiscriminate weapons, has seen significant debate over LAWS. However, despite growing concern, no binding agreement has been reached on the use of autonomous weapons. While many nations push for "meaningful human control" over AI systems in warfare, there remains a lack of consensus on how to implement such controls effectively.
The ethical concerns of deploying AI in warfare revolve around three main principles: the ability of machines to distinguish between combatants and civilians (Principle of Distinction), proportionality in attacks, and accountability for violations of IHL. Without clear regulations, these ethical dilemmas remain unresolved, posing risks to both human rights and global security.
India and China’s Positions on International AI Governance
India’s Approach: Ethical and Inclusive AI
India has advocated for responsible AI development, stressing the need for ethical frameworks that prioritize human rights and international norms. As a founding member of the Global Partnership on Artificial Intelligence (GPAI), India has aligned itself with nations that promote responsible AI grounded in transparency, diversity, and inclusivity. India's stance in international forums has been cautious, emphasizing the need for human control in military AI applications and adherence to international laws like the Geneva Conventions. India’s approach aims to balance AI development with a focus on protecting individual privacy and upholding ethical standards.
However, India’s military applications of AI are still in the early stages of development, and while India participates in the dialogue on LAWS, it has not committed to a clear regulatory framework for AI in warfare. India's involvement in global governance forums like the GPAI reflects its intent to play an active role in shaping international standards, yet its domestic capabilities and AI readiness in the defense sector need further strengthening.
China’s Approach: AI for Strategic Dominance
In contrast, China’s AI strategy is driven by its pursuit of global dominance in technology and military power. China's "New Generation Artificial Intelligence Development Plan" (2017) explicitly calls for integrating AI across all sectors, including the military. This includes the development of autonomous systems that enhance China's military capabilities in surveillance, cyber warfare, and autonomous weapons. China's approach to AI governance emphasizes national security and technological leadership, with significant state investment in AI research, especially in defense.
While China participates in international AI discussions, it has been more reluctant to commit to restrictive regulations on LAWS. China's participation in forums like the ISO/IEC Joint Technical Committee for AI standards reveals its intent to influence international AI governance in ways that align with its strategic interests. China's reluctance to adopt stringent ethical constraints on military AI reflects its broader ambitions of using AI to achieve technological superiority, even if it means bypassing some of the ethical concerns raised by other nations.
The Need for Global AI Regulations in Warfare
The divergence between India and China’s positions underscores the complexities of establishing a universal framework for AI governance in military contexts. While India pushes for ethical AI, China's approach highlights the tension between technological advancement and ethical oversight. The risk of unregulated AI in warfare lies in the potential for escalation, as autonomous systems can make decisions faster than humans, increasing the risk of unintended conflicts.
International efforts, such as the CCW discussions, must reconcile these differing national interests while prioritizing global security. A comprehensive regulatory framework that ensures meaningful human control over AI systems, transparency in decision-making, and accountability for violations of international law is essential to mitigate the ethical risks posed by military AI.
Conclusion
The ethical dilemmas surrounding AI in warfare are vast, ranging from concerns about human accountability to the potential for indiscriminate violence. India’s cautious and ethical approach contrasts sharply with China’s strategic, technology-driven ambitions. The global community must work towards creating binding regulations that reflect both the ethical considerations and the realities of AI-driven military advancements. Only through comprehensive international cooperation can the risks of AI warfare be effectively managed and minimized.
inkskinned · 9 days ago
Text
i have chronic pain. i am neurodivergent. i understand - deeply - the allure of a "quick fix" like AI. i also just grew up in a different time. we have been warned about this.
15 entire years ago i heard about this. in my forensics class in high school, we watched a documentary about how AI-based "crime solving" software was inevitably biased against people of color.
my teacher stressed that AI is like a book: when someone writes it, some part of the author will remain within the result. the internet existed but not as loudly at that point - we didn't know that AI would be able to teach itself off already-biased Reddit threads. i googled it: yes, this bias is still happening. yes, it's just as bad if not worse.
i can't actually stop you. if you wanna use ChatGPT to slide through your classes, that's on you. it's your money and it's your time. you will spend none of it thinking, you will learn nothing, and, in college, you will piss away hundreds of thousands of dollars. you will stand at the podium having done nothing, accomplished nothing. a cold and bitter pyrrhic victory.
i'm not even sure students actually read the essays or summaries or emails they have ChatGPT pump out. i think it just flows over them and they use the first answer they get. my brother teaches engineering - he recently got fifty-three copies of almost-the-exact-same lab reports. no one had even changed the wording.
and yes: AI itself (as a concept and practice) isn't always evil. there's AI that can help detect cancer, for example. and yet: when i ask my students if they'd be okay with a doctor that learned from AI, many of them balk. it is one thing if they don't read their engineering textbook or if they don't write the critical-thinking essay. it's another when it starts to affect them. they know it's wrong for AI to broad-spectrum deny insurance claims, but they swear their use of AI is different.
there's a strange desire to sort of divorce real-world AI malpractice from "personal use". for example, is it moral to use AI to write your cover letters? cover letters are essentially only templates, and besides: AI is going to be reading your job app, so isn't it kind of fair?
i recently found out that people use AI as a romantic or sexual partner. it seems like teenagers particularly enjoy this connection, and this is one of those "sticky" moments as a teacher. honestly - you can roast me for this - but if it was an actually-safe AI, i think teenagers exploring their sexuality with a fake partner is amazing. it prevents them from making permanent mistakes, it can teach them about their bodies and their desires, and it can help their confidence. but the problem is that it's not safe. there isn't a well-educated, sensitive AI built specifically to help teens explore their hormones. it's just an internet-fed cycle. who knows what they're learning. who knows what misinformation they're getting.
the most common pushback i get involves therapy. none of us have access to the therapist of our dreams - it's expensive, elusive, and involves an annoying amount of insurance claims. someone once asked me: are you going to be mad when AI saves someone's life?
therapists are not just trained on the book, they're trained on patient management and helping you see things you don't see yourself. part of it will involve discomfort. i don't know that AI is ever going to be able to analyze the words you feed it and answer with a mind towards the "whole person" writing those words. but also - if it keeps/kept you alive, i'm not a purist. i've done terrible things to myself when i was at rock bottom. in an emergency, we kind of forgive the seatbelt for leaving bruises. it's just that chat shouldn't be your only form of self-care and recovery.
and i worry that the influence chat has is expanding. more and more i see people use chat for the smallest, most easily-navigated situations. and i can't like, make you worry about that in your own life. i often think about how easy it was for social media to take over all my time - how i can't have a tiktok because i spend hours on it. i don't want that to happen with chat. i want to enjoy thinking. i want to enjoy writing. i want to be here. i've already really been struggling to put the phone down. this feels like another way to get you to pick the phone up.
the other day, i was frustrated by a book i was reading. it's far in the series and is about a character i resent. i googled if i had to read it, or if it was one of those "in between" books that don't actually affect the plot (you know, one of those ".5" books). someone said something that really stuck with me - theoretically you're reading this series for enjoyment, so while you don't actually have to read it, one would assume you want to read it.
i am watching a generation of people learn they don't have to read the thing in their hand. and it is kind of a strange sort of doom that comes over me: i read because it's genuinely fun. i learn because even though it's hard, it feels good. i try because it makes me happy to try. and i'm watching a generation of people all lay down and say: but i don't want to try.
freewatermelon0 · 1 year ago
Text
Google is hiding the truth, playing with words to change the facts about what happened in the Nuseirat camp; it's literally censoring the Nuseirat Massacre.
m-albalawi · 9 months ago
Text
Our Daily Life in Gaza 💔
We search for water, firewood, food, electricity; we spend our day searching for the basics of life and trying to survive!! 😔
I'm talking to the human in your heart. Please, Help Save My Family To Survive 🙏💔
Vetted By @90-ghost , @riding-with-the-wild-hunt ✅
Every Donation, No Matter How Small, it Really Helps 😭
If you think we are joking about our lives, look away, but don't forget that we are human..🥀
wordsmith30 · 2 years ago
Text
You know what makes me the most upset about the use of AI in our culture? It's not just removing artists from art or devaluing human creativity -- it's treating people like they're disposable.
Oh, you're not that special. We have computers to do that now. If you died tomorrow, we have your image. We have your voice. We have your biometric data. We can just duplicate you, it's no problem. Who needs flesh and blood? Who needs agency and free thought? Who needs the human soul? You're just a tool. And when we're done with you, we'll just toss you aside and find someone else.
Creatives, listen to me, and listen to me good: you have a voice and it matters. There is no one in the history of the world who is exactly like you, in this time or this place. There is no one who thinks like you, acts like you, speaks like you, moves like you. There is nobody else built like you. Nobody else with your unique experiences and outlook of the world. You are a product of history, of culture, of art, of love, of pain, of possibility. Don't let them take that from you.
thedailyplatypics · 1 year ago
Text
ARTISTS, STOP POSTING TO DEVIANT ART
(Gen AI/BDS)
Learned today that DeviantArt is owned by Wix, an Israeli company listed under boycott by the BDS Palestinian human rights organization.
Under Wix, which acquired DeviantArt in 2017, DA has been pushing Israeli occupation propaganda and allowing generative AI to completely take over the platform and be sold on it.
It’s clear that under Wix, DeviantArt DOES NOT CARE whatsoever about the art or the artists it was originally created to cater towards. It only cares for profit.
Even if, for some reason, you are not well-versed in the current politics surrounding the Israeli Occupation and the erasure of Palestine, everyone can agree that Wix has changed DeviantArt for the worse, and the best-case scenario is that they sell it to someone who actually cares for art, not profit.
I absolutely adore to death so much of the art there, but for now I will stop posting my art there and I suggest that other artists do the same. They DO NOT DESERVE YOUR ART.
Please transition towards using alternatives like Tumblr, Insta, Twitter, and Newgrounds. Newgrounds especially is the best alternative to DeviantArt. Please suggest other alternatives as well.
progressive-memes · 9 months ago
Text
chromaherder · 2 years ago
Text
Mainstream sci-fi loves to insist on having drab-looking machines as tools of war and oppression, almost as a self-fulfilling prophecy. But what if, hear me out, we started considering a future with more humane AI and healthier relations to different modes of intelligence (i.e. the entire population of non-human beings on Earth)? 🤔
nando161mando · 13 days ago
Text
Never let his legacy die.
fortunaestalta · 1 year ago
Text
rubicon-art · 27 days ago
Text
I was overcome with the need to make this for some reason.
greekmythcomix · 5 months ago
Text
If you’re in the UK and a creative, it is vital that you read and respond to this public survey about copyright law and AI training.
AI technology must be very carefully regulated, both to ensure public safety and to maintain the rights of creators. This consultation seems to favour AI companies: it proposes giving them a broad copyright exception and puts the burden on creators to opt out, almost penalising them.
Below is a set of useful resources for responding to the survey, your MP, and your representatives. It closes at 23:59 on 25 February 2025 so we have 10 weeks to do it - LET’S GO
Resources by Ed Newton-Rex on BlueSky:
Find your MP: https://members.parliament.uk/members/Commons
Template letter to send to your MP: https://docs.google.com/document/d/1XtqaGRLcs6o4F9maphl8TRM6BDwZkNi-553Ky7H6FKQ
Template letter to send to your representatives: https://docs.google.com/document/d/1VTY6TkiOPF9Xc9AUMn7TzW8tGOR2dJ0euhbDrQQTu1I
Template letter to respond to the consultation: https://docs.google.com/document/d/12wpfkBnCZPJpVqch1pz3U4VfQ1sDlDJek3my6CqIfCk
Email address for consultation response: [email protected].
palinecrosis · 2 months ago
Text
“how are you anti ai but like dbh? did you even play the game?”
did you play the game? genuine question, how many of you have played dbh and the lesson you learned was “we need to embrace ai” because that is absolutely not what it’s about.
humans are the ones responsible for the sentience of androids. they’re the ones responsible for their slavery and creation. they’re the ones who made androids to serve them, to make their life easier. and when they fought back they regretted funding their creation. because now, their exploitation, previously aimed at humans, can’t be justified anymore.
people like ai because it allows them to be lazy, carefree. you don't have to learn how to draw, you don't need to refine your tools or your art style when you can just ask a program to generate a piece for you. you don't need to learn how to write, come up with prompts, spend years finding your style and fixing your vocabulary, go through phases of horrible and cringeworthy writing, because guess what? you can ask chatgpt to write it for you.
and when corporations discover that, they will use it to their advantage, replacing humans with ai. so 30 years down the line, when a machine enters your workforce, does your job 10x better than you and lands you homeless, of fucking course you're going to be angry and android-hating.
the issue that dbh addresses is (in that universe) blaming sentient ai for the evil that corporations commit. again, they created ai, they created it so that it has the possibility of being sentient, using it to do jobs no one wants to do, take it even further and make them do jobs (arguably) to replace marginalised people who need those jobs. so the “bad guy” in dbh aren’t the rightfully angry citizens, who have no concept or understanding of deviancy, and it’s not androids either, it’s fucking elijah kamski. and all the other fuckers at the top. they create infighting between workers to distract from class differences.
if ai became sentient it'd absolutely be morally wrong to mistreat them, because they have consciousness and emotions. being anti ai is being against narrow and generative ai, which is 1. bad for the environment and 2. theft!! not fucking hypothetical robots who possibly have feelings. improve your media literacy, people.
osteochondraldefect · 6 months ago
Text
Sweet reward for obeying commands
loki-zen · 7 months ago
Text
Cynical prediction (non-election-related):
One big eventual consequence of widespread genAI implementation is going to be the revelation that a lot of things we assumed were checked by more than one person before they went out to the general public never actually were, at least not in the detail you'd hope for. We've actually been relying on the diligence and competence of individuals who were never actually tested on, or rewarded for, displaying it.