#artificial intelligence regulation
Explore tagged Tumblr posts
Text
The AI genie is out of the bottle: Vice President Jagdeep Dhankhar stresses the need for regulation, warns of possible devastation
Jagdeep Dhankhar: In New Delhi on Friday, April 4, 2025, Vice President Jagdeep Dhankhar made an important and thought-provoking statement on the regulation of artificial intelligence (AI). Speaking at the launch of Rajya Sabha MP Sujeet Kumar's book "AI on Trial", he said that the right framework for AI will determine the direction our society takes in the future. His words drew the attention not only of technology experts but also of the general public…
#AI governance framework#AI on Trial book launch#AI risks and benefits#Artificial Intelligence regulation#citizen rights AI#cyber sovereignty India#digital dystopia warning#Jagdeep Dhankhar AI speech#responsible innovation AI#Vice President India AI
0 notes
Text
The Paradox of Goodness and Inhumanity: A Rebuttal to Power
Abstract The longstanding tension between those deemed “evil” and the “good” forces opposing them hinges on a cyclical narrative of inhumanity. This article explores the ethical dimensions of retaliation, questioning the logic of asymmetric morality. Drawing on the works of Foucault, Arendt, and contemporary researchers on power dynamics, we aim to deconstruct the conceptual framework that…
#anti-corruption strategies#anti-terrorism policies#arms control#artificial intelligence regulation#asylum issues#climate adaptation strategies#climate change diplomacy#climate change migration#climate finance#climate resilience#conflict prevention strategies#conflict resolution mechanisms#cross-border cooperation#cross-border employment policy#cross-border investments#cross-border relations#cross-border supply chains#cultural diplomacy#cultural preservation#cybersecurity policies#decolonization politics#democracy promotion#democratic governance#development aid#development cooperation frameworks#developmental economics#digital diplomacy#digital governance#digital trade policies#digital transformation strategies
0 notes
Text
California Poised to Further Regulate Artificial Intelligence by Focusing on Safety
Looking to cement the state near the forefront of artificial intelligence (AI) regulation in the United States, on August 28, 2024, the California State Assembly passed the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (SB 1047), also referred to as the AI Safety Act. The measure awaits the signature of Governor Gavin Newsom. This development comes effectively on…
#AI development#AI law#AI news#AI regulation#artificial intelligence regulation#business#California#California law#government#legal
0 notes
Note
Kaveh is like this close to committing a sin with Mehrak, like he's playing a really dangerous game. It's just a matter of time before Mehrak gets even more sentient when it's already come this far. I know Kaveh keeps it in check, but man... I hope they'll talk about this one day. Either that or Kaveh will always brush it off, as if he didn't get some ancient core to build it??? Also, I always love your thoughts, thanks for loving these two so much!
hiya! thank you for your ask! i'm so glad you enjoy my posts :") <3 Mehrak’s existence is so ??? funny to me. we have tighnari’s story quest detailing the akademiya’s ban on research into mechanical lifeforms, directly alongside kaveh building his own mechanical lifeform and parading it around, sending it on solo coffee retrieving missions whilst everyone in sumeru looks on smiling <3
Mehrak’s legality hasn’t been mentioned at all in-game as of now, and since it’s been used consistently - in kaveh’s hangout, a parade of providence, and now nahida’s birthday event - with not one mention of legality or potential trespasses, it seems that’s how things will stay. especially since Cyno has met/is aware of mehrak’s existence during the battle scene in a parade of providence (then again, cyno did meet the wanderer during this event, and yet in nahida’s birthday event it seems he’s only HEARD of the wanderer through sethos??). but even then, since Cyno trusts Tighnari with karkata’s continued existence, it’s likely not a stretch to say that to Cyno, Kaveh can be trusted with Mehrak’s existence (it’s all very iffy)
Mehrak’s existence, overall, has had little focus other than its usage in battle, its official introduction in a parade of providence, where kaveh stipulates it has low intelligence, and was built to assist him, as well as being incapable of talking back and giving him ‘attitude’ (implicitly comparing mehrak to alhaitham), and in kaveh’s hangout when he works on designing a building. It’s only in recent events, such as cyno’s second story quest, and now in nahida’s birthday event, that mehrak has gotten more mentions, and now a spotlight, which is all in relation to coffee, tying back to alhaitham and kaveh’s improved relationship (the coffee analysis will be in the updated essay finally!!). as of right now, overall, mehrak doesn’t appear to be a major focus
It might be strange for the game to mention now that mehrak has been an illegal creation all this time, unless it’s a significant plot point that has to be resolved, but if mehrak is further explored, like in the temple of silence for example (hoyoverse I am once again asking), then perhaps this collective ignoring of a crime occurring will be explained away, if mentioned at all? It’s interesting that tighnari says it might be possible that this ban is reversed in the future, but as for whether that will actually happen, and the implications of this, aren’t clear
Mehrak’s accepted existence in general poses so many questions. I’m interested in the specifics of the ban, like does it depend on the autonomy of the machine in question? Abattouy aimed to make Karkata essentially human, capable of individual thought, processing, emotion, and conversation, which definitively crosses into the intersection of mechanical and biological life that caused the Akademiya to ban this type of research in the first place. So if a machine is able to act on its own, irrespective of human interference, then this is what the akademiya would want to prevent
In mehrak’s case, it’s unclear as to what its limits are, but from what has been shown so far, it seems that mehrak can only act on kaveh’s commands and when held in battle – it’s uncertain rn whether mehrak can act independently of this, but as kaveh invented it to only assist in certain matters, it’s doubtful. But then again, we don’t have a great scope of whether it can experience emotions, as it has shown signs of being distressed in a parade of providence when kaveh states that it can’t talk back, and when being scolded by kaveh in nahida’s birthday event
if mehrak has limited intelligence, it's interesting to compare mehrak with karkata. abattouy was attempting to make karkata understand human language and respond in order to hold a conversation, which proved impossible, whereas although mehrak only speaks in beeps, kaveh is shown to have a thorough understanding of what it’s saying? Mehrak can be programmed to recognise people’s voices, but seemingly also language, as mehrak can obey spoken commands, which is what abattouy tried to accomplish but was unable to with modern technology.
Mehrak, on the other hand, understands kaveh’s basic requests – which is made even funnier in kaveh’s old sketchbook, where he says that more than anything he really wants mehrak to understand what he’s saying. he got his wish but at what cost???
Mehrak being made from ancient technology, belonging to that of king deshret’s civilisation, offers many interesting paths that could be explored in future events, as besides the primal constructs roaming around, the puzzles in the desert, and now the temple of silence, no technology really exists from that time. Someone commented that mehrak’s presence in nahida’s birthday event, in conjunction with the event being based around ancient technology with the wedjat eye, could be highlighting mehrak’s irregularity in modern day sumeru – potentially foreshadowing for a future event that could further expand upon mehrak? If this is the case, I am all for it, there are so many questions concerning kaveh’s little light <3
#haikaveh#kavetham#alhaitham#kaveh#genshin impact#in general my head is so empty when it comes to mehrak so thank you for giving me a chance to explore this#there's this contrast of ancient technology and modern technology that is interesting to me as well#since ancient technology has already achieved low intelligence life (seemingly) without cruel experiments or danger that comes with#modern attempts of creating artificial life and is why the akademiya banned this research direction#ancient technology exists separately to modern regulations so in that case would there be a distinction in the ban between ancient#technology and modern technology? i feel the answer is no but also mehrak should legally be dismantled so i'm not sure what is happening#maybe the next sumeru event will be jailbreaking kaveh and mehrak
73 notes
·
View notes
Text
sign this petition!!
Please sign this petition before July 2nd!! Artificial intelligence comes with a lot of drawbacks, from its environmental cost to it stealing people’s art and even perpetuating racism and discrimination. Regulations are basically the only way to make sure AI is used in beneficial rather than harmful ways, and by banning people from creating new regulations and even deeply hindering the ones that currently exist, the problems caused by AI won’t just fail to improve but will likely get worse. Furthermore, artificial intelligence is also being actively promoted for use in the government, which in tandem with the regulation ban will essentially throw national security out the window. People tend to forget basic internet safety on forums, but almost everyone forgets it when asking an AI to do whatever tedious task needs to be done. Even if you don’t use AI or aren’t an artist, you’re still affected by this, so please consider signing. Again, you can click this link or scan the QR code, every person helps!! Thanks!!
#artificial intelligence#artists against ai#current events#petition#civic action#Stop the ban on AI regulations
32 notes
·
View notes
Text
Feb 14 (Reuters) - The rise of "pig butchering" scams and the increasing use of generative artificial intelligence likely lifted revenues from crypto scams to a record high in 2024, according to blockchain analytics firm Chainalysis.
Revenue from pig butchering scams, where perpetrators cultivate relationships with individuals and convince them to participate in fraudulent schemes, increased nearly 40% in 2024 from the previous year, the firm estimated in a report published on Thursday.
Revenue in 2024 from crypto scams was at least $9.9 billion, although the figure could rise to a record high of $12.4 billion once more data becomes available, it said.
"Crypto fraud and scams have continued to increase in sophistication," Chainalysis researchers said.
The company pointed to marketplaces that support pig butchering operations and the use of GenAI as factors making it easier and cheaper for scammers to expand operations.
https://www.reuters.com/technology/crypto-scams-likely-set-new-record-2024-helped-by-ai-chainalysis-says-2025-02-14/
#crypto#reuters#bitcoin#ethereum#money#finance#economy#ai#artificial intelligence#politics#political#us politics#news#cash#digital currency#bitlocker#digita wallet#crypto exchange#blockchain#financial#economic#economics#non-fungible token#NFT#stablecoin#virtual currency#bitcoin mining#government#regulation#scams
15 notes
·
View notes
Text
Republicans attempt to mandate deregulation of AI for tech tycoons and the fossil fuel industry.
They want to stop all the protections and oversight of automation completely in the country, and handcuff state and local efforts to protect communities and save lives.
404 Media - Republicans Try to Cram Ban on AI Regulation Into Budget Reconciliation Bill
Emanuel Maiberg · May 12, 2025 at 10:09 AM
“...no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10 year period beginning on the date of the enactment of this Act,” says the text of the bill introduced Sunday night by Congressman Brett Guthrie of Kentucky, Chairman of the House Committee on Energy and Commerce. That language of the bill, how it goes on to define AI and other “automated systems,” and what it considers “regulation,” is broad enough to cover relatively new generative AI tools and technology that has existed for much longer. In theory, that language will make it impossible to enforce many existing and proposed state laws that aim to protect people from and inform them about AI systems. (emphasis added)
This is straight out of the anti public health and climate denial legislative playbook.
Unfortunately, in Pennsylvania, Governor Josh Shapiro is forcing AI tools into state government, and I'm sure employees feel pressured to use such "tools" even if they do damage, because they want to keep their jobs by meeting their deadlines. The incentive is usually to get things done, not necessarily to do things right — and chatbots have the uncanny ability to make things look and sound believable even when they're incorrect. In my opinion, this technology belongs nowhere near the people's business.
My letter to reps:
AI and chatbots absolutely need regulation and oversight. And if the feds won't provide it, the states have a right and a responsibility to do so. Handcuffing us from protecting anyone is like saying you want dirty drinking water. I don't want this everywhere. AI is an out-of-control financial bubble wasting fossil fuel. It outputs inaccurate information, causing threats to patient safety in healthcare settings, and has directly led to illness and even death. An AI-generated foraging book told people to eat poison mushrooms. Fake children's books are full of nonsense. It can't even do what they say it can. It seems only able to create convincing-sounding BS while using energy, water, and land that communities desperately need. And it's functioning as an Oracle of Chatbot, where unwitting people are being told they're connected to a spirit in the universe. Some would surely call this blasphemy. The pope himself has specifically called out the need to defend human dignity against AI. So why do politicians think they should stifle the freedom to protect communities?
Please feel free to copy or repurpose for your own letters to reps.
Pope Leo XIV throws down the gauntlet against AI hype. At his first "working meeting", the pope outlined his priorities to include defending "human dignity" against the threat of AI, citing the dehumanizing effects of the industrial revolution. Chloe Humbert May 13, 2025
Republican state legislators in PA float punitive control of local governments forcing favour to fossil fuel. Republican lawmakers want to punish municipalities attempting to protect local residents and public safety, and impede their compensation for environmental destruction by gas drillers. Chloe Humbert Oct 24, 2024
Lying AI should not be doing the people's business or science. Lives are at stake and the U.S. government and scientific scholars are buying into tech hype boondoggles. Is it corruption, incompetence, or sabotage? Chloe Humbert Mar 22, 2024
More references can be found on these:
The oracle of chatbot phenomenon is not benign. Don't Wait For Everybody - Episode 022 Chloe Humbert May 07, 2025
It's imperative to inform politicians about tech scams that target their ideological hopes with false promises. Crypto mogul donors lure politicians to betray their communities by saying they're doing it to benefit their communities. It's a trick. Chloe Humbert Apr 14, 2025
#deregulation#republicans#ai hype#chatbots#automation#regulation and deregulation of industry#regulations#government#politics#misinformation#laws#public safety#false promises#crypto#oracle of chatbot#LLMs#AI#tech tycoons#fossil fuel#fossil fuel industry#artificial intelligence#politicians#scams#state government
7 notes
·
View notes
Text
The Future of Justice: Navigating the Intersection of AI, Judges, and Human Oversight
One of the main benefits of AI in the justice system is its ability to analyze vast amounts of data and identify patterns that human judges may not notice. For example, the use of AI in the U.S. justice system has reportedly helped reduce the number of misjudgments, as AI-powered tools were able to flag potential biases in the data and make more accurate recommendations.
However, the use of AI in the justice system also raises significant concerns about the role of human judges and the need for oversight. As AI takes on an increasingly important role in decision-making, judges must find the balance between trusting AI and exercising their own judgement. This requires a deep understanding of the technology and its limitations, as well as the ability to critically evaluate the recommendations provided by AI.
The European Union's approach to AI in justice provides a valuable framework for other countries to follow. The EU's framework emphasizes the need for human oversight and accountability and recognizes that AI is a tool that should support judges, not replace them. This approach is echoed in the EU's General Data Protection Regulation (GDPR), which imposes transparency and accountability requirements on automated decision-making.
The use of AI in the justice system also comes with its pitfalls. One of the biggest concerns is the possibility of bias in AI-generated recommendations. When AI is trained with skewed data, it can perpetuate and even reinforce existing biases, leading to unfair outcomes. For example, a study by the American Civil Liberties Union found that AI-powered facial recognition systems are more likely to misidentify people of color than white people.
To address these concerns, it is essential to develop and implement robust oversight mechanisms to ensure that AI systems are transparent, explainable and accountable. This includes conducting regular audits and testing of AI systems and providing clear guidelines and regulations for the use of AI in the justice system.
In addition to oversight mechanisms, it is also important to develop and implement education and training programs for judges and other justice professionals. This will enable them to understand the capabilities and limitations of AI, as well as the potential risks and challenges associated with its use. By providing judges with the necessary skills and knowledge, we can ensure that AI is used in a way that supports judges and enhances the fairness and accountability of the justice system.
Human Centric AI - Ethics, Regulation. and Safety (Vilnius University Faculty of Law, October 2024)
youtube
Friday, November 1, 2024
#ai#judges#human oversight#justice system#artificial intelligence#european union#general data protection#regulation#bias#transparency#accountability#explainability#audits#education#training#fairness#ai assisted writing#machine art#Youtube#conference
6 notes
·
View notes
Text
I’m probably going to take an AI astronomy course next year (2026-27) and I think I might have to leave tumblr for my mental health. Misunderstanding of AI is rampant here and the arguments are bullshit.
For clarification: ChatGPT, Gemini, etc are bad because they’re trained on stolen data. Which is legal because AI is unregulated. Y’all are fighting the wrong cause.
I really don’t want to leave this site because it’s usually really good about pivoting once more information comes to light, but that really hasn’t happened here. I can’t keep up with the contradiction of the bombardment of messages that I’m a horrible person in a field destroying all artists (including myself; I’m disabled, and my income is variable) while also doing important research in astronomy (no, not astrology or space travel. My dream job is to study PHOs — asteroids that could obliterate us).
#weird strange and awful science#artificial intelligence#potentially hazardous objects#astronomy#regulate ai#[artificial intelligence]#<-in brackets because tumblr won’t let me tag it
5 notes
·
View notes
Text
Upholding Fundamental Rights in the Age of Intelligent Machines [🤖]
2 notes
·
View notes
Text
AI CEOs Admit 25% Extinction Risk… WITHOUT Our Consent!
AI leaders are acknowledging the potential for human extinction due to advanced AI, but are they making these decisions without public input? We discuss the ethical implications and the need for greater transparency and control over AI development.
#ai#artificial intelligence#ai ethics#tech ethics#ai control#ai regulation#public consent#democratic control#super intelligence#existential risk#ai safety#stuart russell#ai policy#future of ai#unchecked ai#ethical ai#superintelligence#ai alignment#ai research#ai experts#dangers of ai#ai risk#uncontrolled ai#uc berkeley#computer science
2 notes
·
View notes
Text
AI Safety Between Scylla and Charybdis and an Unpopular Way Forward
I am unabashedly a technology optimist. For me, however, that means making choices for how we will get the best out of technology for the good of humanity, while limiting its negative effects. With technology becoming ever more powerful there is a huge premium on getting this right as the downsides now include existential risk.
Let me state upfront that I am super excited about progress in AI and what it can eventually do for humanity if we get this right. We could be building the capacity to turn Earth into a kind of garden of Eden, where we get out of the current low energy trap and live in a World After Capital.
At the same time there are serious ways of getting this wrong, which led me to write a few posts about AI risks earlier this year. Since then the AI safety debate has become more heated with a fair bit of low-rung tribalism thrown into the mix. To get a glimpse of this one merely needs to look at the wide range of reactions to the White House Executive Order on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence. This post is my attempt to point out what I consider to be serious flaws in the thinking of two major camps on AI safety and to mention an unpopular way forward.
First, let’s talk about the “AI safety is for wimps” camp, which comes in two forms. One is the happy-go-lucky view represented by Marc Andreessen’s “Techno-Optimist Manifesto” and also his preceding Tweet thread. This view dismisses critics who dare to ask social or safety questions as luddites and shills.
So what’s the problem with this view? Dismissing AI risks doesn’t actually make them go away. And it is extremely clear that at this moment in time we are not really set up to deal with the problems. On the structural risk side we are already at super extended income and wealth inequality. And the recent AI advances have already been shown to further accelerate this discrepancy.
On the existential risk side, there is recent work by Kevin Esvelt et al. showing how LLMs can broaden access to pandemic agents, and by Jeffrey Ladish et al. demonstrating how cheap it is to remove safety training from an open source model with published weights. This type of research clearly points out that as open source models rapidly become more powerful, they can be leveraged for very bad things, and that it continues to be super easy to strip away the safeguards that people claim can be built into open source models.
This is a real problem. And people like myself, who have strongly favored permissionless innovation, would do well to acknowledge it and figure out how to deal with it. I have a proposal for how to do that below.
But there is one intellectually consistent way to continue full steam ahead that is worth mentioning. Marc Andreessen cites Nick Land as an inspiration for his views. Land in Meltdown wrote the memorable line “Nothing human makes it out of the near-future”. Embracing AI as a path to a post-human future is the view embraced by the e/acc movement. Here AI risks aren’t so much dismissed as simply accepted as the cost of progress. My misgiving with this view is that I love humanity and believe we should do our utmost to preserve it (my next book which I have started to work on will have a lot more to say about this).
Second, let’s consider the “We need AI safety regulation now” camp, which again has two subtypes. One is “let regulated companies carry on” and the other is “stop everything now.” Again both of these have deep problems.
The idea that we can simply let companies carry on with some relatively mild regulation suffers from three major deficiencies. First, this has the risk of leading us down the path toward highly concentrated market power and we have seen the problems of this in tech again and again (it has been a long standing topic on my blog). For AI market power will be particularly pernicious because this technology will eventually power everything around us and so handing control to a few corporations is a bad idea. Second, the incentives of for-profit companies aren’t easily aligned with safety (and yes, I include OpenAI here even though it has in theory capped investor returns but also keeps raising money at ever higher valuations, so what’s the point?).
But there is an even deeper third deficiency of this approach, and it is best illustrated by the second subtype, which essentially wants to stop all progress. At its most extreme this is a Ted Kaczynski anti-technology vision. The problem with this, of course, is that it requires equipping governments with extraordinary power to prevent open source / broadly accessible technology from being developed. And this is an incredible, unacknowledged implication of much of the current pro-regulation camp.
Let me just give a couple of examples. It has long been argued that code is speech and hence protected by first amendment rights. We can of course go back and revisit what protections should be applicable to “code as speech,” but the proponents of the “let regulated companies go ahead with closed source AI” don’t seem to acknowledge that they are effectively asking governments to suppress what can be published as open source (otherwise, why bother at all?). Over time government would have to regulate technology development ever harder to sustain this type of regulated approach. Faster chips? Government says who can buy them. New algorithms? Government says who can access them. And so on. Sure, we have done this in some areas before, such as nuclear bomb research, but these were narrow fields, whereas AI is a general purpose technology that affects all of computation.
So this is the conundrum. Dismissing AI safety (Scylla) only makes sense if you go full on post humanist because the risks are real. Calling for AI safety through oversight (Charybdis) doesn’t acknowledge that way too much government power is required to sustain this approach.
Is there an alternative option? Yes but it is highly unpopular and also hard to get to from here. In fact I believe we can only get there if we make lots of other changes, which together could take us from the Industrial Age to what I call the Knowledge Age. For more on that you can read my book The World After Capital.
For several years now I have argued that technological progress and privacy are incompatible. The reason for this is entropy, which means that our ability to destroy will always grow faster than our ability to (re)build. I gave a talk about it at the Stacks conference in Berlin in 2018 (funny side note: I spoke right after Edward Snowden gave a full throated argument for privacy) and you can read a fuller version of the argument in my book.
The only solution other than draconian government is to embrace a post privacy world. A world in which it can easily be discovered that you are building a super dangerous bio weapon in your basement before you have succeeded in releasing it. In this kind of world we can have technological progress but also safeguard humanity – in part by using aligned super intelligences to detect what is happening. And yes, I believe it is possible to create versions of AGI that have deep inner alignment with humanity that cannot easily be removed. Extremely hard yes, but possible (more on this in upcoming posts on an initiative in this direction).
Now you might argue that a post privacy world also requires extraordinary state power but that's not really the case. I grew up in a small community where if you didn't come out of your house for a day, the neighbors would check in to make sure you were OK. Observability does not require state power per se. Much of this can happen simply if more information is default public. And so regulation ought to aim at increased disclosure.
We are of course a long way away from a world where most information about us could be default public. It will require massive changes from where we are today to better protect people from the consequences of disclosure. And those changes would eventually have to happen everywhere that people can freely have access to powerful technology (with other places opting for draconian government control instead).
Given that the transition which I propose is hard and will take time, what do I believe we should do in the short run? I believe that a great starting point would be disclosure requirements covering training inputs, cost of training runs, and powered by (i.e. if you launch say a therapy service that uses AI you need to disclose which models). That along with mandatory API access could start to put some checks on market power. As for open source models I believe a temporary voluntary moratorium on massively larger more capable models is vastly preferable to any government ban. This has a chance of success because there are relatively few organizations in the world that have the resources to train the next generation of potentially open source models.
Most of all though we need to have a more intellectually honest conversation about risks and how to mitigate them without introducing even bigger problems. We cannot keep suggesting that these are simple questions and that people must pick a side and get on with it.
7 notes
·
View notes
Text
3 notes
·
View notes
Text
In a silicon valley, throw rocks. Welcome to my tech blog.
Antiterf antifascist (which apparently needs stating). This sideblog is open to minors.
Liberation does not come at the expense of autonomy.
* I'm taking a break from tumblr for a while. Feel free to leave me asks or messages for when I return.
Frequent tags:
#tech#tech regulation#technology#big tech#privacy#data harvesting#advertising#technological developments#spyware#artificial intelligence#machine learning#data collection company#data analytics#dataspeaks#data science#data#llm#technews
2 notes
·
View notes
Text
New Post has been published on Books by Caroline Miller
New Post has been published on https://www.booksbycarolinemiller.com/musings/the-revolution-of-the-species/
The Revolution Of The Species
Senator John Fetterman (D) recently shared this observation with the public: you all should know that America is not sending its best and brightest to Washington, D.C. Congressional in-fighting and scandals among the elected elite support the senator’s view. Bureaucrats add to the confusion. As specialists in their fields, they can run circles around the people’s representatives. For example, while Congress squabbles about sending money to support Ukraine’s war, the Secretary of the Treasury, Janet Yellen, proposes that President Joe Biden bypass the government’s legislative branch and delegate Russia’s frozen assets to Volodymyr Zelenskyy.

Adding to the fog is technology, an industry politicians little know or understand. As a result, innovators in Silicon Valley have pursued Artificial Intelligence (AI) unfettered, to a degree that it has become as great a danger to us as the atomic bomb. In 2018, the Brookings Institute issued a report on the benefits and dangers of AI and provided recommendations to ensure the technology did no harm. It collected dust like most reports. But now, five years later, tech giants have come running to Congress seeking regulations, fearing they have released an evil genie from its bottle and hoping to spread the blame.

While the tech world seeks legal protection from the potential damage their invention can do, the rest of us should consider what human traits these innovators have passed along to their powerful machines. Given our current capacity to blow up the planet’s resources, including polluting its air, what could go wrong? The advent of AI will alter our lives, no doubt, but it won’t create a blank slate upon which to build our utopian dream. As historian Timothy Snyder warns, we can’t avoid dragging into our new world the debris of the past. Economic inequality will be one such piece of debris, and should social mobility die, the scholar predicts, democracy [will] give way to oligarchy, opening the door to tyranny.
Donald Trump has given us a glimpse of that future, a society where citizens are encouraged to sleepwalk through their existence, obeying their leaders without question. What these sheep mustn’t see, says Snyder, is that most of those who held power in the past will continue to hold it in the future, making changes wrought by insurrections or revolutions largely an illusion.

True, the technological revolution has brought a world of information to our fingertips, but the price has been the loss of our privacy: data that the oligarchs of AI gather and sell for their immense profit. Elon Musk is one of these. Having accumulated much of the world’s capital, he imagines he owns the rest of us and dares to wade into international politics, changing the course of our lives without the authority of a single vote cast at the ballot box.

Such hubris leaves us to ponder the legacy of these innovators. They have given us convenience and access to endless information, but they are purveyors of disinformation and deepfakes too. By these means, society finds itself not merely divided but fractured, to a degree that makes determining the public good seem impossible. Will their invention, AI, come to sense the frailty of our species? As repositories of all that we know, will these machines see how we have dehumanized ourselves by our obsession with money, pleasure, and the pursuit of war? If so, will these lungless servants become our masters, caring nothing about us and our environment? I doubt they will miss the meadowlark’s song.

Forgive these dystopian questions, but it’s time to consider our status as naked apes. The universe takes little notice of us, and Nature appears to be turning its back on our species. Or perhaps we were the first to turn away, preferring to focus on ourselves and the petty differences in our religions, the color of our skin, and our varying lifestyles.
When inconsequential variations like these become matters of life and death, are we worthy of respect even from our miraculous machines? More likely, they will judge us against the other creatures on the planet and find we are not the best and brightest.

“I must say that I have rarely seen a community come together in order to meet a common need in a manner as beautiful as that of a handful of birds at a feeder.” – Craig D. Lounsbrough
#architects of AI#artificial intelligence#Brookings Institute on AI#Congressional scandals#Craig D. Lounsbrough#dystopian question on AI#Elon Musk#Funding Ukraine war#Janet Yellen#regulation of AI#Senator John Fetterman#Timothy Snyder
2 notes
·
View notes
Text
In 2022, 2.2 billion people didn't have access to safely managed drinking water (source). You'd think doing something about that would be our priority, but no.
(Btw, here's the full article by The Standard shown in the screenshot. It's worth checking out for more details, including a few measures that tech companies could or are planning to take to tackle the problem.)

#another reason why ai use needs to be properly regulated#people need that water more than tech companies#artificial intelligence#ai issues#social justice#environment#sustainable development goals#sdg6#drinking water
121K notes
·
View notes