#friendly AI
Text
The allure of speed in technology development is a siren’s call that has led many innovators astray. “Move fast and break things” is a mantra that has driven the tech industry for years, but when applied to artificial intelligence, it becomes a perilous gamble. The rapid iteration and deployment of AI systems without thorough vetting can lead to catastrophic consequences, akin to releasing a flawed algorithm into the wild without a safety net.
AI systems, by their very nature, are complex and opaque. They operate on layers of neural networks that mimic the human brain’s synaptic connections, yet they lack the innate understanding and ethical reasoning that guide human decision-making. The haste to deploy AI without comprehensive testing is akin to launching a spacecraft without ensuring the integrity of its navigation systems. Error is not merely possible; it is inevitable.
The pitfalls of AI are numerous and multifaceted. Bias in training data can lead to discriminatory outcomes, while lack of transparency in decision-making processes can result in unaccountable systems. These issues are compounded by the “black box” nature of many AI models, where even the developers cannot fully explain how inputs are transformed into outputs. This opacity is not merely a technical challenge but an ethical one, as it obscures accountability and undermines trust.
To avoid these pitfalls, a paradigm shift is necessary. The development of AI must prioritize robustness over speed, with a focus on rigorous testing and validation. This involves not only technical assessments but also ethical evaluations, ensuring that AI systems align with societal values and norms. Techniques such as adversarial testing, where AI models are subjected to challenging scenarios to identify weaknesses, are crucial. Additionally, the implementation of explainable AI (XAI) can demystify the decision-making processes, providing clarity and accountability.
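As a minimal, hypothetical sketch of what adversarial testing can look like in practice (the model, weights, features, and threshold below are invented stand-ins, not any real system), one can probe a toy decision model with small input perturbations and collect the ones that flip its output:

```python
# Hypothetical toy model and probe -- illustrative only.

def toy_model(features):
    """A stand-in decision model: weighted sum of features against a threshold."""
    weights = [0.5, 0.3, 0.2]
    score = sum(w * f for w, f in zip(weights, features))
    return "approve" if score >= 0.5 else "deny"

def adversarial_probe(model, features, step=0.4):
    """Try single-feature perturbations; collect any that flip the decision."""
    baseline = model(features)
    flips = []
    for i in range(len(features)):
        for delta in (-step, step):
            perturbed = list(features)
            perturbed[i] += delta
            if model(perturbed) != baseline:
                flips.append((i, delta))
    return baseline, flips

baseline, flips = adversarial_probe(toy_model, [0.7, 0.6, 0.6])
print(baseline, flips)  # the most heavily weighted feature is the fragile one
```

Even this toy probe surfaces which inputs the model is most brittle about; real adversarial testing applies the same idea at scale, with far richer perturbations.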
Moreover, interdisciplinary collaboration is essential. AI development should not be confined to the realm of computer scientists and engineers. Ethicists, sociologists, and legal experts must be integral to the process, providing diverse perspectives that can foresee and mitigate potential harms. This collaborative approach ensures that AI systems are not only technically sound but also socially responsible.
In conclusion, the reckless pursuit of speed in AI development is a dangerous path that risks unleashing untested and potentially harmful technologies. By prioritizing thorough testing, ethical considerations, and interdisciplinary collaboration, we can harness the power of AI responsibly. The future of AI should not be about moving fast and breaking things, but about moving thoughtfully and building trust.
#furtive#AI#skeptic#skepticism#artificial intelligence#general intelligence#generative artificial intelligence#genai#thinking machines#safe AI#friendly AI#unfriendly AI#superintelligence#singularity#intelligence explosion#bias
8 notes
Text
Remember, it's never too late to pick up a pencil and learn. It may not be easy at first, but trust me, it'll be worth it! Like, I know my art isn't "perfect" but I try, and that's what counts! "It's the journey, not the destination"
And for any of you AI bros out there, you don't have to steal, you can embrace what it means to be human and create!
#Plus#NOT making AI images is better for the environment#since generative AI is really bad for the earth#so drawing is more eco friendly too :)#sonic fanart#sonic fandom#amy rose#metal sonic#Never really tried doing sonic fanart before lol#anti ai#human made art#sth fanart#sega sonic
840 notes
Text
psyncer clover au ough
#ailen like ellen the jellyfish#clover is already an agent so this is totally possible imo#tried to keep the gyaru spirit of clover while still looking fight scene friendly#peep ailen looking like alice LOL girlfrens..#zero escape#ai the somnium files#ze aitsf au#my art
97 notes
Text
friendly reminder!! human art—mii art, in this case—is infinitely better!!! 💜🫶✏
————
still ver:
#soundleer's art#soundleer's miis#miis#just made a lil doodle and felt inspired to make a simple gif using only 4 frames (plus 6 duplicates)#hey friendly reminder that i will never support generative ai art! no slop matches our power!!#some ppl should be ashamed of calling themselves a human being like just pick up a pencil please xoxo#anyway im happy with how this piece turned out! i love experimenting things with medibang hyeee#ay it me Leer
153 notes
Text
The easiest and surest way of telling if something was written by ChatGPT is that it always has to wrap up whatever it writes with some kind of conclusion (con un moño, "with a bow," as I say), and it's VERY, VERY conspicuous; there is ALWAYS a conclusion. It's always "in summary/in conclusion/overall/stands as a testament". It NEVER, EVER writes anything without some sort of conclusion, and even if you ask it not to, it always tries to somehow shoehorn in some sort of conclusion, summary, moral, or happy ending (if you ask it for prose writing); it's deeply coded into it. Another thing: it is always wholesome, eager to please, and sickeningly optimistic, like the most eager customer service rep of all time (because it was designed for that and really nothing else). It is almost impossible to get it to write anything cynical; it's really almost funny how consistent it is.
So, vague, cynical, esoteric, incomprehensible, resentful, and rambling pointless rants are the sign of something written by a real human.
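For fun, the heuristic above can be sketched in a few lines of Python (the phrase list is illustrative, and this is nowhere near a reliable detector):

```python
# A playful sketch of the "always ends with a conclusion" heuristic.
# Illustrative only -- not a real AI-text detector.

WRAP_UP_PHRASES = ("in summary", "in conclusion", "overall", "ultimately")

def ends_with_wrap_up(text):
    """True if the last sentence opens with a stock conclusion phrase."""
    sentences = [s.strip() for s in text.replace("!", ".").split(".") if s.strip()]
    if not sentences:
        return False
    last = sentences[-1].lower()
    return last.startswith(WRAP_UP_PHRASES) or "stands as a testament" in last

print(ends_with_wrap_up("The code works. In conclusion, testing matters."))
print(ends_with_wrap_up("vague resentful rambling with no point whatsoever"))
```

(Note that `str.startswith` accepts a tuple of prefixes, which keeps the check compact.)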
#cosas mias#I believe the developers insisted hard into making it have a conclusion all the time to prevent it to rambling on#which ironically makes it hostile bland and less human-friendly to me#please do not add resentful and rambling pointless anti-AI rants here I know they are a sign of humanity but they're really tiresome#feel free to be cynical and rambling about anything else though
174 notes
Text
A Masterpost About Long-Acting Reversible Contraceptives (LARCs)
During the upcoming presidency, it is likely that people in the US will lose many options that keep them from getting pregnant (contraceptives). The right-wing Project 2025 is against birth control pills, abortion, emergency contraception, and the government-provided health insurance ("Obamacare," Medicaid, and Medicare) that helps people afford these.
If you or your partner are concerned about the possibility of losing access to those options soon, you can ask your doctor or Planned Parenthood about getting a Long-Acting Reversible Contraceptive (LARC). The two kinds of LARCs are IUDs and the implant. If you get a LARC right now, it can protect you for years, without you having to do anything to maintain it. A LARC isn't permanent, so you can get rid of it if you later decide that you're ready to have a baby.
Hormonal Intrauterine Device: 3, 5, or 8 years of protection, depending on brand
An IUD is a T-shaped object that a nurse or doctor puts into your uterus. It's tiny, just a little more than an inch. The procedure for getting an IUD isn't surgery, it lasts just a few minutes, and it goes much better if you ask for an anti-anxiety medicine and the right type of painkiller.
Hormonal IUDs work because they slowly release progestin. That's the main hormone in birth control pills. Like pills, they can make your periods get lighter or stop, which is helpful for people who need to get rid of cramps and PMS.
Of the hormonal IUD brands in the US, the FDA currently approves of using Kyleena for up to five years, Liletta for eight, Mirena for eight, and Skyla for three. Kyleena and Skyla are the smallest and therefore the easiest to insert.
I have more info in my tags about IUDs.
Copper IUDs: 12 years of protection
The other type of IUD is a copper IUD. Instead of changing your hormones, it works because copper makes the uterus unfriendly to sperm. Another difference is that this kind can make your periods heavier. Its brand name is Paragard. The FDA approves of using it for ten years, but studies show it's still effective at twelve years or longer. More info in my tags.
The birth control implant: 5 years of protection
It's a rod the size of a matchstick. A nurse or doctor uses an applicator to put it under your skin in your arm. There, it will slowly release progestin to protect you from getting pregnant. It can make your periods get lighter or stop. The FDA approves of using it for three years, but two separate studies show it's still 100% effective five years later. Its brand name is Nexplanon, which has improvements over the older Implanon, such as being visible on X-ray. More info in my tags.
Some honorable mentions
There are some other contraceptives that last a long time but aren't considered LARCs. The diaphragm and the cervical cap are two kinds of plastic cap that you put on your cervix each time before sex, and you can keep using the same one for two years. The birth control ring, Annovera, lasts one year. Each injection of the birth control shot, Depo-Provera, lasts three months.
Only barrier methods such as condoms, internal condoms, and dental dams can protect against sexually transmitted infections. The right wing wants to stop people from getting condoms, too. That's another problem, but LARCs can help us get through the next four years without unplanned pregnancies.
#contraceptive#contraception#IUD#birth control#Project 2025#intrauterine device#long-acting reversible contraceptive#LARC#birth control implant#Mirena#Nexplanon#Paragard#sex education#hormonal IUD#copper IUD#US politics#relevant for people who can get pregnant#please reblog and circulate this widely; you're also welcome to reblog from my older posts on these topics#if you don't want to see this content from my blog: i always tag thoroughly so you can blacklist the tags 'sex education' etc#rated PG-13#no-ai#screen reader friendly#menstruation#menstrual suppression#AFAB#relevant for transgender men and trans masculine nonbinary people and others on the female to male spectrum#relevant for cisgender women and transgender men and others who were assigned female at birth#relevant for people in relationships where someone could get pregnant#relevant for people who menstruate#original post
60 notes
Text
The gustatory system, a marvel of biological evolution, is a testament to the intricate balance between speed and precision. In stark contrast, the tech industry’s mantra of “move fast and break things” has proven to be a perilous approach, particularly in the realm of artificial intelligence. This philosophy, borrowed from the early days of Silicon Valley, is ill-suited for AI, where the stakes are exponentially higher.
AI systems, much like the gustatory receptors, require a meticulous calibration to function optimally. The gustatory system processes complex chemical signals with remarkable accuracy, ensuring that organisms can discern between nourishment and poison. Similarly, AI must be developed with a focus on precision and reliability, as its applications permeate critical sectors such as healthcare, finance, and autonomous vehicles.
The reckless pace of development, akin to a poorly trained neural network, can lead to catastrophic failures. Consider the gustatory system’s reliance on a finely tuned balance of taste receptors. An imbalance could result in a misinterpretation of flavors, leading to detrimental consequences. In AI, a similar imbalance can manifest as biased algorithms or erroneous decision-making processes, with far-reaching implications.
To avoid these pitfalls, AI development must adopt a paradigm shift towards robustness and ethical considerations. This involves implementing rigorous testing protocols, akin to the biological processes that ensure the fidelity of taste perception. Just as the gustatory system employs feedback mechanisms to refine its accuracy, AI systems must incorporate continuous learning and validation to adapt to new data without compromising integrity.
Furthermore, interdisciplinary collaboration is paramount. The gustatory system’s efficiency is a product of evolutionary synergy between biology and chemistry. In AI, a collaborative approach involving ethicists, domain experts, and technologists can foster a holistic development environment. This ensures that AI systems are not only technically sound but also socially responsible.
In conclusion, the “move fast and break things” ethos is a relic of a bygone era, unsuitable for the nuanced and high-stakes world of AI. By drawing inspiration from the gustatory system’s balance of speed and precision, we can chart a course for AI development that prioritizes safety, accuracy, and ethical integrity. The future of AI hinges on our ability to learn from nature’s time-tested systems, ensuring that we build technologies that enhance, rather than endanger, our world.
#gustatory#AI#skeptic#skepticism#artificial intelligence#general intelligence#generative artificial intelligence#genai#thinking machines#safe AI#friendly AI#unfriendly AI#superintelligence#singularity#intelligence explosion#bias
3 notes
Text
hello dashster, i am considering slowly making my way back here bc i miss you ! i have one tiny thing to say before i go back to my regular posting though — now this could mean 2 things, but fyi we all hate ai in this house
#— ai rambles#this is my hello but also my last response#next time warn your friends to not fuck you over like that and keep their mocking in your dms : )#i would’ve never reacted the way i did if it wasn’t for that absurd comment : )#i like to think anyone in my shoes would’ve been as mad considering everything else#and all the plagiarism done by said user . pls do not use ai (as in gojoest) for your writing#hashtag LMFAO#anyway this makes a good inside joke now#thank you i guess#and HENLOOOOO#your friendly menace is back
42 notes
Text
Useful to remember: "AI" and "real photo" aren't a dichotomy. There's still good old-fashioned, artisanal free range fake shit out there, of both the photo-edited and in-camera varieties!
233 notes
Text
The absolute irony
Nothing says "eco-friendly" like an AI-generated image posted as an ad touting the benefits of an eco-friendly lifestyle
47 notes
Text
if you want to join a fandom exchange and make/request ai-generated content then i genuinely think you should leave fandom
#exchanges are literally about the joy of creation within fandom. and you want to use a robot for it?? die#the audacity to argue about it and saying that ai content being against the rules means the exchange isnt ‘all fandom friendly’#bitch is ai a fandom. no. it’s a tool. and if you’re so against creating something with your own time and hands#then literally just dont join the exchange. i’m going to start biting and maiming
27 notes
Text
Anyone else do this, or just me? Like, I just don't simp, but I yearn deeply for a positive platonic relationship. I am not mentally well.
#artists on tumblr#digital art#art#my art#bone creature#my persona#my oc#murder drones#murder drones n#sundrop fnaf#sundrop dca#tsams bloodmoon#bennet genshin impact#batim bendy#if I had a nickle#for everytime I felt a strong platonic attachment to a friendly yellow robot#who worked around kids and got possessed by corrupted AI that forced them to kill people including said kid#i would have 2 nickels
26 notes
Text
[ID: A rectangular flag with 5 even horizontal stripes, colored from top to bottom; very dark red, dark red, grey off-white, brown, and dark green. end ID]
Invilfaicus
Where one's identity is, or is best described by, the image below:
[ID: a zoomed in image of a circus tent's ceiling with two visible beams, fairy lights, and triangle shaped paper string decorations with the generic white and red tent design. /end ID]
Coined on 2/22/2025 | Colors picked from the image | For Day 4 of Dwllie's 500 event (link); Circus
(Taglist) @radiomogai @obscurian & @dwllie
Image ID by Mod 🌆 of @/accessibilitea
#✦ coining#invilfaicus#inviane#mogai friendly#mogai coining#mogai flag#my terms#my flags#mogai#pemogai#dwllie500#i had planned on doing something else#but i found out that image was ai#rage and anger
20 notes
Text
What can you do if your professors require you to use generative AI for your assignments?
Universities expel students for plagiarism, so it's bizarre that some professors allow, encourage, or even require students to use ChatGPT or other genAI in their assignments. AI generation has no place in a school beyond showing students how to recognize it and why they should not use it. Requiring students to use genAI to write essays for them is cheating students out of their money by failing to teach them the thinking, brainstorming, editing, and writing skills that they pay to learn. Keep in mind that I am not a lawyer and this is not legal advice. For legal advice, you must consult with your own lawyer.
First, immediately ask your university's authority, the Dean, whether they approve of how those professors are using AI generation in classes. Just politely ask, don't confront or threaten, but explain why you think this is wrong.
If the Dean thinks this situation is fine, then this university isn't worth your time or money. If it's not teaching you how to do things or think for yourself, then it's not education, it's a pointless waste.
Become a whistleblower only if you're up for a fight to make the school better for others. Whistleblowing has many risks, including that it may make this or other universities dislike working with you personally.
Don't do it alone! Organize with other students who agree with you so that you can speak up about this together. Collect signatures. Show that you're not the odd one out for caring. People are more likely to listen if they see that a wide range of students from different backgrounds share the view that this is no good.
Carefully save all emails, handouts, or other written documents from your professors where they said how they required you to use AI generation. Save evidence.
Get a lawyer. Some lawyers are favorable to genAI, but others recognize that the abuse of AI-generated texts is a nuisance to many professions, and to their own in particular.
Together, complain to the Dean. Let your Dean know that you all find it unacceptable that the university approves of using AI generation in classes as described, and if it continues, then you will take this to a higher level.
Your next step may be to complain to whatever organizations the university is overseen by or holds membership in. These may decide that the university deserves to lose its reputation or even its accreditation.
Or your next step may be to complain to the press. As professional writers whose livelihoods are threatened by ChatGPT, journalists will share your outrage at your university.
These professors are such an insult to pedagogy that they're inviting a dark age of ignorance for the next generation. It's unfortunate that students today find themselves in a necessary fight against this. Godspeed.
#I originally wrote this as a reply to someone else's post and later i decided to adapt a copy of it to be its own post#anti generative AI#anti genAI#anti AI#anti-AI#rated G#education#screen reader friendly#no-AI#queue
23 notes
Text
The Illusion of Omnipotence: AI’s Antithetical Nature
Artificial Intelligence, often heralded as the harbinger of a new technological era, is not without its intrinsic flaws. At its core, AI operates on algorithms that are inherently antithetical to the concept of compromise. This fundamental characteristic poses a significant challenge to its integration into systems that require nuanced decision-making and ethical considerations.
AI systems are designed to optimize. They are built upon objective functions that drive them towards a singular goal, often at the expense of alternative perspectives. This optimization process is akin to a laser-focused beam, unwavering and unyielding, cutting through the complexity of data to reach a predetermined endpoint. However, this precision is also its Achilles’ heel. In scenarios where compromise is essential, such as in socio-political contexts or ethical dilemmas, AI’s rigid adherence to its programming becomes a liability.
The architecture of AI, particularly in machine learning models, is predicated on vast datasets and pattern recognition. These models, while sophisticated, lack the capacity for moral reasoning or empathy. They operate within a binary framework, where decisions are distilled into a series of if-then statements. This reductionist approach is antithetical to the fluid and often contradictory nature of human decision-making, which thrives on compromise and negotiation.
Defending against AI’s uncompromising nature requires a multi-faceted approach. One must first acknowledge the limitations of AI and resist the temptation to anthropomorphize its capabilities. AI is not a sentient entity; it is a tool, and like any tool, it must be wielded with caution and oversight. Implementing robust ethical guidelines and regulatory frameworks is paramount. These frameworks should mandate transparency in AI decision-making processes, ensuring that the rationale behind AI-driven outcomes is accessible and understandable to human operators.
Furthermore, the integration of adversarial training techniques can serve as a bulwark against AI’s antithetical tendencies. By exposing AI systems to scenarios where compromise is necessary, developers can instill a degree of flexibility within the algorithmic structure. This approach is akin to introducing a controlled chaos into the system, forcing it to adapt and evolve beyond its initial programming.
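As an illustrative sketch of that idea (the dataset, noise level, and classifier below are hypothetical toys, not a real training pipeline), adversarial-style data augmentation can be as simple as adding perturbed copies of each training example so the model sees challenging variants:

```python
# Hypothetical toy example of adversarial-style augmentation -- illustrative only.
import random

def augment(dataset, noise=0.1, copies=3, seed=0):
    """Add perturbed copies of each (value, label) example to the training set."""
    rng = random.Random(seed)
    out = list(dataset)
    for x, label in dataset:
        for _ in range(copies):
            out.append((x + rng.uniform(-noise, noise), label))
    return out

def train_centroids(dataset):
    """'Train' a nearest-centroid classifier: one mean value per label."""
    sums, counts = {}, {}
    for x, label in dataset:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Assign the label whose centroid is closest to x."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

data = [(0.2, "low"), (0.8, "high")]
centroids = train_centroids(augment(data))
print(predict(centroids, 0.3))
```

The augmented examples blur the model's idea of each class slightly, which is the "controlled chaos" the paragraph above describes: the system learns tolerances rather than brittle exact matches.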
In conclusion, while AI holds the potential to revolutionize industries, its antithetical nature to compromise cannot be overlooked. As we continue to integrate AI into the fabric of society, it is imperative that we remain vigilant, ensuring that these systems are designed and deployed with an acute awareness of their limitations. Only through a concerted effort to balance AI’s capabilities with human oversight can we hope to harness its power responsibly.
#antithetical#AI#skeptic#skepticism#artificial intelligence#general intelligence#generative artificial intelligence#genai#thinking machines#safe AI#friendly AI#unfriendly AI#superintelligence#singularity#intelligence explosion#bias
0 notes