# Creating Human-level AI
vt-scribbles · 1 year ago
Text
Something seriously lacking in my art is the ability to tell a story in a single illustration.
I've gotten so used to drawing my characters standing around doing random things that I've never practiced telling a full tale/putting implications into my pieces that require more thinking/looking.
It also comes from a lower amount of details in my works by default [since I like to get pieces done fast], but I'm tired of using that as an excuse.
9 notes · View notes
toughtink · 2 years ago
Text
the problem with ai generated stuff isn’t that it’s a “collage” of other stuff, it’s that these models have largely been built on stolen data without any permission, often for the express purpose of copying the work of specific people. computers cannot and do not experience art of any form like humans do; they can only scrape raw data and then use predictive models to regurgitate it. and because the data sets are so large and completely unregulated, prompters and the companies who own these models cannot guarantee that anything generated isn’t plagiarized pixel-for-pixel or word-for-word. the licensing and use of any data for these models must be treated as something novel and separate from the current rules around rights/transformative works, something that individuals can opt into instead of jumping through hoops to attempt to opt out of. this goes for visual art, writing, voice work, and music.
23 notes · View notes
aggressiveguitarnoises · 1 year ago
Text
the more i find out about AI, the more im scared to fully turn to digital art as a career
5 notes · View notes
gobhoblingreg · 8 months ago
Text
Another thing that concerns me about this is the fact that somebody could repost your art from one site, like tumblr, onto X/Twitter. Even without the artist themselves being on X/Twitter, their art or intellectual property would likely still be used, because this amendment to their terms of service doesn’t specify that you have to be the original creator in order for them to take and use your work. It basically implies that anything and everything posted there can be used for these purposes, regardless of which user posts the content. So while leaving the platform might lessen the chances of your work being used this way, it only takes one person reposting for your work to fall victim anyway.
I can also imagine that if other social media platforms aren’t already doing this to some extent, they will likely follow suit in the near future.
Tumblr media
In case any of you here also use X/Twitter.
38K notes · View notes
nauticalfools · 8 days ago
Text
.
0 notes
theleanbean · 2 months ago
Text
Level 5 isn't making me very enthusiastic about their new Layton game
0 notes
king-goo · 2 months ago
Text
“oh AI makes writing accessible to everyone what if you can’t afford to take a class to learn how to write well” pick up a book dumbass
0 notes
crocronutart · 7 months ago
Text
Hey nerds,
Are you a fan of animation? Are you an artist that works in the animation industry (or at least wants to… one of these days)? Do you just enjoy silly cartoons? Or maybe you don’t care about cartoons, but you believe strongly in workers rights and fair wages!
Well, the people who make your favorite cartoons, anime, and animated shows and movies are fighting for the future of this industry. We are facing a lot of challenges right now: job creep, low wages, the ever-looming threat of artists being replaced by AI to create worse animation for the sake of generating content to make lots of money for very rich people, while real human artists live paycheck to paycheck and struggle to pay rent and feed their families…
But you can help! By signing this petition to show your support for animation workers in the US
6K notes · View notes
horreurscopes · 1 year ago
Text
regardless of a myriad of other AI discourse talking points i'm not touching with a 10 ft pole, i think it should always be disclosed when i'm looking at something AI generated. like that's a basic level of societal courtesy, right. AI images more than any new technology that has changed the course of humanity seem to be inseparable from a purposeful obfuscation of their origin. the gimmick is to deceive human perception, their entire purpose is to make you believe you are looking at something created by sentience. AI is at its core a tool for deception and i mean that as a neutral statement. it's a mimic, a pantomime. impersonation. and that is, ethics aside, annoying as all fuck
9K notes · View notes
arrangedaccident · 2 months ago
Text
love this emerging genre of doctor who episode where you think it’s a surface level commentary on one topic (social media; ai), but then the rug gets pulled out from underneath you and it’s actually a more severe social commentary (racism; misogyny/incel culture), BUT THEN it’s actually a pretty adept illustration of how the first and second topics are part of the same problem (social media allows people to create echo chambers which reinforce racism and fear mongering to the point of denying their own lived reality; ai masks human biases as technological progress and exacerbates regressive movements like incel culture)
2K notes · View notes
probablyasocialecologist · 10 months ago
Text
Artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people, a government trial of the technology has found. Amazon conducted the test earlier this year for Australia’s corporate regulator, the Securities and Investments Commission (ASIC), using submissions made to an inquiry. The outcome of the trial was revealed in an answer to a question on notice at the Senate select committee on adopting artificial intelligence.

The trial involved testing generative AI models before selecting one to ingest five submissions from a parliamentary inquiry into audit and consultancy firms. The most promising model, Meta’s open source model Llama2-70B, was prompted to summarise the submissions with a focus on ASIC mentions, recommendations, references to more regulation, and to include the page references and context. Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts. Then, a group of reviewers blindly assessed the summaries produced by both humans and AI for coherency, length, ASIC references, regulation references and for identifying recommendations. They were unaware that this exercise involved AI at all.

These reviewers overwhelmingly found that the human summaries beat out their AI competitors on every criterion and on every submission, scoring 81% on an internal rubric compared with the machine’s 47%. Human summaries ran up the score by significantly outperforming on identifying references to ASIC documents in the long document, a type of task that the report notes is a “notoriously hard task” for this type of AI. But humans still beat the technology across the board. Reviewers told the report’s authors that AI summaries often missed emphasis, nuance and context; included incorrect information or missed relevant information; and sometimes focused on auxiliary points or introduced irrelevant information.
Three of the five reviewers said they guessed that they were reviewing AI content. The reviewers’ overall feedback was that they felt AI summaries may be counterproductive and create further work because of the need to fact-check and refer to the original submissions, which communicated the message better and more concisely.
3 September 2024
5K notes · View notes
patricia-taxxon · 6 months ago
Note
So wait, let me just ask for clarity because I want to understand. Do you support AI art?
i support art made with spontaneous and hands-off processes, i support the creation of art tools that are more art than tool & allow people to "participate" in someone else's creation vicariously a-la picrew, i don't support the institution of "AI" as a consumer grade technology industry that promises impossible things and prioritizes appearances and marketability over usability, i believe that if "AI" allowed people to siphon images directly from their brain with no effort required then it would be a good thing but I believe this is fundamentally impossible until we figure out how to read minds and the focus on arguing for or against accessibility is missing the point, i believe AI art can only ever be a pale imitation of the process of commissioning an artist who can't ever ask questions and cannot be trusted with object permanence, I believe copyright law is a head on the hydra of capitalism and doesn't serve artists, i believe that AI art isn't necessarily art theft but it CAN overfit to its data and create illegal works without telling you, which constitutes criminal levels of negligence, I believe all art is derivative in some way and some of the most seminal art made in this era of history has been far more dubiously infringing than AI art ever can be because AI art does not steal in the way a human does, I think the focus on energy consumption is transparently just a post-hoc justification for hating the thing you all already hated under the guise of environmentalism because it is a problem far from unique to AI, I think the focus on environmentalism was a distraction at best during the NFT craze too, i don't think AI art takes artists out of a job any more than stock photos or clipart does, but the proliferation of consumer-grade tools DOES run the risk of engendering bad client practices similar to the rise of machine translation and asking translators to simply "fix" a machine translated run of text at a marked down price, but this is not the fault of 
the technology itself and is instead a result of the ideological push being made by the biggest actors in the industry, i think AI art is ugly as sin and carries the pervasive quality of looking normal at a glance but getting worse and worse the longer you look at it, which can be interesting but often isn't, i think ai art is shit google images and the controversy is overblown but I think machine learning is here to stay and it will inevitably decentralize again after the immense costs catch up to all the corpos relying on it to win the future.
so like, yes and no.
3K notes · View notes
phantomrose96 · 1 year ago
Text
The conversation around AI is going to get away from us quickly because people lack the language to distinguish types of AI--and it's not their fault. Companies love to slap "AI" on anything they believe can pass for something "intelligent" a computer program is doing. And this muddies the waters when people want to talk about AI: the same word covers a wide umbrella of technologies, and people themselves don't know how to qualify the distinctions within it.
I'm a software engineer and not a data scientist, so I'm not exactly at the level of domain expert. But I work with data scientists, and I have at least rudimentary college-level knowledge of machine learning and linear algebra from my CS degree. So I want to give some quick guidance.
What is AI? And what is not AI?
So what's the difference between just a computer program, and an "AI" program? Computers can do a lot of smart things, and companies love the idea of calling anything that seems smart enough "AI", but industry-wise the question of "how smart" a program is has nothing to do with whether it is AI.
A regular, non-AI computer program is procedural, and rigidly defined. I could "program" traffic light behavior that essentially goes { if(light === green) { go(); } else { stop();} }. I've told it in simple and rigid terms what condition to check, and how to behave based on that check. (A better program would have a lot more to check for, like signs and road conditions and pedestrians in the street, and those things will still need to be spelled out.)
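As a runnable sketch of that rigid, procedural style (in Python rather than the pseudocode above; the function and parameter names are invented for illustration), every consideration has to be spelled out by hand:

```python
def procedural_traffic_decision(light, pedestrian_in_road=False):
    """Rigidly defined, procedural behavior: every check is spelled out."""
    if pedestrian_in_road:   # each extra consideration must be hand-coded
        return "stop"
    if light == "green":
        return "go"
    return "stop"            # anything else (red, yellow, unknown): stop

print(procedural_traffic_decision("green"))                           # go
print(procedural_traffic_decision("green", pedestrian_in_road=True))  # stop
```

If a condition isn't written down here, the program simply doesn't know about it.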
An AI traffic light behavior is generated by machine-learning, which simplistically is a huge cranking machine of linear algebra which you feed training data into and it "learns" from. By "learning" I mean it's developing a complex and opaque model of parameters to fit the training data (but not over-fit). In this case the training data probably includes thousands of videos of car behavior at traffic intersections. Through parameter tweaking and model adjustment, data scientists will turn this crank over and over adjusting it to create something which, in very opaque terms, has developed a model that will guess the right behavioral output for any future scenario.
A well-trained model would be fed a green light and know to go, and a red light and know to stop, and 'green but there's a kid in the road' and know to stop. A very very well-trained model can probably do this better than my program above, because it has the capacity to be more adaptive than my rigidly-defined thing if the rigidly-defined program is missing some considerations. But if the AI model makes a wrong choice, it is significantly harder to trace down why exactly it did that.
Because again, the reason it's making this decision may be very opaque. It's like engineering a very specific plinko machine which gets tweaked to be very good at taking a road input and giving the right output. But like if that plinko machine contained millions of pegs and none of them necessarily correlated to anything to do with the road. There's possibly no "if green, go, else stop" to look for. (Maybe there is, for traffic light specifically as that is intentionally very simplistic. But a model trained to recognize written numbers for example likely contains no parameters at all that you could map to ideas a human has like "look for a rigid line in the number". The parameters may be all, to humans, meaningless.)
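To make that opacity concrete, here is a deliberately tiny toy in Python (entirely my own construction, nothing like a production model): a single learned unit trained on the four light/kid scenarios. After training, the behavior matches the rules, but it lives in a few opaque numbers rather than in any readable "if green, go" line.

```python
import math
import random

# Toy "training data": (light_is_green, kid_in_road) -> 1 = go, 0 = stop
data = [((1, 0), 1), ((0, 0), 0), ((0, 1), 0), ((1, 1), 0)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # opaque parameters
b = 0.0

def predict(x):
    # A weighted sum squashed into (0, 1) -- no explicit rules anywhere.
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

# "Turning the crank": thousands of small parameter tweaks to fit the data.
for _ in range(5000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= 0.1 * err * x[0]
        w[1] -= 0.1 * err * x[1]
        b -= 0.1 * err

print([round(predict(x)) for x, _ in data])  # -> [1, 0, 0, 0]
```

The trained weights are just numbers; you can print them, but nothing in them reads like a traffic rule. That is the tracing problem, in miniature.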
So, that's basics. Here are some categories of things which get called AI:
"AI" which is just genuinely not AI
There's plenty of software that follows a normal, procedural program defined rigidly, with no linear algebra model training, that companies would love to brand as "AI" because it sounds cool.
Something like motion detection/tracking might be sold as artificially intelligent. But under the covers that can be done as simply as "if some range of pixels changes color by a certain amount, flag as motion."
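A minimal sketch of that kind of non-AI "smartness" in Python (frames represented as plain 2D brightness grids, with thresholds invented for illustration):

```python
def detect_motion(prev_frame, curr_frame, threshold=30, min_changed=4):
    """Flag "motion" when enough pixels change brightness past a threshold.
    Pure procedural logic: no model, no training, no linear algebra."""
    changed = sum(
        1
        for prev_row, curr_row in zip(prev_frame, curr_frame)
        for p, c in zip(prev_row, curr_row)
        if abs(p - c) > threshold
    )
    return changed >= min_changed

still = [[10] * 4 for _ in range(4)]   # a dark, static 4x4 frame
moved = [row[:] for row in still]
for r in range(2):                     # a bright object enters one corner
    moved[r][0] = moved[r][1] = 200

print(detect_motion(still, still))  # False
print(detect_motion(still, moved))  # True
```

It looks "smart" in a product demo, but every behavior is a hand-written threshold check.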
2. AI which IS genuinely AI, but is not the kind of AI everyone is talking about right now
"AI", by which I mean machine learning using linear algebra, is very good at being fed a lot of training data, and then coming up with an ability to go and categorize real information.
The AI that looks at cells and determines whether or not they're cancerous is using this technology. So is OCR (Optical Character Recognition), the technology that can take an image of hand-written text and transcribe it. Again, these are machine learning models built on linear algebra, so yes, they're AI.
Many other such examples exist, and have been around for quite a good number of years. They share the genre of technology, which is machine learning models, but these are not the Large Language Model Generative AI that is all over the media. Criticizing these would be like criticizing airplanes when you're actually mad at military drones. It's the same "makes fly in the air" technology but their impact is very different.
3. The AI we ARE talking about. "Chat-gpt" type of Generative AI which uses LLMs ("Large Language Models")
If there was one word I wish people would know in all this, it's LLM (Large Language Model). This describes the KIND of machine learning model that Chat-GPT/midjourney/stablediffusion are fueled by. They're so extremely powerfully trained on human language that they can take an input of conversational language and create a predictive output that is human-coherent. (I am less certain what additional technology fuels art generation, specifically, but considering AI art generation has risen hand-in-hand with the advent of powerful LLMs, I'm at least confident in saying it is still LLM-based at its core.)
This technology isn't exactly brand new (predictive text has been using it, but more like the mostly innocent and much less successful older sibling of some celebrity, who no one really thinks about.) But the scale and power of LLM-based AI technology is what is new with Chat-GPT.
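The "predict the next word from what came before" idea can be shown at toy scale with a bigram counter in Python. This is my own illustration and is nowhere near a real LLM (no neural network, no attention, a twenty-word corpus instead of trillions of tokens), but the core task has the same shape:

```python
from collections import Counter, defaultdict

corpus = ("the light is green so we go . "
          "the light is red so we stop . "
          "the light is green so we go .").split()

# Count, for each word, which words have followed it in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often during training."""
    return following[word].most_common(1)[0][0]

print(predict_next("is"))     # 'green' (seen twice, vs 'red' once)
print(predict_next("green"))  # 'so'
```

Scale that prediction idea up to billions of learned parameters over an internet-sized corpus and you have the flavor, though not the mechanics, of what these models do.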
This is generative AI, and more specifically, large language model generative AI.
(Data scientists, feel free to add on or correct anything.)
3K notes · View notes
ellipsus-writes · 3 months ago
Text
Tumblr media
Ellipsus Digest: March 18
Each week (or so), we'll highlight the relevant (and sometimes rage-inducing) news adjacent to writing and freedom of expression.
This week: AI continues its hostile takeover of creative labor, Spain takes a stand against digital sludge, and the usual suspects in the U.S. are hard at work memory-holing reality in ways both dystopian and deeply unserious.
ChatGPT firm reveals AI model that is “good at creative writing” (The Guardian)
... Those quotes are working hard.
OpenAI (ChatGPT) announced a new AI model trained to emulate creative writing—at least, according to founder Sam Altman: “This is the first time i have been really struck by something written by AI.” But with growing concerns over unethically scraped training data and the continued dilution of human voices, writers are asking… why? 
Spoiler: the result is yet another model that mimics the aesthetics of creativity while replacing the act of creation with something that exists primarily to generate profit for OpenAI and its (many) partners—at the expense of authors whose work has been chewed up, swallowed, and regurgitated into Silicon Valley slop.
Spain to impose massive fines for not labeling AI-generated content (Reuters)
But while big tech continues to accelerate AI’s encroachment on creative industries, Spain (in stark contrast to the U.S.) has drawn a line: In an attempt to curb misinformation and protect human labor, all AI-generated content must be labeled, or companies will face massive fines. As the internet is flooded with AI-written text and AI-generated art, the bill could be the first of many attempts to curb the unchecked spread of slop.
Besos, España 💋
These words are disappearing in the new Trump administration (NYT)
Project 2025 is moving right along—alongside dismantling policies and purging government employees, the stage is set for a systemic erasure of language (and reality). Reports show that officials plan to wipe government websites of references to LGBTQ+, BIPOC, women, and other communities—words like minority, gender, Black, racism, victim, sexuality, climate crisis, discrimination, and women have been flagged, alongside resources for marginalized groups and DEI initiatives, for removal.
It’s a concentrated effort at creating an infrastructure where discrimination becomes easier… because the words to fight it no longer officially exist. (Federally funded educational institutions, research grants, and historical archives will continue to be affected—a broader, more insidious continuation of book bans, but at the level of national record-keeping, reflective of reality.) Doubleplusungood, indeed.
Pete Hegseth’s banned images of “Enola Gay” plane in DEI crackdown (The Daily Beast)
Fox News pundit-turned-Secretary of Defense-slash-perpetual-drunk-uncle Pete Hegseth has a new target: banning educational materials featuring the Enola Gay, the plane that dropped the atomic bomb on Hiroshima. His reasoning: that its inclusion in DEI programs constitutes "woke revisionism." If a nuke isn’t safe from censorship, what is?
The data hoarders resisting Trump’s purge (The New Yorker)
Things are a little shit, sure. But even in the ungoodest of times, there are people unwilling to go down without a fight.
Archivists, librarians, and internet people are bracing for the widespread censorship of government records and content. With the Trump admin aiming to erase documentation of progressive policies and minority protections, a decentralized network is working to preserve at-risk information in a galvanized push against erasure, refusing to let silence win.
Let us know if you find something other writers should know about, (or join our Discord and share it there!) Until next week, - The Ellipsus Team xo
Tumblr media
619 notes · View notes
madscientist14159 · 3 days ago
Text
TADC Theory
I’ve seen people saying that the “vegan Jax” bit proves that Jax is an NPC, since Caine has said that he can’t access human minds.
But Gooseworx has told us that Jax isn’t an NPC.
(Also, we’ve seen Ragatha’s mind be altered by the stupid sauce, and Gangle’s by the mania mask. They’d need to be NPCs too, if this theory were true)
So Caine is lying.
But I think it might go further than that. I have a suspicion that there are no NPCs.
The theory is based on the idea that Caine either can’t or doesn’t create sentient beings. Instead, he takes the abstracted humans in the basement, and uses them as raw materials to make adventure characters out of.
There’s clearly some kind of narrative arc going on with Pomni and Gummigoo, and it doesn’t feel finished. I think she’s going to find a door with Gummigoo’s face on it, crossed out, and that’s going to set off the revelation that he’s not fake at all. He’s just a trapped human from before the current group’s time, who lost his mind, spent decades in the basement, and then one day Caine fished him back out, scrambled his brains around, and dropped him in the Candy Land adventure.
It might explain why Caine is afraid of losing track of who’s an NPC and who isn’t; at the fundamental level there is no way to distinguish between them because they’re not different groups.
All of which, if true, implies that the narrative climax will involve a reunion with Kaufmo and Queenie.
Flaws in this theory/Notes:
The Evil Big Tops just kinda looked like that. Caine reusing character designs for the current group? Although if Gangle’s a newer design then it’d explain why she lacked an evil version.
Whole ton of artist’s reference dolls around. That’s a lot of trapped humans. The entire C&A employee workforce?
What exactly is Bubble? A brainwashed human? An extension of the AI? A genuine NPC?
Could be a mix of humans and NPCs, but that feels messier.
279 notes · View notes
reality-detective · 1 month ago
Text
Tumblr media
THIS IS THE STORM — OPERATION LIBERTY SHIELD UNLEASHED
The silence has shattered. The war is no longer hidden. On May 10, 2025, the full force of Trump’s restored military alliance launched Operation Liberty Shield — a classified global takedown targeting the heart of an elite child trafficking and human experimentation network that spans continents, corporations, and crowned bloodlines. This is not a sting. This is an extinction-level purge. Over 20,000 elite forces — SEALs, Marines, Delta, and global white hats — are storming underground strongholds once believed untouchable. The goal is simple: annihilate the infrastructure of enslavement, expose the handlers, and rescue every last stolen soul.
Nevada. Alaska. Rome. Antarctica. Tunnels that were once Cold War secrets are now battlegrounds. SEAL units uncovered thousands of children locked in cages beneath camouflaged mining sites and AI-operated labs. Evidence of MK-Ultra abuse, hormonal harvesting, and genetic weaponization has been retrieved — all tied to biotech firms, fake NGOs, and even Area 51. These were not experiments. These were rituals. Each child was a data point in a demonic system designed to feed the beast and blackmail the world. From the Vatican to Silicon Valley, the currency was always the same: human lives.
Digital forensics teams under Space Force command have decrypted petabytes of dark web data — exposing blockchain-funded trafficking routes masked as "development grants." Names once praised as philanthropists are now exposed as financiers of evil. Zuckerberg, Bezos, and Gates are directly tied to AI-managed procurement contracts and smart-chain auctions. Military raids on media hubs have confirmed "Operation Obscura" — a coordinated propaganda system created to bury these operations, discredit Trump, and destroy whistleblowers before truth could reach the surface.
Now it’s all unraveling. Gitmo is overflowing. Military tribunals are active. Blackmail files once used to enslave nations are being burned. Trump’s alliance is not just winning — it is rewriting history.
The storm is no longer a warning. It is here. It is righteous. And it will be remembered forever. Stay alert. Stay grounded. The final act has begun.
I can't make you understand or believe me, but this whole thing has been about saving the children and then to clean up the top three branches of the government. This is happening in every country NOT just in the United States. You Decide 🤔
227 notes · View notes