#AI language learning
reviewtechnology · 9 months ago
Are you struggling with your Audio Projects? - Voisi AI Elite
Do you want to create multi-voice conversations between different speakers? With Voisi you can create conversational stories, podcasts, and dramas, with text-to-audio in all major languages.
You can do several voice-related tasks, including voice cloning, translation, voice-to-text, and multiple voices, and the software works with all the major AI providers, including Amazon, Microsoft, IBM, OpenAI, and more.
If your projects involve any of the areas below, Voisi AI Elite may be the product for you:
AI voice cloning, Voice-over software, Language translation AI, Voice-to-text software, Creative content tools, Multilingual AI, Audio production AI, Voice synthesis, AI language learning, Podcast creation AI, Voice cloning technology, AI voice generator, Multilingual voiceovers, Creative workflow AI, AI-powered narration, Voice AI tools etc.
Learn more about the product and how to use the software here.
Disclaimer: This article may contain affiliate links, which means I may earn a small commission at no extra cost to you.
1 note
victusinveritas · 10 days ago
I am SHOCKED that making Grok "unwoke" literally turned it into, in its own words, Mechahitler.
2K notes
river-taxbird · 11 months ago
AI hasn't improved in 18 months. It's likely that this is it. There is currently no evidence the capabilities of ChatGPT will ever improve. It's time for AI companies to put up or shut up.
I'm just reiterating this excellent post from Ed Zitron, but it hasn't left my head since I read it and I want to share it. I'm also taking some talking points from Ed's other posts. So basically:
We keep hearing AI is going to get better and better, but these promises seem to be coming from a mix of companies engaging in wild speculation and lying.
ChatGPT, the industry-leading large language model, has not materially improved in 18 months. For something that claims to be getting exponentially better, it sure is the same shit.
Hallucinations appear to be an inherent aspect of the technology. Since it's based on statistics and the AI doesn't know anything, it can never know what is true. How could I possibly trust it to get any real work done if I can't rely on its output? If I have to fact-check everything it says, I might as well do the work myself.
For "real" AI that does know what is true to exist, it would require us to discover new concepts in psychology, math, and computing, which OpenAI is not working on, and seemingly no other AI companies are either.
OpenAI has already seemingly slurped up all the data on the open web. ChatGPT 5 would take 5x more training data than ChatGPT 4 to train. Where is this data coming from, exactly?
Since improvement appears to have ground to a halt, what if this is it? What if ChatGPT 4 is as good as LLMs can ever be? What use is it?
As Jim Covello, a leading semiconductor analyst at Goldman Sachs, said (on page 10, and that's big finance so you know they only care about money): if tech companies are spending a trillion dollars to build up the infrastructure to support AI, what trillion-dollar problem is it meant to solve? AI companies have a unique talent for burning venture capital, and it's unclear if OpenAI will be able to survive more than a few years unless everyone suddenly adopts it all at once. (Hey, didn't crypto and the metaverse also require spontaneous mass adoption to make sense?)
There is no problem that current AI is a solution to. Consumer tech is basically solved; normal people don't need more tech than a laptop and a smartphone. Big tech has run out of innovations, and they are desperately looking for the next thing to sell. It happened with the metaverse and it's happening again.
In summary:
AI hasn't materially improved since the launch of ChatGPT 4, which wasn't that big of an upgrade over 3.
There is currently no technological roadmap for AI to become better than it is. (As Jim Covello said in the Goldman Sachs report, the evolution of smartphones was openly planned years ahead of time.) The current problems are inherent to the technology, and nobody has indicated there is any way to solve them in the pipeline. We have likely reached the limits of what LLMs can do, and they still can't do much.
Don't believe AI companies when they say things are going to improve from where they are now before they provide evidence. It's time for the AI shills to put up, or shut up.
5K notes
reasonsforhope · 8 months ago
"As a Deaf man, Adam Munder has long been advocating for communication rights in a world that chiefly caters to hearing people. 
The Intel software engineer and his wife — who is also Deaf — are often unable to use American Sign Language in daily interactions, instead defaulting to texting on a smartphone or passing a pen and paper back and forth with service workers, teachers, and lawyers. 
It can make simple tasks, like ordering coffee, more complicated than it should be. 
But there are life events that hold greater weight than a cup of coffee. 
Recently, Munder and his wife took their daughter in for a doctor’s appointment — and no interpreter was available. 
To their surprise, their doctor said: “It’s alright, we’ll just have your daughter interpret for you!” ...
That day at the doctor’s office came at the heels of a thousand frustrating interactions and miscommunications — and Munder is not isolated in his experience.
“Where I live in Arizona, there are more than 1.1 million individuals with a hearing loss,” Munder said, “and only about 400 licensed interpreters.”
In addition to being hard to find, interpreters are expensive. And texting and writing aren’t always practical options — they leave out the emotion, detail, and nuance of a spoken conversation. 
ASL is a rich, complex language with its own grammar and culture; a subtle change in speed, direction, facial expression, or gesture can completely change the meaning and tone of a sign. 
“Writing back and forth on paper and pen or using a smartphone to text is not equivalent to American Sign Language,” Munder emphasized. “The details and nuance that make us human are lost in both our personal and business conversations.”
His solution? An AI-powered platform called Omnibridge. 
“My team has established this bridge between the Deaf world and the hearing world, bringing these worlds together without forcing one to adapt to the other,” Munder said. 
Trained on thousands of signs, Omnibridge is engineered to transcribe spoken English and interpret sign language on screen in seconds...
“Our dream is that the technology will be available to everyone, everywhere,” Munder said. “I feel like three to four years from now, we're going to have an app on a phone. Our team has already started working on a cloud-based product, and we're hoping that will be an easy switch from cloud to mobile to an app.” ...
At its heart, Omnibridge is a testament to the positive capabilities of artificial intelligence. "
-via GoodGoodGood, October 25, 2024. More info below the cut!
To test an alpha version of his invention, Munder welcomed TED associate Hasiba Haq on stage. 
“I want to show you how this could have changed my interaction at the doctor appointment, had this been available,” Munder said. 
He went on to explain that the software would generate a bi-directional conversation, in which Munder’s signs would appear as blue text and spoken word would appear in gray. 
At first, there was a brief hiccup on the TED stage. Haq, who was standing in as the doctor’s office receptionist, spoke — but the screen remained blank. 
“I don’t believe this; this is the first time that AI has ever failed,” Munder joked, getting a big laugh from the crowd. “Thanks for your patience.”
After a quick reboot, they rolled with the punches and tried again.
Haq asked: “Hi, how’s it going?” 
Her words popped up in blue. 
Munder signed in reply: “I am good.” 
His response popped up in gray. 
Back and forth, they recreated the scene from the doctor’s office. But this time Munder retained his autonomy, and no one suggested a 7-year-old should play interpreter. 
Munder’s TED debut and tech demonstration didn’t happen overnight — the engineer has been working on Omnibridge for over a decade. 
“It takes a lot to build something like this,” Munder told Good Good Good in an exclusive interview, communicating with our team in ASL. “It couldn't just be one or two people. It takes a large team, a lot of resources, millions and millions of dollars to work on a project like this.” 
After five years of pitching and research, Intel handpicked Munder’s team for a specialty training program. It was through that backing that Omnibridge began to truly take shape...
“Our dream is that the technology will be available to everyone, everywhere,” Munder said. “I feel like three to four years from now, we're going to have an app on a phone. Our team has already started working on a cloud-based product, and we're hoping that will be an easy switch from cloud to mobile to an app.” 
In order to achieve that dream — of transposing their technology to a smartphone — Munder and his team have to play a bit of a waiting game. Today, their platform necessitates building the technology on a PC, with an AI engine. 
“A lot of things don't have those AI PC types of chips,” Munder explained. “But as the technology evolves, we expect that smartphones will start to include AI engines. They'll start to include the capability in processing within smartphones. It will take time for the technology to catch up to it, and it probably won't need the power that we're requiring right now on a PC.” 
At its heart, Omnibridge is a testament to the positive capabilities of artificial intelligence. 
But it is more than a transcription service — it allows people to have face-to-face conversations with each other. There’s a world of difference between passing around a phone or pen and paper and looking someone in the eyes when you speak to them. 
It also allows Deaf people to speak ASL directly, without doing the mental gymnastics of translating their words into English.
“For me, English is my second language,” Munder told Good Good Good. “So when I write in English, I have to think: How am I going to adjust the words? How am I going to write it just right so somebody can understand me? It takes me some time and effort, and it's hard for me to express myself actually in doing that. This technology allows someone to be able to express themselves in their native language.” 
Ultimately, Munder said that Omnibridge is about “bringing humanity back” to these conversations. 
“We’re changing the world through the power of AI, not just revolutionizing technology, but enhancing that human connection,” Munder said at the end of his TED Talk. 
“It’s two languages,” he concluded, “signed and spoken, in one seamless conversation.”"
-via GoodGoodGood, October 25, 2024
532 notes
ineed-to-sleep · 22 days ago
Ran into another post about the disney-midjourney lawsuit discourse and tbh it baffles me every time. You guys Do know it's already illegal to sell fanart, right? You know that the lawsuit isn't calling for expansion of copyright law and disney doesn't need to expand it in order to win, right? You know disney is only *really* suing midjourney because it has a subscription option(profit) and has the capacity to mass produce copyrighted work(scale), and the interest disney has in this is entirely money based, and they won't suddenly see a monetary benefit to be gained from suing small artists after this(who neither make enough of a profit nor produce their work in a large enough scale to become a real competitor for disney), right? You know making money off of copyrighted work that's not yours or that you don't have a license for hasn't been protected by the law for a really long time and we make it despite this because we know it's very unlikely to give us trouble, right? Right guys? Right???? You know your rights, don't you guys?????? Guys????????????????
135 notes
mintjeru · 1 year ago
hot girl summer 🔥
open for better quality | no reposts
330 notes
justalittlesolarpunk · 2 months ago
Solarpunks, I need your help! I just discovered that the language learning app I downloaded as an alternative to Duolingo’s AI slop in fact also uses AI! 🤬🤬🤬
Pls, can anyone recommend a language app that actually helps you with fluency and *doesn’t* use planet-wrecking work-stealing technology?
54 notes
hotwaterandmilk · 7 months ago
[scan: love angel profiles from Shougaku Ichinensei 11/1994]
I don't know why but I find these brief profiles for the love angels in Shougaku Ichinensei 11/1994 really cute. Illustrations above by Kirishima Sent.
-
Wedding Peach / Hanasaki Momoko
Born March 3rd. Blood type O. Cheerful, but a little clumsy… A girl with angelic blood who fights devils using the power of love.
Angel Lily / Tanima Yuri
Born July 7th. Blood type A. Gentle, good at fortune telling. A girl with angelic blood who fights devils using the power of intelligence.
Angel Daisy / Tamano Hinagiku
Born May 5th. Blood type B. Tomboyish, good at fighting. A girl with angelic blood who fights devils using the power of courage.
-
While the girls are still described as angels, their profiles state that they each combat the devils using different "powers". Love for Peach, intelligence and courage for Lily and Daisy respectively.
I also find it interesting that Yuri is described as having a talent for fortune-telling in this profile. In the Secret File art book section discussing the original setting ideas for the series, Hinagiku was planned to have precognition/clairvoyance talents. However, none of the heroines retained this as a primary character trait in the finalised anime or Ciao manga (save for Yuri suggesting the missing bride's location in episode 9 of the anime).
While courage stayed as a core element of Daisy's power and persona in both anime and manga (Hinagiku performs her oironaoshi by calling "Angel Courage Daisy"), Lily's intelligence didn't become amalgamated in the same way. Interestingly though, the hint at psychic ability did remain in the oironaoshi call "Angel Prescience Lily".
This is what I love love love about older media mix titles like Wedding Peach. There are just so many changes across the different adaptations that I'm still finding out new things as I gain access to various old magazines (which is hard because ugh, they're SO expensive). And there really are a lot of different versions of the Wedding Peach story to read in print alone:
[photo: the various print releases of the Wedding Peach story]
If I had a bunch of spare cash on hand right now for fandom purposes I'd be buying more magazines to scan/share and paying someone to do a proper translation of the Secret File book (because I really can't get the nuance right with the interviews). Oh well, I can dream!
93 notes
akaessi · 5 months ago
Duolingo's annoying and outlandish marketing scheme is supposed to distract you from the fact that they are routinely using AI to structure, moderate, and otherwise create language lessons.
For years, language experts and learners have been requesting that the app include Icelandic and other languages with relatively small populations of native speakers. Additionally, while Duolingo has been credited with "playing a key role in preserving indigenous languages," they have yet to fulfill their promises of adding more at-risk languages, specifically Yucatec and K'iche, for which the app faced "setbacks." Even worse, in my opinion, is the fact that they are using AI to create language courses in Navajo and Hawaiian.
The ethics of using AI to model and create indigenous languages cannot be ignored. What are their systems siphoning from? Language revitalization without a community being involved and credited is language theft and colonization. (I can't even get into the environmental impact of AI).
Instead of working with more language experts, hiring linguists, and spending more on their language programs, more and more money is being poured into their marketing. While they have a heavy team of computational and theoretical linguists, there seem to be fewer and fewer language experts and social linguists involved.
Their research section has not had a publication listed since 2021. Another research site Duolingo hosts on the app's efficacy has publications as recent as 2024, but only 5 of the publications listed (2021-2024) were peer-reviewed, and only 2 more were independent research reports (2022 & 2023). The remaining 9 were Duolingo internal research reports. So, while a major marketing feature of the app is its "science-backed, research-based approach," there is much to be desired from its research. Additionally, the way they determine efficacy in their own reports, as described in this blog post, rests on an insufficient dataset.
And while they openly share the datasets derived from Duolingo users, there are no clear bibliographies for individual language courses. What datasets are their curriculum creators using? And what curriculum creators do they even have left, considering the massive layoffs of their translation team (10%) and that the remaining translators are being tasked with editing AI content?
Duo can be run over by a goddamn cybertruck but god forbid the app actually spend any money on the language programs you're playing with.
49 notes
victusinveritas · 29 days ago
From Rebecca Solnit:
When you outsource thinking, your brain goes on vacation. "EEG analysis presented robust evidence that LLM, Search Engine and Brain-only groups had significantly different neural connectivity patterns, reflecting divergent cognitive strategies. Brain connectivity systematically scaled down with the amount of external support: the Brain‑only group exhibited the strongest, widest‑ranging networks, Search Engine group showed intermediate engagement, and LLM assistance elicited the weakest overall coupling."
https://arxiv.org/pdf/2506.08872
But also here's a fantastic essay on the subject: "Now, in the age of the internet—when the Library of Alexandria could fit on a medium-sized USB stick and the collected wisdom of humanity is available with a click—we’re engaged in a rather large, depressingly inept social experiment of downloading endless knowledge while offloading intelligence to machines. (Look around to see how it’s going). That’s why convincing students that intelligence is a skill they must cultivate through hard work—no shortcuts—has become one of the core functions of education."
https://www.forkingpaths.co/p/the-death-of-the-student-essayand
87 notes
river-taxbird · 2 years ago
There is no such thing as AI.
How to help the non technical and less online people in your life navigate the latest techbro grift.
I've seen other people say stuff to this effect but it's worth reiterating. Today in class, my professor was talking about a news article where a celebrity's likeness was used in an ai image without their permission. Then she mentioned a guest lecture about how AI is going to help finance professionals. Then I pointed out, those two things aren't really related.
The term AI is being used to obfuscate details about multiple semi-related technologies.
Traditionally in sci-fi, AI means artificial general intelligence like Data from star trek, or the terminator. This, I shouldn't need to say, doesn't exist. Techbros use the term AI to trick investors into funding their projects. It's largely a grift.
What is the term AI being used to obfuscate?
If you want to help the less online and less tech literate people in your life navigate the hype around AI, the best way to do it is to encourage them to change their language around AI topics.
By calling these technologies what they really are, and encouraging the people around us to know the real names, we can help lift the veil, kill the hype, and keep people safe from scams. Here are some starting points, which I am just pulling from Wikipedia. I'd highly encourage you to do your own research.
Machine learning (ML): an umbrella term for solving problems for which developing algorithms by human programmers would be cost-prohibitive; instead, the problems are solved by helping machines "discover" their "own" algorithms, without being explicitly told what to do by any human-developed algorithm. (This is the basis of most of the technology people call AI.)
Language model (LM), or large language model (LLM): a probabilistic model of natural language that can generate probabilities for a series of words, based on text corpora in one or more languages it was trained on. (This would be your ChatGPT.)
Generative adversarial network (GAN): a class of machine learning frameworks and a prominent framework for approaching generative AI. In a GAN, two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss. (This is the source of some AI images and deepfakes.)
Diffusion models: models that model the probability distribution of a given dataset. In image generation, a neural network is trained to denoise images with added Gaussian noise by learning to remove the noise. After training is complete, it can then be used for image generation by starting with a random noise image and denoising it. (This is the more common technology behind AI images, including Dall-E and Stable Diffusion. I added this one to the post after it was brought to my attention that it is now more common than GANs.)
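Since "diffusion" sounds more mystical than it is, here's a toy sketch in plain Python of just the *forward* half of the process described above: a signal gets mixed with Gaussian noise step by step until almost nothing of the original survives. The schedule numbers are made up (a common linear beta schedule), a real model learns a neural network to run this in reverse, and nothing here comes from any real library.

```python
import math
import random

def noise_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule; alpha_bar[t] is the fraction of the
    original signal that survives after t noising steps."""
    betas = [beta_start + (beta_end - beta_start) * t / (T - 1) for t in range(T)]
    alpha_bar, prod = [], 1.0
    for b in betas:
        prod *= 1.0 - b
        alpha_bar.append(prod)
    return betas, alpha_bar

def add_noise(x0, t, alpha_bar):
    """Forward process: x_t = sqrt(a) * x0 + sqrt(1 - a) * noise."""
    a = alpha_bar[t]
    return math.sqrt(a) * x0 + math.sqrt(1.0 - a) * random.gauss(0.0, 1.0)

betas, alpha_bar = noise_schedule()
# At step 0 nearly all signal survives; by the last step it is essentially pure noise.
print(alpha_bar[0], alpha_bar[-1])
```

Image generators run the learned reverse of this: start from pure noise and denoise step by step until a picture falls out.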
I know these terms are more technical, but they are also more accurate, and they can easily be explained in a way non-technical people can understand. The grifters are using language to give this technology its power, so we can use language to take that power away and let people see it for what it really is.
12K notes
apollos-boyfriend · 1 year ago
overhearing my 10 year old cousin watch a video talking about the dangers of misinformation in things like chatgpt and how you should not just mistrust the information but also the intentions of people attempting to sell it to you…… the kids are alright ❤️
133 notes
alpaca-clouds · 2 months ago
We need to talk about AI
Okay, several people asked me to post about this, so I guess I am going to post about this. Or to say it differently: Hey, for once I am posting about the stuff I am actually doing for university. Woohoo!
Because here is the issue. We are kinda suffering a death of nuance right now, when it comes to the topic of AI.
I understand why this is happening (basically everyone who wants to market anything is calling it AI, even though it is often a thousand different things), but it is a problem.
So, let's talk about "AI", that isn't actually intelligent, what the term means right now, what it is, what it isn't, and why it is not always bad. I am trying to be short, alright?
So, right now when anyone says they are using AI, they mean that they are using a program that functions based on what computer nerds call "a neural network," through a process called "deep learning" or "machine learning" (yes, those terms mean slightly different things, but frankly, you really do not need to know the details).
Now, the theory for this has been around since the 1940s! The idea had always been to create calculation nodes that mirror the way neurons in the human brain work. That looks kinda like this:
[diagram: layers of input, hidden, and output nodes]
Basically, there are input nodes, into which you put some data; those do some transformations that depend on the kind of thing you want to train it for, and in the end a number comes out, which the program then "remembers". I could explain the details, but your eyes would glaze over the same way everyone's eyes glaze over in the class I have on this every Friday afternoon.
All you need to know: you put in some sort of data (that can be text, math, pictures, audio, whatever), the computer does magic math, and out comes a number that has a meaning attached to it.
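To show there really is no magic in the "magic math": here is one such calculation node in plain Python. Weighted sum of the inputs, squash it, out comes a number. All the weights below are invented for illustration; "training" is the process of nudging them until the output numbers mean something.

```python
import math

def node(inputs, weights, bias):
    """One calculation node: weighted sum of inputs, squashed into (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

def tiny_network(inputs):
    """Two hidden nodes feeding one output node. Weights are made up."""
    h1 = node(inputs, [0.5, -0.6], 0.1)
    h2 = node(inputs, [-0.3, 0.8], 0.0)
    return node([h1, h2], [1.2, -0.7], 0.2)

# Data goes in, transformations happen, a number with a meaning comes out.
print(tiny_network([1.0, 0.0]))
```

A real network has millions or billions of these nodes, but each one is exactly this boring.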
And we actually have been using this since the 80s in some way. If any Digimon fans are here: there is a reason the digital world in Digimon Tamers was created at Stanford in the 80s. This was studied there.
But if it was around so long, why am I hearing so much about it now?
This is a good question, hypothetical reader. The very short answer is: some super-nerds found a way to make this work way, way better in 2012, and from that work (which was then called deep learning in artificial neural networks, ANN for short) we got basically everything TechBros have not shut up about for the last ten years or so. Including "AI".
Now, most things you think about when you hear "AI" are some form of generative AI. Usually it will use some form of LLM, a Large Language Model, to process text, and a method called Stable Diffusion to create visuals. (Tbh, I have no clue what method audio generation uses, as the only audio AI I have looked into so far was based on wolf howls.)
LLMs were this big, big breakthrough, because they actually appear to comprehend natural language. They don't, of course; to them, words and phrases are just statistical variables. Scientists also call them "stochastic parrots". But of course our dumb human brains love to anthropomorphize shit. So they go: "It makes human words. It gotta be human!"
It is a whole thing.
It does not understand or grasp language. But the mathematics behind it will basically create a statistical analysis of all the words and then produce a likely answer.
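To make that concrete, here is a toy version of the idea in plain Python: a bigram model that only counts which word follows which, and then parrots back the most frequent continuation. A real LLM is unimaginably bigger and cleverer, but the "statistics, not understanding" point is the same. The corpus and function names here are invented for the example.

```python
import collections

def train_bigrams(corpus):
    """The whole "model" is just counts of which word follows which."""
    counts = collections.defaultdict(collections.Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def most_likely_next(counts, word):
    """Return the statistically most frequent continuation. No understanding involved."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the cat chased the dog",
    "the dog sat on the rug",
]
model = train_bigrams(corpus)
print(most_likely_next(model, "sat"))  # -> on
```

Scale the counting up by billions of parameters and a lot of clever math and you get the "likely answer" machine; at no point does understanding enter the picture.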
What you have to understand, however, is that LLMs and Stable Diffusion are just a tiny minority of the use cases for ANNs. Research right now is starting to use ANNs for EVERYTHING. Some of it also partially uses Stable Diffusion and LLMs, but not to take away people's jobs.
Which is probably the place where I will share what I have been doing recently with AI.
The stuff I am doing with Neural Networks
The neat thing: if a neural network is open source, it is surprisingly easy to work with. Last year when I started with this I was so intimidated, but frankly, I will confidently say now: as someone who has been working with computers for more than 10 years, this is easier programming than most of the shit I did to organize databases. So, during this last year I did three things with AI: one for a university research project, one for my work, and one because I find it interesting.
The university research project trained an AI to watch live video streams of our biology department's fish tanks, analyze the behavior of the fish, and notify someone if a fish showed signs of being sick. We used a model named "YOLO" for this, which is very good at analyzing pictures, though the base framework did not know anything about things that don't live on land. So we needed to teach it what a fish was, how to analyze videos (as the base framework can only look at single pictures), and then we needed to teach it how fish were supposed to behave. We still managed to get the whole thing working in about 5 months. So... yeah. But nobody can watch hundreds of fish all the time, so without this, those fish would just die if something went wrong.
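For a feel of it: the detection part is YOLO's job, but the "notify someone" logic downstream of the detector can be almost embarrassingly simple. This sketch flags fish that barely move across a clip; the ids, box format, and threshold are all invented for the example, not our actual setup.

```python
def centroid(box):
    """Center of a detection box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def flag_sluggish_fish(tracks, min_travel=50.0):
    """tracks maps a fish id to its per-frame detection boxes.
    A healthy fish moves; one that barely travels across the whole
    clip gets flagged for a human to check. The threshold is made up."""
    flagged = []
    for fish_id, boxes in tracks.items():
        centers = [centroid(b) for b in boxes]
        travel = sum(
            ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
            for (x1, y1), (x2, y2) in zip(centers, centers[1:])
        )
        if travel < min_travel:
            flagged.append(fish_id)
    return flagged

tracks = {
    "fish-1": [(0, 0, 10, 10), (40, 0, 50, 10), (80, 0, 90, 10)],           # moving
    "fish-2": [(100, 100, 110, 110), (101, 100, 111, 110), (102, 100, 112, 110)],  # barely moving
}
print(flag_sluggish_fish(tracks))  # -> ['fish-2']
```

The actual behavior analysis was of course more involved, but the point stands: the ANN does the seeing, and plain old code does the deciding.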
The second is for my work. For this I used a really old OCR engine called Tesseract, originally developed at HP in the 1980s and later open-sourced with Google's backing. And I mean old: it is built on 1980s research, simply doing OCR, "optical character recognition". Aka: if you give it a picture of writing, it can read that writing. My work has the issue that we have tons and tons of old paperwork that has been scanned and needs to be digitized into a database. But everyone who was hired to do this manually found it mind-numbing. Just imagine doing this all day: take a contract, look up certain data, fill it into a table, put the contract away, take the next contract, and do the same. Thousands of contracts, 8 hours a day. Nobody wants to do that. Our company had been using another OCR software for this, but that one was super expensive. So I was asked if I could build something to do it. So I did. And it was so ridiculously easy, it took me three weeks. And it actually has a higher success rate than the expensive software before.
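The step that replaces the mind-numbing manual copying is basically "OCR text in, database row out". Here is a sketch of that structuring step; the field labels, patterns, and sample text are invented examples, not our actual contracts.

```python
import re

def extract_contract_fields(ocr_text):
    """Pull a few fields out of raw OCR text into a record ready for
    a database row. The field labels here are invented examples."""
    patterns = {
        "contract_no": r"Contract\s*(?:No\.?|Number)[:\s]+([A-Z0-9-]+)",
        "date": r"Date[:\s]+(\d{4}-\d{2}-\d{2})",
        "customer": r"Customer[:\s]+(.+)",
    }
    record = {}
    for field, pattern in patterns.items():
        m = re.search(pattern, ocr_text, re.IGNORECASE)
        record[field] = m.group(1).strip() if m else None
    return record

sample = """Contract No: K-2024-0117
Date: 2024-03-05
Customer: Mustermann GmbH"""
print(extract_contract_fields(sample))
```

In practice you also want validation and a human check for low-confidence scans, but the boring part really is this boring.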
Lastly, there is the one I am doing right now, and this one is a bit more complex. See: we have tons and tons of historical material that has never been translated, be it papyri, stone tablets, letters, manuscripts, whatever. Right now I am extending Tesseract (which by now is open source) to let it read handwritten text and completely different scripts than what it knows so far. Once it can reliably do the OCR, I plan to hook it up to an LLM to translate those texts. Because here is the thing: these documents have not been translated because there are just not enough people who speak those old languages. Which leads to people going: "GASP! We found this super important document that actually shows things from the ancient world we wanted to know forever, and it was lying in our collection collecting dust for 90 years!" I am not the only person with this idea, and yeah, I just hope that in the next few years we can get something going to help historians and archaeologists do their work.
Make no mistake: ANNs are saving lives right now
Here is the thing: ANNs and deep learning are saving lives right now. I really cannot stress enough how quickly this technology has become incredibly important in fields like biology and medicine for analyzing data and predicting outcomes in a way that a human just never would be capable of.
I saw a post yesterday saying "AI" can never be a part of Solarpunk. I will heavily disagree with that. Solarpunk, for example, would need the help of AI for a lot of stuff, as it can help us deal with ecological problems, might be able to predict weather in ways we are not capable of, and will help with medicine, with plants, and with so many other things.
ANNs are a good thing in general. And yes, they might also be used for some just fun things in general.
And for things that we may not need to know, but that would be fun to know. Like I mentioned above: the only audio research I read through was based on wolf howls. Basically, there is a group of researchers trying to understand wolves, and they are using AI to analyze the howling and grunting and find patterns in there that humans are not capable of finding due to human bias. So maybe AI will help us understand some animals at some point.
Heck, we have seen so far that some LLMs have been capable of extrapolating on their own from being taught one version of a language to just automatically understanding another version of it, like going from modern English to Old English and such. Which is why some researchers wonder if it might actually be able to understand languages that were never deciphered.
All of that is interesting and fascinating.
Again, the generative stuff is a very, very minute part of what AI is being used for.
Yeah, but WHAT ABOUT the generative stuff?
So, let's talk about the generative stuff. Because I kinda hate it, but I also understand that there is a big issue.
If you know me, you know how much I freaking love the creative industry. If I had more money, I would just throw it all at all those amazing creative people online. I mean, fuck! I adore y'all!
And I do think that art fully created by AI is lacking the human "heart" - or to phrase it more artistically: it is lacking the chemical imbalances that make a human human, lol. Same goes for writing. After all, an AI is actually incapable of creating a complex plot and all of that. And even if we managed to train it to do it, I don't think it should.
AI saving lives = good.
AI doing the shit humans actually evolved to do = bad.
And I also think that people who just do the "AI Art/Writing" shit are lazy and need to just put in the work to learn the skill. Meh.
However...
I do think that these forms of AI can have a place in the creative process. There are people creating works of art that use some assets created with genAI but still putting in hours and hours of work on their own. And given that collages are legal to create - I do not see how this is meaningfully different. If you can take someone else's artwork as part of a collage legally, you can also take some art created by AI trained on someone else's art legally for the collage.
And then there is also the thing... Look, right now there is a lot of crunch in a lot of creative industries, and a lot of the work is not the fun creative kind, but the annoying kind that nobody actually enjoys and that still eats hours and hours before deadlines. Swen the Man (the Larian boss) spoke about that recently: how mocap recordings often come out with artifacts where the computer setup used to record them (which already relies partially on algorithms) gets janky. So far this was cleaned up by humans, and it is shitty, mind-numbing work most people hate. You can train AI to do this.
And I am going to assume that in normal 2D animation there are also more than enough clean-up steps and such that nobody actually likes to do, and where this can help prevent crunch. Same goes for those overworked souls doing movie VFX, who have worked 80-hour weeks for the last 5 years. In movie VFX we just do not have enough workers. This is a fact. So, yeah, if we can help those people out: great.
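For a feel of what "cleaning up jank" can mean in practice, here is a deliberately simple sketch (nothing to do with Larian's actual pipeline, and far cruder than any ML approach): knocking single-frame spikes out of a made-up motion track.

```python
# Illustrative sketch only: removing single-frame spikes ("jank") from a
# motion track with a median filter. Real mocap cleanup, ML-based or not,
# is far more involved; the trajectory here is synthetic.
import numpy as np

# A smooth synthetic joint trajectory...
t = np.linspace(0, 2 * np.pi, 200)
clean = np.sin(t)

# ...with a few single-frame glitches, like a tracker briefly losing a marker
glitched = clean.copy()
for frame in (40, 90, 150):
    glitched[frame] += 3.0

def median_filter(signal, radius=2):
    """Replace each frame with the median of its local neighborhood."""
    out = np.empty_like(signal)
    for i in range(len(signal)):
        window = signal[max(0, i - radius) : i + radius + 1]
        out[i] = np.median(window)
    return out

repaired = median_filter(glitched)
print(np.max(np.abs(repaired - clean)))  # spikes gone, motion preserved
```

The tedious human version of this job is eyeballing thousands of frames for exactly those spikes; the appeal of training a model on it is obvious.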
If this is all directed by a human vision and just helping out to make certain processes easier? It is fine.
However, something that is just 100% AI? That is dumb and sucks. And it sucks even more that people's fanart, fanfics, and also commercial work online got stolen for it.
And yet... Yeah, I am sorry, I am afraid I have to join the camp of: "Criminalizing the taking of training data is a really bad idea." Because yeah... It is fucking shitty how Facebook, Microsoft, Google, OpenAI and whatever are using this stolen data to create programs to make themselves richer and what not, while not even making their models open source. BUT... if we outlawed it, the only people capable of even creating such algorithms - which absolutely can help in some processes - would be the big media corporations that already own a ton of data for training (so basically Disney, Warner and Universal), who would then get a monopoly. And that would actually be a bad thing. So, like... both options suck. There is no good solution, I am afraid.
And mind you, Disney, Warner, and Universal would still not pay their artists for it. lol
However, that does not mean you should not bully the companies who are using this stolen data right now without making their models open source! And also please, please bully Hasbro and Riot and whoever else for using AI art in their merchandise. Bully them hard. They have a lot of money and they deserve to be bullied!
But yeah. Generally speaking: Please, please, as I will always say... inform yourself on these topics. Do not hate on stuff without understanding what it actually is. Most topics in life are nuanced. Not all. But many.
28 notes · View notes
ghostjelliess · 5 months ago
Text
Had to get serious with a .3 mm pen cus the .5 jelly rolls smudged too much 😭 now I get why Muji pens are like that.
Learning essential phrases, thanks Duo! 🙏
Just curious how it translates! It worked!
Oh, I wonder how my worksheets translate. It will look funny, like: soup soup soup soup soup soup...
Ope... 👀 Should have stayed curious. 😒
38 notes · View notes
dead-sp1der · 4 months ago
Text
I love hearing Martyn talk about the Misadventures NPC AIs, because it's still definitely Generative AI, just probably not the unethical kind
If it's a handcrafted language model (which... wow, that's crazy impressive) that's trained on non-stolen data, I can't see an ethical reason not to use it
Still very much Generative AI tho 😭 <3
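For anyone wondering what "handcrafted and trained on non-stolen data" could even look like in miniature (this is not how the Misadventures NPCs actually work, just a toy), here's a character-level Markov chain trained entirely on text you wrote yourself:

```python
# Toy handcrafted generative text model: a character-level Markov chain.
# The corpus is a few sentences you own, not scraped data.
import random
from collections import defaultdict

corpus = (
    "the wandering merchant greets the traveler. "
    "the traveler greets the merchant. "
    "the merchant offers wares to the traveler."
)

ORDER = 3  # how many characters of context the model conditions on

# "Training": record which character follows each context in the corpus
transitions = defaultdict(list)
for i in range(len(corpus) - ORDER):
    transitions[corpus[i : i + ORDER]].append(corpus[i + ORDER])

def generate(seed, length=60, rng=None):
    """Sample text one character at a time from the learned transitions."""
    rng = rng or random.Random(0)
    out = seed
    for _ in range(length):
        choices = transitions.get(out[-ORDER:])
        if not choices:  # dead end: this context never appeared in training
            break
        out += rng.choice(choices)
    return out

print(generate("the"))
```

Still very much generative - it samples from learned statistics, same basic idea as the big models - but every byte of training data is accounted for.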
39 notes · View notes
these-trans-hands · 3 months ago
Text
Duolingo becoming AI garbage wasn't on my list of 2025 predictions, but I'm not surprised, unfortunately
Not 100% perfect? On a language learning app? Does that mean we're just supposed to be OK with being taught bad information so that you can pay fewer employees? Piss off.
19 notes · View notes