#Digital copyright and AI training
sohanur24 · 7 months
Text
The most recent copyright lawsuit in the realm of AI centers around Mike Huckabee and his literary works
Authors Take Legal Action Against Tech Giants in AI Copyright Lawsuit In a significant legal development, a group of authors, including former Arkansas Governor Mike Huckabee and prominent Christian writer Lysa TerKeurst, are suing tech behemoths Meta and Microsoft, alongside other entities such as EleutherAI and Bloomberg, over the unauthorized use of their literary works in the development of…
0 notes
circusinthewalls · 20 days
Text
NSFW Ghost Rambling - 18+ MDNI, AGELESS BLOGS DNI [Masterlist]
Simon knows damn well how big he is. In height, in stature, in.. everything, really. Even dear Soap is dwarfed at least somewhat by him. So, yeah, he knows how massive he is.
And admittedly now, as he's leaning over you, thick digits gently, gradually working you open, he doesn't know quite how to feel about it. Rare is it that the man ever gets considerably worried about anything, but he's just not sure how he's supposed to fit.
Much like everyone else, you're smaller than him. Perhaps not by a considerable amount, but you feel so goddamn tight around his fingers. Part of him is worried his dick will outright detach when he goes to pull out later.
Uncertainty is twisted on his face, and you think he's bound to burn two new holes in your junk if he stares at it with that perplexed expression any longer. You reach down for his wrist, trying to both soothe and ground him despite your own oncoming pleasurable brain fog.
Pulled from his thoughts, he looks up to meet your gaze and swallows. He can already tell what you're thinking. What you're about to ask. His hand stills for a moment.
"Don't want to hurt you."
"You won't," you breathe in response. "It'll be okay, yeah? We'll be okay."
He doesn't answer initially, seeming to contemplate your words. Soaking them in and visibly relaxing before he nods.
"Yeah, love."
When he's actually pushing into you, his eyes are trained on your face. One hand is cupping it, and the other thumbs slow circles over the thigh it's holding open. Not once does he look away. He's watching, ready at any second to internally reprimand himself if-
You whine. His hips stop dead immediately.
There's a sort of look he gives you. Questioning the noise in the silence that's followed it.
"Want it," you plead. "Simon.."
He has more trust in you than anything else in this world, honestly, and the way your brows are knitted, eyes fluttering just so with each soft pant.. How did he ever get so lucky? Whatever higher power it is that's spared him long enough to let him have you, he thanks for it.
He leans down farther, propping himself up on one elbow so he can tuck his face into your neck as he starts up again. With every sound that escapes, he peppers your throat with kisses. Each one makes the dull, burning ache from the stretch so very worth it.
"So good," he murmurs, like sinking into you is breathing new life into him. "Takin' it like a champ. Fuckin' beautiful."
He does eventually still again, not quite bottomed out, but content with the depth since you are. He just holds you and lets you adjust, grinding forward ever so carefully on occasion while he continues to whisper praises until you're ready for him to properly move.
Copyright © circusinthewalls on Tumblr. 2024. All Rights Reserved. Writing not permitted for reposting, transcription, translation or to use with AI technologies.
525 notes
mariacallous · 2 days
Text
In a product demo last week, OpenAI showcased a synthetic but expressive voice for ChatGPT called “Sky” that reminded many viewers of the flirty AI girlfriend Samantha played by Scarlett Johansson in the 2013 film Her. One of those viewers was Johansson herself, who promptly hired legal counsel and sent letters to OpenAI demanding an explanation, according to a statement released later. In response, the company on Sunday halted use of Sky and published a blog post insisting that it “is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice.”
Johansson’s statement, released Monday, said she was “shocked, angered, and in disbelief” by OpenAI’s demo using a voice she called “so eerily similar to mine that my closest friends and news outlets could not tell the difference.” Johansson revealed that she had turned down a request last year from the company’s CEO, Sam Altman, to voice ChatGPT and that he had reached out again two days before last week’s demo in an attempt to change her mind.
It’s unclear if Johansson plans to take additional legal action against OpenAI. Her counsel on the dispute with OpenAI is John Berlinski, a partner at Los Angeles law firm Bird Marella, who represented her in a lawsuit against Disney claiming breach of contract, settled in 2021. (OpenAI’s outside counsel working on this matter is Wilson Sonsini Goodrich & Rosati partner David Kramer, who is based in Silicon Valley and has defended Google and YouTube on copyright infringement cases.) If Johansson does pursue a claim against OpenAI, some intellectual property experts suspect it could focus on “right of publicity” laws, which protect people from having their name or likeness used without authorization.
James Grimmelmann, a professor of digital and internet law at Cornell University, believes Johansson could have a good case. “You can't imitate someone else's distinctive voice to sell stuff,” he says. OpenAI declined to comment for this story, but yesterday released a statement from Altman claiming Sky “was never intended to resemble” the star, adding, “We are sorry to Ms. Johansson that we didn’t communicate better.”
Johansson’s dispute with OpenAI drew notice in part because the company is embroiled in a number of lawsuits brought by artists and writers. They allege that the company breached copyright by using creative work to train AI models without first obtaining permission. But copyright law would be unlikely to play a role for Johansson, as one cannot copyright a voice. “It would be right of publicity,” says Brian L. Frye, a professor at the University of Kentucky’s College of Law focusing on intellectual property. “She’d have no other claims.”
Several lawyers WIRED spoke with said a case Bette Midler brought against Ford Motor Company and its advertising agency Young & Rubicam in the late 1980s provides a legal precedent. After turning down the ad agency’s offers to perform one of her songs in a car commercial, Midler sued when the company hired one of her backup singers to impersonate her sound. “Ford was basically trying to profit from using her voice,” says Jennifer E. Rothman, a law professor at the University of Pennsylvania, who wrote a 2018 book called The Right of Publicity: Privacy Reimagined for a Public World. “Even though they didn't literally use her voice, they were instructing someone to sing in a confusingly similar manner to Midler.”
It doesn’t matter whether a person’s actual voice is used in an imitation or not, Rothman says, only whether that audio confuses listeners. In the legal system, there is a big difference between imitation and simply recording something “in the style” of someone else. “No one owns a style,” she says.
Other legal experts don’t see what OpenAI did as a clear-cut impersonation. “I think that any potential ‘right of publicity’ claim from Scarlett Johansson against OpenAI would be fairly weak given the only superficial similarity between the ‘Sky’ actress' voice and Johansson, under the relevant case law,” Colorado law professor Harry Surden wrote on X on Tuesday. Frye, too, has doubts. “OpenAI didn’t say or even imply it was offering the real Scarlett Johansson, only a simulation. If it used her name or image to advertise its product, that would be a right-of-publicity problem. But merely cloning the sound of her voice probably isn’t,” he says.
But that doesn’t mean OpenAI is necessarily in the clear. “Juries are unpredictable,” Surden added.
Frye is also uncertain how any case might play out, because he says right of publicity is a fairly “esoteric” area of law. There are no federal right-of-publicity laws in the United States, only a patchwork of state statutes. “It’s a mess,” he says, although Johansson could bring a suit in California, which has fairly robust right-of-publicity laws.
OpenAI’s chances of defending a right-of-publicity suit could be weakened by a one-word post on X—“her”—from Sam Altman on the day of last week’s demo. It was widely interpreted as a reference to Her and Johansson’s performance. “It feels like AI from the movies,” Altman wrote in a blog post that day.
To Grimmelmann at Cornell, those references weaken any potential defense OpenAI might mount claiming the situation is all a big coincidence. “They intentionally invited the public to make the identification between Sky and Samantha. That's not a good look,” Grimmelmann says. “I wonder whether a lawyer reviewed Altman's ‘her’ tweet.” Combined with Johansson’s revelations that the company had indeed attempted to get her to provide a voice for its chatbots—twice over—OpenAI’s insistence that Sky is not meant to resemble Samantha is difficult for some to believe.
“It was a boneheaded move,” says David Herlihy, a copyright lawyer and music industry professor at Northeastern University. “A miscalculation.”
Other lawyers see OpenAI’s behavior as so manifestly goofy they suspect the whole scandal might be a deliberate stunt—that OpenAI judged it could trigger controversy by going forward with a sound-alike after Johansson declined to participate, but that the attention it would receive seemed to outweigh any consequences. “What’s the point? I say it’s publicity,” says Purvi Patel Albers, a partner at the law firm Haynes Boone who often takes intellectual property cases. “The only compelling reason—maybe I’m giving them too much credit—is that everyone’s talking about them now, aren’t they?”
336 notes
what-eats-owls · 1 year
Text
a point about the IA situation that I cannot make on twitter without death threats
Like many authors, I have complicated feelings about the IA lawsuit. IA provides a whole raft of invaluable services; that's not in dispute. The current eBook licensing structure is also clearly not sustainable. But neither was IA's National Emergency Library, which amounted to unrestricted lending of unlicensed digital copies. There are some thoughtful posts about how their argument to authors, "you'll be paid in exposure," is not especially compelling.
But I'm not here to discuss that; I'm here to talk about the licensing. TL;DR I don't want my work being fed into an AI or put on the blockchain, and to enforce that, you need a license.
So, here's the thing. IA's argument for the NEL boils down to "if we possess a physical copy of the book, we should be able to do what we want with it," and that's frankly unserious. (Compare: Crypto Bros who thought spending $3 million on a single copy of a Dune artbook meant they owned the copyright.) Their claim is that scanning a physical copy of the book and compiling the scans into a digital edition is sufficiently transformative to be considered fair use.
What that gives them is something that functions almost identically to an eBook, without the limitations (or financial obligations) of an eBook license. And I'm sure some of you are thinking, "so what, you lose six cents, get over yourself," but this isn't actually about the money. It's about what they can do with the scans.
A license grants them the right to use the work in specific, limited ways. It also bars them from using it in ways that aren't prescribed.
For example, what if IA decides to expand their current blockchain projects and commit their scanned book collections to the blockchain "for preservation"? Or what if IA decides to let AI scrapers train on the scanned text? One of their archivists sees AI art as a "toy" and "fears [AI art] will be scorned by the industry's gatekeeping types."
Bluntly, an unlicensed, unrestricted collection seems to be what they're gunning for. (Wonky but informative thread from a lawyer with a focus on IP; this cuts to the pertinent part, but the whole thing's good reading.) The Authors Guild is in no way unbiased here, but in the fifth paragraph of this press release, they claim that they offered to help IA work out a licensing agreement back in 2017, and got stonewalled. (They also repeat this claim on Twitter here.)
At the end of the day, I don't want the IA to fold; I don't think anyone does. As a matter of fact, I'd be open to offering them an extremely cheap license for Controlled Digital Lending. (And revamping eBook library licensing while we're at it.) I think there's a lot of opportunity for everyone to win there. But IA needs to recognize that licenses exist for a reason, not just as a cash grab, and authors have the right to say how their work is used, just like any artist.
1K notes
not-terezi-pyrope · 4 months
Note
Ok. It's pretty clear you are more welcoming of AI, and it does have enough merits not to be given a knee-jerk reaction outright.
And how the current anti-AI stealing programs could be misused.
But aren't so many of the models built on stolen art? That is one of the big things keeping me from freely enjoying it.
The stolen art is a thing that needs to be addressed.
Though I agree that the ways such addressing is being done are not ideal. Counterproductive, even.
I could make a quip here and be like "stolen art??? But the art is all still there, and it looks fine to me!" And that would be a salient point about the silliness of digital theft as a concept, but I know that wouldn't actually address your point because what you're actually talking about is art appropriation by generative AI models.
But the thing is that generative AI models don't really do that, either. They train on publicly posted images and derive a sort of metadata - more specifically, they build a feature space mapping out different visual concepts together with text that refers to them. This is then used at the generative stage in order to produce new images based on the denoising predictions of that abstract feature model. No output is created that hasn't gone through that multi-stage level of abstraction from the training data, and none of the original training images are directly used at all.
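(To make that concrete, here is a minimal sketch of the generative stage - a DDPM-style denoising loop in Python with a toy stand-in where the trained network would go. The schedule constants, array sizes, and the stub predictor are illustrative assumptions, not any particular model; the point is the shape of the loop, which starts from pure noise and never loads a training image.)

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_noise(x, t, text_embedding):
    # Stand-in for the trained network. A real model would use its learned
    # feature space (visual concepts mapped to text) to estimate the noise
    # present in x at timestep t, conditioned on the prompt's embedding.
    return 0.1 * x  # toy prediction so the sketch runs end to end

T = 50                               # number of denoising steps
betas = np.linspace(1e-4, 0.02, T)   # standard DDPM noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

text_embedding = rng.normal(size=77)  # stand-in for an encoded prompt
x = rng.normal(size=(64, 64, 3))      # generation starts from pure noise

for t in reversed(range(T)):
    eps = predict_noise(x, t, text_embedding)
    # DDPM update: subtract the predicted noise component and rescale.
    x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:
        x += np.sqrt(betas[t]) * rng.normal(size=x.shape)  # re-inject noise

# x is now the generated "image": noise plus the model's predictions.
# No training image is read, copied, or collaged anywhere in this loop.
```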
Due to various flaws in the process, you can sometimes get a model to output images extremely similar to particular training images, and it is also possible to get a model to pastiche a particular artist's work or style, but this is something that humans can also do and is a problem with the individual image that has been created, rather than the process in general.
Training an AI model is pretty clearly fair use, because you're not even really re-using the training images - you're deriving metadata that describes them, and using that metadata to build new images. This is far more comparable to the process by which human artists learn concepts than the weird sort of "theft collage" that people seem to be convinced is going on. In many cases, the much larger training corpus of generative AI models means that an output will be far more abstracted from any identifiable source data (source data, in fact, is usually not identifiable) than a human being drawing from a reference, something we all agree is perfectly fine!
The only difference is that the AI process is happening in a computer with tangible data, and is therefore quantifiable. This seems to convince people that it is in some way more ontologically derivative than any other artistic process, because computers are assumed to be copying whereas the human brain can impart its own mystical juju of originality.
I'm a materialist and think this is very silly. The valid concerns around AI are to do with how society is unprepared for increased automation, but that's an entirely different conversation from the art theft one, and the latter actively distracts from the former. The complete refusal from some people to even engage with AI's existence out of disgust also makes it harder to solve the real problem around its implementation.
This sucks, because for a lot of people it's not really about copyright or intellectual property anyway. It's about that automation threat, and a sort of human-condition anxiety about being supplanted and replaced by automation. That's a whole mess of emotions and genuine labour concerns that we need to work through and break down and resolve, but reactionary egg-throwing at all things related to machine learning is counterproductive to that, as is reading out legal mantras paraphrasing megacorps looking to expand copyright law to cover shit like "art style".
I've spoken about this more elsewhere if you look at my blog's AI tag.
157 notes
kristinagehrmann · 9 months
Text
The US Copyright Office is currently asking for input on generative AI systems ...
... to help assess whether legislative or regulatory steps in this area are warranted. Here is what I wrote to them, and what I want as a creative professional:

AI systems undermine the value of human creative thinking and work, and pose a danger to us creative people that should not be underestimated. There is a risk of a transfer of economic advantage to a few AI companies, to the detriment of hundreds of thousands of creatives.

It is creative people, with their works, who create the data and marketing basis that the AI companies and their systems feed on. AI systems cannot produce text, images or music without suitable training material, and the quality of that training material has a direct influence on the quality of the results. To supply the systems with the necessary data, the developers of those AI systems are currently using the works of creative people - without consent, without even asking, and without remuneration. In addition, creative professionals are denied financial participation in the exploitation of the AI results created on the basis of their material.

My demands as a creative professional are these: The works and achievements of creative professionals must also be protected in digital space. The technical possibility of reading works via text and data mining must not legitimize unlicensed use! Remuneration for the use of works is the economic basis on which creative people work. AI companies are clearly pursuing economic interests with their operations. The associated use of works for commercial purposes must be properly licensed and compensated appropriately.

We need transparent training data as a market-access requirement for AI providers! In order to obtain market approval, AI providers must be able to transparently demonstrate this permission from the authors. The burden of proof and documentation of the data used - in the sense of applicable copyright law - lies with the user and not with the author. AI systems may only be trained on comprehensible, copyright-compliant sources.
____________________________
You can send your own comment to the Copyright Office here: https://www.regulations.gov/document/COLC-2023-0006-0001
My position is based on the Illustratoren Organisation's (Germany) recently published stance on AI generators: https://illustratoren-organisation.de/2023/04/04/ki-aber-fair-positionspapier-der-kreativwirtschaft-zum-einsatz-von-ki/
170 notes
Text
How To Use AI To Fake A Scandal For Fun, Profit, and Clout
Or, I Just Saw People I Know To Be Reasonable Fall For A Fake "Ripoff" And Now I'm Going To Gently Demonstrate What Really Happened
So, we all know what people say about AI. It's just an automatic collage machine, it's stealing your data (as if the rest of the mainstream internet isn't - seriously, we should be using that knee-jerk disgust response to demand better internet privacy laws rather than try to beef up copyright so that compliance has to come at the beginning rather than the end of the process and you can be sued on suspicion of referencing, but I digress...), it can't create anything novel, some people go so far as to claim it's not even synthesizing anything, but just acting as a search engine and returning something run through a filter and "proving" it by "searching" for their own art and "finding" it.
And those are blatant lies.
The thing is, the reason AI is such a breakthrough - and the reason we memed with it so hard when DALL-E Mini and DALL-E 2 first dropped - is because it CAN create novel output. Because it CAN visualize the absurd ideas that no one has ever posted to the internet before. In fact, it would be a bigger breakthrough in computer science if we DID come up with an automatic collage machine - something that knows where to cut out a part of one image and paste it onto another, then smooth out the lighting and colors to make them fairly consistent, to make it look like what we would recognize as an image we're asking for? That would make the denoising algorithm on steroids that a diffusion model is look like child's play.
But, unlike the posts that claim that they're just acting as a collage maker at best and a search engine at worst, I'm not going to ask you to take my word for it (and stick a pin in this point, we'll come back to it later). I'm going to ask you to go to Simple Stable (or Craiyon, or the Karlo demo, if Google Colab feels too complicated for you - or if you like, do all of the above) and throw in a shitpost prompt or two. Ask for a velociraptor carousel pony ridden by a bunny. Ask for Godzilla fighting a wacky waving inflatable arm flailing tube man. Ask for an oil painting of a capybara wearing an ornate princess gown. Shitpost with it like we did before these myths took hold.
Now take your favorite result(s) and reverse image search them. Did you get anything remotely similar to your generated image? Probably not!
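(If Colab isn't your thing, you can run the same experiment locally. Here's a rough sketch using Hugging Face's diffusers library - my assumption, since it's not one of the tools named above, but it runs the same class of model; the checkpoint name and prompt are just examples.)

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint (any SD 1.x checkpoint should work;
# float16 needs a GPU with roughly 4 GB of VRAM).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A deliberately absurd shitpost prompt: nothing like this exists online
# for the model to "return from a search" or collage together.
image = pipe("an oil painting of a capybara wearing an ornate princess gown").images[0]
image.save("capybara_princess.png")  # now go reverse image search this file
```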
So then, how did someone end up getting a near perfect recreation of their work? Was that just some kind of wacky, one-in-a-million coincidence?
Well - oh no, look at that, I asked it for a simplistic character drawing and it happened to me too, it just returned a drawing of mine that I never even uploaded, and it's the worst drawing I've done since the fifth grade even just to embarrass me! Oh no, what happened, did they change things right under my nose, has digital surveillance gotten even WORSE?? Look, see, here's the original on the left, compare it to the output on the right - scary!! They're training on the contents of your computer in real time now, aaaagh!!
[Two images: the crude original drawing on the left, and the near-identical img2img output on the right.]
Except, of course, for the fact that the entire paragraph above was a lie and I did this on purpose in a way no one could possibly recreate from a text prompt, even with a perfect description.
How?
See, some models have this nifty little function called img2img. It can be used for anything from guiding the composition of your final image with a roughly drawn layout, to turning a building into a dragon...to post-processing of a hand-drawn image, to blatantly fucking lying about how AI works.
I took 5 minutes out of my day to crudely draw a character. I uploaded the image to this post. I saved the post as a draft. I stuck the image URL in the init_image field in Simple Stable, cranked the init strength up to 0.8, cleared all text prompts, and ran it. It did exactly what I told it to and tried to lightly refine the image I gave it.
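(For reference, here's roughly what that recipe looks like outside Simple Stable, sketched with the diffusers img2img pipeline - again an assumed stand-in for the notebook, with a made-up filename. One caveat: diffusers' strength parameter runs the opposite way from Simple Stable's init strength, so the 0.8 init strength above corresponds to a low strength value here - less noise added, output kept closer to the input.)

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The crude five-minute drawing (filename made up for the example).
init_image = Image.open("crude_character_drawing.png").convert("RGB").resize((512, 512))

# Empty prompt + low strength: the model is given nothing to do except
# lightly re-render the image it was handed, "recreating" it near-perfectly.
result = pipe(prompt="", image=init_image, strength=0.2).images[0]
result.save("fake_theft_proof.png")
```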
If you see someone claiming that an AI stole their image with this kind of "proof", and the image they're comparing is not ITSELF a parody of an extremely well-known piece such as the Mona Lisa, or just so extremely generic that the level of similarity could be a coincidence (you/your favorite artist do/es not own the rule of thirds or basic fantasy creatures, just to name one family of examples I've seen), this is what happened.
So from here you must realize that it is deeply insidious that posts that make these claims usually imply or even outright state that you should NOT try to recreate this but instead just take their word for it, stressing ~DON'T FEED THE MACHINE~. It's always some claim about "ohhh, the more you use them, the more they learn, I made a SACRIFICE so you don't have to" - but txt2img functions can't use your interaction to learn jack shit. There's no new information in a text prompt for them TO learn. Most img2img models can't learn from your input either, for that matter! I still recommend being careful about corporate img2img toys - we know that Facebook, for instance, is happy to try and beef up facial recognition for the WORST possible reasons - but if you're worried about your privacy and data harvesting, any given txt2img model is one of the least worrying things on the internet today.
So do be careful with your privacy online, and PLEASE use your very understandable knee-jerk horror response to how much extremely personal content can be found in training databases as a call to DEMAND better privacy laws ("do not track" should not be just for show ffs) and compliance with security protocols in fields that deal with very private information (COMMON CRAWL DOESN'T GO FAR OUT OF ITS WAY, IT SHOULD NEVER HAVE BEEN ABLE TO GET ANY MEDICAL IMAGES THE PATIENTS DIDN'T SHARE THEMSELVES HOLY SHIT, SOME HOSPITAL WORKERS AND/OR MEDICAL COMMUNICATIONS DEVELOPERS BETTER BE GETTING FIRED AND/OR SUED) - but don't just believe a convenient and easy-to-disprove lie because it aligns with that feeling.
417 notes
glitchphotography · 1 year
Text
If u think losing a gig to an AI is hard, try losing gigs because your bosses are racist or xenophobic or homophobic or sexist, or try not getting gigs at all because bosses don't think your work is as legitimate as the dude who can draw 100 identical spidermans.
There are serious structural issues with pay-to-play AI services like Dalle2, and they center on how these companies data-laundered copyrighted works using fair use laws and research institutions to privatize a tech that should be considered public infrastructure for everyone.
But AI Art itself isn't evil, it's a tool that has been used by new media artists for at least a decade. There are obviously ethical ways to create AI art: train your own models, create outputs based on your own works, attribute the artists you use in your prompts, etc.
But suddenly caring about copyright like you are now team disney and team nintendo is weak. And seriously, most of your artstation works aren’t original either. Yall living off of borrowed aesthetics from 100 years of comic books and cartoons and illustrations.
AIs can’t plagiarize the way humans do. You are seeing a calculated average of images. The reason shit looks like your favorite illustrators is because a lot of these illustrators make similar art, and most people writing AI prompts have similar basic tastes. Making great AI Art from prompts takes time and patience and a keen sense of poetics.
But seriously, y’all don’t hate new tech, you hate capitalism and the corpos and bosses who are out to expropriate you.
AI Art, if anything, is the new folk art. Same repeated motifs made by anyone with a clue. This is a wonderful mingling of collective creative energies. Embrace it!
Addendum for all the reactionary responses out there:
~~~~~~~~ Artists should be getting royalties from OpenAI, Midjourney, et al. And they should be able to opt in or out of having their work included in training models. This is a given and I would never argue against compensating artists!  ~~~~~~~~
This isn’t about defending these corpos either, but machine learning tech has been around since before these companies started their thing, and experimental artists from around the world have been using machine learning to make great art.
Another thing: the moment you post your digital art to a platform, you sign away much of your consent w/r/t how your art is used. That's what those really long TOS are about. ArtStation and istock were scraped for data under the pretext of Fair Use, which allows for mass scraping of internet data for research purposes. Fair Use is like the one law that, for the most part, protects artists from the disneys and nintendos of the world. I wouldn't be able to glitch video games without it. Emulated videogames wouldn't exist without it either.

So the question is, why are corporations allowed to use Fair Use as a cover for developing privatized pay-to-play services? People who know a thing will point out that Stable Diffusion is open source, and that's great, but why are privatized services allowed to be built on open source infrastructure? Especially when this tech hasn't been properly vetted for racial biases, pr0nography, etc.
Yes, it's shitty, but these aren't arguments against AI tech; they're arguments against juridical structures under capitalist regimes.
259 notes
makiruz · 1 year
Text
Oh wait, I kinda don't like the "stolen artwork for AI" argument; that's not to say I don't care about artists getting robbed of their labor, or outright getting their artwork reposted without credit; but the argument around AI art feels wrong, and I think I now know why: it's because I hate copyright
Let me explain from the beginning: AI art is machine learning. A machine is trained on how to do things - in this case, what art is - and then it does it; to train an AI you've got to input data - in this case, art; you show the machine art, it learns what art is, it reproduces it later. So naturally the data needs to come from somewhere, and right now they're using images available online; and here's where it becomes a problem: many artists are worried because this technology can be used to replace them in professional settings, and bad-faith actors can use it for fraud; so they feel AI is stealing their work. But I don't think it's stealing; I think what the AI is doing is using the collective knowledge of humanity to advance technology, and that's not a bad thing. I don't think the Internet Archive and Google Books making digital copies of all books in existence is a bad thing either
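(To put "training" in concrete terms, it looks something like this toy sketch - not any real art model, just the general shape in numpy: the images pass through the loop, and only a set of learned numbers changes.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "dataset": 100 fake 8x8 grayscale images, flattened, each with a
# made-up numeric label. Purely illustrative.
images = rng.normal(size=(100, 64))
labels = rng.integers(0, 2, size=100).astype(float)

weights = np.zeros(64)  # the entire "model": 64 learned numbers

for _ in range(200):
    preds = images @ weights                          # current guesses
    grad = images.T @ (preds - labels) / len(labels)  # error gradient
    weights -= 0.1 * grad                             # nudge toward the data

# After training, `weights` encodes statistics derived from the dataset;
# the images themselves are not stored inside the model.
```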
The reason this is about copyright is that copyright is supposedly about protecting creators' rights (which it doesn't do, but that's a bigger discussion); and most works on the internet have copyright, and the owners of that copyright, in this case the artists, feel their copyright is being violated. I'm not sure if machine learning really constitutes copyright violation any more than using existing artworks as reference does, but honestly I do not care; copyright gets in the way of creative pursuits over and over, just look at the issue of music sampling, so I do not care if AI is using copyrighted material for training
I care if people are using AI to replicate another artist work without permission, I care if companies use AI to underpay artists, I do not care if technological developments and artistic project use copyrighted material as reference
118 notes
therobotmonster · 1 year
Note
i wondered if you had anything i could read about your perspective on ai art stuff. im wary of it as an artist whos had trouble finding work, some of the jobs i would normally do as a contractor are being replaced with ai that in some cases very closely copies specific peoples labor, and the wages are dropping and youre expected to deliver much quicker to compete with the machines, which is my main issue. in a vacuum, it would be a cool tool, but i worry about making rent.
I have a lot of discussion about it in my posts on deepdreamnights.tumblr.com, under the fold on various posts. My stances on IP have been consistent for a very long time. I think copyright terms need to be slashed, fair use is essential, and that people do not own culture.
I'm not going to lie: jobs will be lost over this tech. It is to no one's advantage to pretend that a technology this powerful won't change the landscape. The advances in computer-assisted animation put hand animators out of work, but they also opened up the possibility of small teams doing independent animation online. I think we'll see similar outcomes here.
I'm also an artist who works on commission. I am mainly a 3d modeller and graphic designer, but I do digital painting, photomanipulation, character design. The AI hasn't come for 3d quite yet, but it's coming. Graphic design has been very automated for quite some time. I'm not a carpetbagger or an outside invader, I'm a working artist, in the trenches, adapting to the new tools.
By working AI into my workflow, I've been able to ramp up my abilities. I can mock up mood pieces and concepts with AI, offer a client far more options than I'd have the time to do on spec, and then develop those ideas manually and with AI assistance.
I've used image prompting to work up wild variations on my own character designs to break through artist block and get a quick glimpse of whether a direction is interesting to go before putting my increasingly stiff, pain-riddled fingers to drawing it out by hand. There's opportunities there, and a trained artist with AI as a tool is going to outstrip an artistic layperson with the same gear.
There's also going to be opportunities to sell your abilities on the things that AI does poorly. Consistency, responsiveness to client feedback, cohesion with aspects like narrative. Symbolism and visual wit. Getting it right, or close to, on the first attempt.
That's part of why I share my processes and my prompts. I want people to have an honest understanding of what the tech is capable of, and actual facts about how it works. I hammer home how everything is curated, because while I have posted hundreds of AI generated and assisted pieces...
[Image: a selection of the author's AI-generated and AI-assisted pieces.]
Those are the cream of the crop.
I hope this helps.
136 notes
thotpuppy · 7 months
Note
If you’re so vehemently against ai fanart what’s your take on fanmade graphics and edits that use stolen pictures from the internet? These fans didn’t go out with their camera to capture the images themselves, they searched through Pinterest and google and found photos taken by real photographers and, without asking permission, stole these images to edit and create into something else. Yet this practice is widely accepted amongst fandom, but the second ai is involved it’s akin to murder? Even when the fans have clearly uploaded said ai generations into photoshop to edit them first? Seems like a pick-and-choose double standard to me.
Okay first of all, let's address the fact that - going by the language you've used here - you've already decided how I feel about it, so why bother the pretense of "asking"? Dishonesty breeds Discontent. Don't lie to someone's face and expect them to be kind, yeah?
Secondly, almost every single person I know who works with image manipulation uses assets they DO have the rights to, outside of specific actors' likenesses. There are millions of photos, graphics, illustrations, paintings, etc. that are open for free personal AND commercial use allllll over the internet, and people use them liberally. And, actually, many, MANY of these people DO go out and take their own photographs, so I don't know where your misconception is coming from.
And, the big kicker - they also don't lie about it. They say when something is an edited photo, if it's not obvious.
Stolen images being used in edits is NOT widely accepted and is in fact generally pretty damn frowned upon in most art circles, and I absolutely don't support use of them.
Every single instance of fanart is a rocky road as far as IP infringement goes, but don't put fanartists on the same level as the industry professionals providing celebrity model photos. Unlike in AI and Art Theft, when those photos are used that's bringing attention to the figure in question, not taking it away from an actual creator.
Lastly, where the FUCK do you get off saying ANYONE is comparing AI use to murder? Persecution complex much? I get it, you wanna be a victim so bad, but you're the one stealing from people at YOUR industry level.
AI in general has a metric fuckton of potential to be something genuinely useful to artists of all walks, but the CURRENT industry is too unethically sourced. We need to get control of the market, get stolen works (including, once again, STOLEN LEAKED MEDICAL RECORDS) out of the training data, even if that means starting over from scratch.
Also, we need to deal with the bullies who think it's okay to do shit like spamming a machine learning program with a single artist's work in order to harass them off the internet, dragging voice actors who have asked not to have their voices used into harassment campaigns, or lying to celebrities by selling them commercial rights to ML-generated fanart, which right now, they legally CAN'T DO. Because the copyrightable legitimacy of AI/ML works IS currently in debate in courts in the US. So.
Maybe instead of assuming everyone is out to get you, Anon, try not being a douchebag who steals from fellow fanartists? And if you're gonna steal anyways, at least be honest and don't lie to people by trying to convince them it's actually a digital illustration.
We can see the weird, fake blending. We can see the extra fingers, or utensils clipping through plates, or hands disappearing into heads, or shoes that don't end where shoes end. And while I'm not going out of my way to confront anyone about it, I'll just stay here on my own blog, blocking people who post AI, I am not the only person who is angry and disgusted at the lack of integrity and blatant disrespect.
14 notes
westeroswisdom · 8 months
Text
George R.R. Martin and a number of other authors have filed a class action suit against OpenAI.
This suit is being fought over copyright law.
The complaint claims that OpenAI, the company behind viral chatbot ChatGPT, is copying famous works in acts of “flagrant and harmful” copyright infringement and feeding manuscripts into algorithms to help train systems on how to create more human-like text responses. George R.R. Martin, Jodi Picoult, John Grisham and Jonathan Franzen are among the 17 prominent authors who joined the suit led by the Authors Guild, a professional organization that protects writers’ rights. Filed in the Southern District of New York, the suit alleges that OpenAI’s models directly harm writers’ abilities to make a living wage, as the technology generates texts that writers could be paid to pen, as well as uses copyrighted material to create copycat work. “Generative AI threatens to decimate the author profession,” the Authors Guild wrote in a press release Wednesday.
The suit makes reference to attempts by OpenAI to complete ASoIaF artificially.
The suit alleges that books created by the authors that were illegally downloaded and fed into GPT systems could turn a profit for OpenAI by “writing” new works in the authors’ styles, while the original creators would get nothing. The press release lists AI efforts to create two new volumes in Martin’s Game of Thrones series and AI-generated books available on Amazon.
This is not the first effort to keep GPT systems from pillaging works of literature.
More than 10,000 authors — including James Patterson, Roxane Gay and Margaret Atwood — also signed an open letter calling on AI industry leaders like Microsoft and ChatGPT-maker OpenAI to obtain consent from authors when using their work to train AI models, and to compensate them fairly when they do.
At least in the United States, the law has not kept up with digital technology.
@neil-gaiman
13 notes
mariacallous · 10 months
Text
You know it’s bad when the cocreator of The Matrix thinks your artificial intelligence plan stinks. In June, as the Directors Guild of America was about to sign its union contract with Hollywood studios, Lilly Wachowski sent out a series of tweets explaining why she was voting no. The contract’s AI clause, which stipulates that generative AI can’t be considered a “person” or perform duties normally done by DGA members, didn’t go far enough. “We need to change the language to imply that we won’t use AI in any department, on any show we work on,” Wachowski wrote. “I strongly believe the fight we [are] in right now in our industry is a microcosm of a much larger and critical crisis.”
On Thursday, that crisis hit another major milestone when the Screen Actors Guild—American Federation of Television and Radio Artists (SAG-AFTRA)—went on strike. Like the Writers Guild of America, which is also on strike, one of the biggest disputes was over AI. Leading up to the strike, one SAG member told Deadline that actors were beginning to see Black Mirror’s “Joan Is Awful” episode as a “documentary of the future” and another told the outlet that the streamers and studios—which include Warner Bros., Netflix, Disney, Apple, Paramount, and others—“can’t pretend we won’t be used digitally or become the source of new, cheap, AI-created content.”
A few weeks ago, I wrote about the WGA strike and its parallels with the Luddite labor movement. Like the Luddites, writers worry about new forms of automation taking their jobs, but also aren’t anti-tech hard-liners. If AI tools could be used to help writers—to, say, drum up new names for some sci-fi planet—they could serve a purpose without threatening anyone’s livelihood. If writers could be trained to use large language models as tools, that’s one thing. But if they’re used in lieu of writers, or used to write scripts that humans need to fix for lower fees, that’s a problem, the WGA argues. Ultimately, they want a say in how AI gets used in filmmaking.
Actors want that, too. But the way AI could impact their work looks very different. Unlike writers, actors can’t necessarily be trained to use those tools to produce their work—the AI was trained on them. Yes, if generative AI creates, say, a scene in a film, actors will have to be hired to give those performances, but it’s easy to see why they want protections on the use of their likenesses—and are willing to strike to get them.
Hollywood’s glitzy stars taking a stand to keep AI in check feels like a turning point, especially this week when the US Federal Trade Commission also launched an investigation into ChatGPT maker OpenAI. The FTC is looking into OpenAI’s data collection practices and its potential to give consumers bad information, but these things happening at once create a sense that AI is about more than just asking ChatGPT to write poetry or getting Stable Diffusion to draw a fish on a bicycle.
Though AI’s potential to impact human labor has been a topic of conversation for months, in recent days those conversations have begun to bubble over across industries. This week, the WGA East slammed G/O Media over its use of AI, following a Star Wars article that appeared on Gizmodo full of errors. The union called AI-generated articles an “existential threat to journalism” and noted the similarities between journalists and the striking screenwriters. Meanwhile, on Monday, comedian Sarah Silverman became the face of a pair of class-action lawsuits against OpenAI and Meta, accusing the companies of copyright infringement for allegedly training their AIs on her book The Bedwetter. Hulk actor Mark Ruffalo backed her, saying it “will most likely become a landmark case.”
Will any of this stop the rise of the bots? No. It doesn’t even negate that AI could be useful in a lot of fields. But what it does do is demonstrate that people are paying attention—especially now that bold-faced names like Meryl Streep and Jennifer Lawrence are talking about artificial intelligence. On Tuesday, Deadline reported that the Alliance of Motion Picture and Television Producers, which represents the studios, was prepared for the WGA to strike for a long time, with one exec telling the publication “the end game is to allow things to drag on until union members start losing their apartments and losing their houses.” Soon, Hollywood will find out if actors are willing to go that far, too.
15 notes
beardedmrbean · 8 months
Text
US authors George RR Martin and John Grisham are suing ChatGPT-owner OpenAI over claims their copyright was infringed to train the system.
Martin is known for his fantasy series A Song of Ice and Fire, which was adapted into HBO show Game of Thrones.
ChatGPT and other large language models (LLMs) "learn" by analysing a massive amount of data often sourced online.
The lawsuit claims the authors' books were used without their permission to make ChatGPT smarter.
OpenAI said it respected the rights of authors, and believed "they should benefit from AI technology".
Other prominent authors named in the complaint include Jonathan Franzen, Jodi Picoult and George Saunders.
The case has been brought to the federal court in Manhattan, New York, by the Authors Guild, a trade group in the US working on behalf of the named authors.
The filing accuses OpenAI of engaging in "systematic theft on a mass scale".
It follows similar legal action brought by comedian Sarah Silverman in July, as well as an open letter signed by authors Margaret Atwood and Philip Pullman that same month calling for AI companies to compensate them for using their work.
A spokesperson for OpenAI said: "We're having productive conversations with many creators around the world, including the Authors Guild, and have been working co-operatively to understand and discuss their concerns about AI.
"We're optimistic we will continue to find mutually beneficial ways to work together."
AI 'displacing humans'
The case argues that the LLM was fed data from copyrighted books without the permission of the authors, as evidenced in part by its ability to provide accurate summaries of them.
The lawsuit also pointed to a broader concern in the media industry - that this kind of technology is "displacing human-authored" content.
Patrick Goold, reader in law at City University, told BBC News that while he could sympathise with the authors behind the lawsuit, he believed it was unlikely it would succeed, saying they would initially need to prove ChatGPT had copied and duplicated their work.
"They're actually not really worried about copyright, what they're worried about is that AI is a job killer," he said, likening the concerns to those screenwriters are currently protesting against in Hollywood.
"When we're talking about AI automation and replacing human labour... it's just not something that copyright should fix.
"What we need to be doing is going to Parliament and Congress and talking about how AI is going to displace the creative arts and what we need to do about that in the future."
The case is the latest in a long line of complaints brought against developers of so-called generative AI - that is, artificial intelligence that can create media based on text prompts - over this concern.
It comes after digital artists sued text-to-image generators Stability AI and Midjourney in January, claiming they only function by being trained on copyrighted artwork.
And OpenAI is also facing a lawsuit, alongside Microsoft and programming site GitHub, from a group of computing experts who argue their code was used without their permission to train an AI called Copilot.
None of these lawsuits has yet been resolved.
13 notes
thenyanguardparty · 5 months
Note
I ask this in good faith, because I disagree and want to understand your point of view. You quoted the claim:

"so called 'ai art' requires no effort to produce compared to Real Art"

and responded:

"you are putting the labor of artists you recognize as such above those of the people responsible for programming, training, and curating the end products of the machine learning algorithm"

Is it really bad faith to say that someone using a tool they had no part in creating is low effort? I don't really oppose AI, because I get why free art (like, not copyrighted) is important. However, I don't respect AI art, and I don't think that everything is art. I think self-expression is the heart of art, and that people can make "art" solely in the name of money-making consumerism; even when they put effort in, I don't consider it art either. I want to know why I'm so stuck up, what is it that I'm not getting, if you can put it into other words than "it just is art". I'm captivated and in love with your response style and want to know more about this specific topic. Mwah
if someone is using "ai art" purely for money making they wouldn't be otherwise paying someone to do self-expression, they would be paying someone to make the money making commercial "not art" you speak of. it's the same as complaining about the power loom on behalf of the artisanal weaver
as far as "real art" goes, what is the argument against the use of machine learning for self-expression that does not apply to photography, digital art, collage, music sampling, etc.? and eventually it all comes back to duchamp's fountain. what is real art? it's about self-expression, you say. well, one can do self-expression with machine learning, just as one can with photoshop, with photography, with digital tools, etc. where do you draw the line without taking a conservative cultural stance?
also my answers were like that because i was just finishing reading umineko episode 5 tonight
6 notes
pizzaronipasta · 8 months
Note
Hey, I just wanna say that, as a disabled person who at first vehemently disagreed with you, reading your pinned post really helped me understand your perspective and I can't, in good faith, say that I entirely disagree with you. AI art could be a very good creative resource for people, and I also disagree with a lot of art snobbery surrounding 'real Art' anyway. BUT if AI art programs were trained on datasets comprised of only the art of consenting artists, I don't think this would be as big of a debate. The only thing I have an issue with is you blaming the proliferation of data scraping on 'bad actors' when it feels like, at the moment, that 'bad actors' are intrinsically tied to what AI art is, and that those "bad actors" are controlling the entirety of neural network learning. Imo as of right now AI art is just data theft and if/when we reach the point where that isn't the case the conversation that you're trying to have can be expanded upon, but until then I don't see the majority of artists agreeing that copyright theft is a valid way to help the disabled community. And I personally disagree with stealing other people's art to further my own creative abilities.
First of all, thank you very much for being polite and presenting your thoughts in good faith.
I understand where you're coming from; the AI industry as a whole is deeply fraught with ethical issues. However, I don't think that art theft is one of them. You see, digital art only exists as abstract data. This means it could only be "owned" as intellectual property under copyright law. Intellectual property is purely a legal construct; it was invented for the purpose of incentivizing innovation, rather than to uphold the philosophical principles of ownership. I find that it makes very little sense that people should be able to own ideas—after all, it's seemingly human nature to adopt others' ideas into our own viewpoints, and to spread our ideas to others. In fact, there is an entire field of study, called memetics, dedicated to this phenomenon. So, I don't think that data scrapers are guilty of art theft. There is, however, an argument to be made that they are guilty of plagiarism. Scraped AI training databases generally do not credit the original authors of their contents, though they do disclose that the contents are scraped from the internet, so they aren't exactly being passed off as the curators' own work. Make of that what you will—I'm not really sure where I stand on the matter, other than that I find it questionable at best. Either way, though, I believe that training an AI makes transformative use of the training data. In other words, I don't think that training an AI can qualify as plagiarism, even though compiling its training dataset can. Rather than art theft or plagiarism, I think the biggest ethical problem with the AI industry's practices is their handling of data. As I mentioned in my pinned post, the low standard of accountability is putting the security of personal and sensitive information at risk.
Feel free to disagree with me on this. I acknowledge that my stances on IP certainly aren't universal, and I know that some people might not be as concerned about privacy as I am, but I hope my reasoning at least makes sense. One last thing: a lot of AI development is funded by investments and grants, so refusing to use AI may not be very impactful in terms of incentivizing better ethics, especially considering that ethics-indifferent businesses can be a fairly reliable source of revenue for them. That's why I think seeking regulation is the best course of action, but again, feel free to disagree. If you aren't comfortable with the idea of supporting the industry by using its products, that's perfectly valid.
#ai
7 notes