#the ways the people misunderstand copyright law
Text
dear lord
#the ways the people misunderstand copyright law#there is no de minimis standard for copyright#NONE#and to say that search engine scraping is the same as scraping for generative AI and therefore fair use... dude no#fair use has to be non competitive with the original rights holder#and generally non commercial#you cannot say in good faith that these plagiarism machines are non competitive#they are actively promoting and going after the ability to make output in a specific artist's style#AND THEN THEY'RE CHARGING PEOPLE MONEY FOR IT#and the ones that aren't /currently/ will be eventually#this isn't a tool for FINDING someone's creative work the way a search engine is#it's a tool for OBSCURING the author's involvement#and then promoting someone saying copyright should only last a decade??? WHAT??#that's shorter than a patent and patents are meant to be the shortest IP term by design#we used to have shorter copyright terms in this country and guess what? the disneys of the day didn't suffer#the artists were the ones who got screwed over#and to say collective bargaining is going to fix the issue is... well... not uh... supported by history#look up the formation of ASCAP#how they went on strike#and the creation of BMI#understand that artists had their careers entirely derailed as a result and lost their livelihoods because of corporate greed#and like I don't love the ways that sample clearance has evolved#(especially thinking of Fatboy Slim not getting any royalties from The Rockafeller Skank)#BUT it is a system that could work#OR we look at something like a mechanical#where artists are just automatically paid for use of their work in a dataset#but like#just a massive misunderstanding of the current state and history of copyright law there#and just for the record YES SONNY BONO WAS A MISTAKE AND LIFE + 70 IS EXCESSIVE#but a single decade?? just say you hate working artists and be done with it
4 notes · View notes
hbmmaster · 2 years
Text
the thing with ai art is that while I agree that there are many ethical issues with it, the way people tend to talk about it has been full of complete fundamental misunderstandings of what ai art actually is and how it works. I want to hashtag support real artists but I don't want to align myself with people who think ai art is literally sampling images from the internet in real time (people explain it like it's a collage so often but it's really nothing like a collage) or people who think the "solution" to ai art is more strict copyright law
1K notes · View notes
carriesthewind · 1 year
Text
Since I have waded into the discourse, a few clarifications (below the cut b/c I've taken up enough of people's dashes):
I am not on the side of the publishers in this case. They suck too; in general, the IA is a good organization and the big publishers are the bad guys! In fact, the publishers are also (a) bad guy in this case - they will ABSOLUTELY try to use this case to increase the limits on libraries and digital lending. They are profit driven monsters who would destroy every library on earth if they could.
That's actually part of why I'm so mad about how this case is being talked about! Because by their own actions, the IA has put this case in a position where the IA literally cannot win unless their "National Emergency Library" (aka just stealing books) is held to be legal. That doesn't mean it's the worst-case scenario if they lose their appeal - for example, an appeals court could issue a narrow ruling on the facts of the case. It just means there is no good scenario where they outright win. But, if they appeal their very bad case with its very bad facts, that increases the risk of creating precedent that limits libraries and digital lending. If you think it's worth the risk, that's fine - but ignoring the actual risks and likelihood of success isn't "supporting libraries;" it's acting based on ignorance (at best).
Digital publishing and lending, as it currently legally exists, really really sucks. There is a reason why, although many authors and author organizations dislike the IA's "Open Library," other authors like it and have written in support of it!* There is a reason why, although I have linked to author criticism of IA and of its "Controlled Digital Lending" and stated that I think it clearly violates existing copyright law, I have not stated (and will not state) a personal opinion on whether I think CDL (as it exists in the one-to-one, owned-to-loaned theory, rather than how it was implemented in practice) is less undesirable as a copyright schema than existing digital copyright law. The IA and its supporters are right that current copyright law doesn't allow for the real ownership or preservation of e-books! They are right that publishers greedily limit libraries' and patrons' access to e-books, and they are right about how bad the current system is! (I do not endorse any praise for library e-reader apps and lenders in the notes of my posts. While I enjoy and support using them, from a library perspective, they also suck - specifically, the predatory charges for e-book lending suck libraries dry.) The IA is broadly on the right(er) side of this issue: but they still did a shitty, dumb thing, and they are deliberately misrepresenting the situation in an extremely offensive and potentially harmful way!
I already emphasized this in the original post, but just in case: I do not hate the IA. I do not want them to go away. I very much like a lot of the things they do and want them to keep existing! (Although I would add: if they actually believe their principles they should absolutely have set things up to preserve - at minimum - their archival functions as much as possible if they - as an organization - cease to exist.) But! I would also like, at a minimum, for them to stop deliberately misrepresenting the situation. Their stated mission is "universal access to all knowledge." If they cared about that mission - or if they really see themselves as librarians, and not just a database of products - they should maybe start making sure the public is getting accurate, useful, and meaningful knowledge about this case, instead of propaganda to inflate their own egos.
*Although I would be VERY curious to know what kind of briefing of the actual factual and legal situation some of the authors who signed onto that open letter received. Based on the snippets I read, some were clearly well-informed; many were just talking broadly about how great libraries are; and some indicated a serious factual misunderstanding of how libraries, the IA, and digital lending in general work.
500 notes · View notes
not-that-syndrigast · 1 month
Text
F1 and independent content creation
Recently, especially on TikTok, F1 content creators apparently got letters stating they have to change their branding, as in take ‘F1’ out of their username and stop using exclusive F1TV content. There has been quite an uproar around this, with many fans not understanding why Liberty Media would forbid free marketing for them and complaining about it, so I would like to explain the choices made and how this influences content creation far less than most assume.
First of all, I have not seen the letters myself, so I can only guess at what exactly they state, but as far as I can see, they only require content creators to keep copyrighted content, like the name or F1TV footage, out of their videos. It is not forbidden to continue posting, although I could see that changing with the current behavior of a few people.
A lot of people on TikTok claim that F1 content creators were the reason they got interested in F1, and that it’s a shame Liberty Media has benefited from this for years but now that it has all the fans, it tries to keep the independent content creators down. In itself this claim explains Liberty Media’s choices: F1 content creators have gotten big, with thousands of followers, to the point that they make money off F1 content, which is a whole legal affair in itself, but I’ll even skip that point (although I’d be very happy if someone with more knowledge of the law would explain it more closely). Liberty Media cannot control what these accounts post, which sounds good at first, but can quickly become really dangerous and annoying to F1.
Let me give you an example; we are going to imagine a girl now, let's call her Katie (for legal reasons, this is not framing but an example. As of 18.08.2024, none of these accounts or people exist to my knowledge). Katie is a big F1 fan, so she decides to make a TikTok account called ‘F1UpToDate’. She quickly gains thousands of followers by making videos about grands prix, silly season, you’ve all seen the videos. Well, now, Katie is not a saint, no one is, so she makes a video saying racist things. Said video gets millions of views and now Liberty Media has a problem; due to the account name, they are directly tied to anything problematic she posts or says, and many people won't understand that Katie is independent from F1. It sounds outlandish, but currently there are quite a few F1 content creators big enough to be confused with an official F1 account. Now Liberty Media has to put out press releases and explain that they have nothing to do with the post... Instead of waiting for the other shoe to drop, they are preventing anything like this from happening now, especially since no one can predict what bad things people could actually do.
Now I’ve seen quite a few people also confused: why not just employ F1 content creators? They are well loved by the fans and could attract more fans, but people underestimate how expensive that actually is. As far as I see it, there are only two ways that would work, and in both cases the content creators would lose the independence that makes their videos so fun. Either every single video would have to be approved by Liberty Media's PR managers, which would mean fewer funny and spontaneous videos, or they would only be allowed to post from a script, which, again, would make them dependent.
In the end, if we go back to our example, had the account been called ‘MotorsportsUpToDate’, there would still be outrage if something bad was posted, but at least the content creator would have remained independent and Liberty Media unaffected. People will still be able to find content creators and they’ll still be able to attract new fans; a new username won’t change that.
To be honest, I'm even surprised Liberty Media let them get away with making money off their name for this long. Again, I'm not well versed in law, but I am aware that money creates these situations too.
But like always, I'd like to know what other people think of this, if you agree or maybe know more about the law than I do in this case.
29 notes · View notes
radiofreederry · 2 years
Note
Oh my god of course u support AI art lmfaoooo
i wouldn't even call my stance one of support, as I've said before I'm neutral toward it and don't use it in my creative pursuits. What I oppose is people misunderstanding how AI art works, misunderstanding how fair use doctrine works, and leveraging that misunderstanding in such a way that they end up supporting a draconian expansion of copyright and intellectual property law which would be disastrous for all forms of creative expression and in particular derivative works. Oh, and I also oppose the individualist gripes of aspirational petty bourgeois who have decided to use AI art as an outlet for their anxieties regarding their upward mobility and class position in the form of Luddism.
199 notes · View notes
mxoleander · 8 months
Text
On Palworld using AI art;
short answer, I can find zero evidence of this (but feel free to prove me wrong) but it's not impossible
long answer, this seems to be a mix of misunderstanding and their past history. (read more so I don't clog feeds with my wall of text)
The CEO of PocketPair has made posts on Twitter saying how impressed he is with AI art generation, that is true, but in the post most people point at, he credits the source in the same thread. It was made by a Buzzfeed employee (minimaxir on Twitter) using ruDALL-E; he created a series of fakemon, and the CEO commented that they could fool him.
PocketPair as a company is also pro-AI, and they most certainly have made a game using AI art before, but it does not appear Palworld is one of them. I feel like explaining the game adds context, and then you can come to your own conclusions as to whether its use is ethical or not. AI: Art Imposter is a game based around AI art generation: each player is given a theme and must give the built-in image generator a prompt that matches that theme; however, one of you is an imposter who does not know the theme and must feed the generator a prompt that is either vague enough or by pure luck matches the theme, and try not to get voted out. This game is similar to the popular game Suck Up!, which involves the use of an AI text generator, to the point where the website makes it extremely clear you are purchasing cryptocurrency to make the game work. My only gripe is we do not know where the art is coming from; it could be from the public domain or their own office, sure, or it could be stolen, which is honestly more likely.
The CEO did state once in 2022, "In about 30 years, the general public's perception of copyright may have changed considerably," in reference to AI art generation. I think it's entirely possible, given his past statements, that AI was used in the design process, not the direct assets. This is again a subject of heavy debate and a matter of opinion regarding ethics, as to whether or not you consider inspiration from AI art to also be a form of stealing. If you've ever seen "stolen" or "fixed" AI art adopts, this process is more than likely what was done here. I think this is what he meant in regards to copyright, as the original question was whether, if you feed the same image into a generator enough times, it is then considered an original idea; I could be wrong however.
I get the game with my Game Pass subscription either way, but I understand the need to have this sort of information before making a purchase. I personally do not like the use of AI, and I tried to remain as neutral as I could by providing other similar examples even if I do not personally agree with them, but I'm sure my bias will shine somewhere. As an artist, AI is definitely something I worry about, but I can see how it can be used as a proper tool once more laws are put into place regarding proper credit and payment to the artists whose work is used.
11 notes · View notes
dm-clockwork-dragon · 2 years
Note
What are your thoughts concerning the OGL 1.1 possibilities, and your work? Will you stick with 5/5.5e, or be looking at a new system?
Hoo Boy... this is gonna be a can of worms.
I have had the chance to actually read the OGL 1.1 (or what is to my knowledge the most recently sent out version), and I will link that here for anyone who wants to read the Actual Document, instead of sensationalized headlines and YouTube reactions.
There's certainly some stuff in there that will be a pain in the ass - attaching a copy of the OGL to the end of all my content, and making sure to document things better in case by some miracle I ever start making money off of this stuff. But by and large, it's not going to affect me, and I think that there is a lot of misinformation and panic going around the community, because so many of the people blowing the whistle aren't familiar with how to read legal documents, or haven't had the chance to read the new OGL for themselves.
I could be wrong on that, and I'm not coming out in support of the changes or anything. But after reviewing the document with the help of a legal eye, I don't think it is going to be as world-ending as people are passing around.
For one, a lot of people misunderstand what the OGL actually is. It is not a blanket document that covers all content. While it does dissolve the old OGL, it does not dissolve the Fan Content Policy, which is what most homebrew produced by individuals actually falls under. It also isn't the only license agreement WotC issues - it's just the default. Despite the reactions of some third party publishers like Kobold Press, there is nothing stopping those who bring in more than $750k gross a year (because it is $750k, not $75k, like I have seen thrown around in a few places) from reaching out for an individualized, custom license, and the wording of the new OGL actually encourages that. The big 25% in royalties is primarily a tactic to force that, actually. And even then, royalties are only paid on revenue beyond $750k, so if you bring in $750,001, you would owe exactly $0.25, on the $1 extra you made. WotC is also giving publishers a full year without any royalties, while they figure out their own licensing agreements or change models - it's not an immediate demand for money.
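Since the marginal structure is the part people keep getting wrong, here's a quick sketch of that royalty math in Python. This is just an illustration of the threshold-and-rate scheme as I've described it above, not official WotC terms; the function name and numbers-as-code are my own:

```python
# Sketch of the OGL 1.1 royalty math as described above: the 25% rate
# applies only to gross revenue *above* the $750k threshold, never to
# the whole amount. Names here are illustrative, not from the document.
THRESHOLD = 750_000
RATE = 0.25

def royalty_owed(gross_revenue: int) -> float:
    """Royalty due on the portion of revenue beyond the threshold."""
    return max(0, gross_revenue - THRESHOLD) * RATE

print(royalty_owed(750_001))    # $0.25 - one dollar over the line
print(royalty_owed(750_000))    # $0.00 - at the threshold, nothing owed
print(royalty_owed(1_000_000))  # $62,500 - 25% of the $250k excess
```

Note it's a marginal calculation, like tax brackets: crossing the threshold doesn't suddenly make your whole revenue royalty-bearing.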
The other big concern I see going around is that this OGL effectively gives WotC ownership of any content produced under it. It explicitly does not. The clause people are up in arms about gives WotC a non-negotiable license to content created under the OGL, but it expressly leaves copyright and ownership in the hands of the creator. That's a big distinction. While WotC may not be required to pay anyone royalties for content published under the OGL, they are required by law to credit appropriately, which opens them up for a whole host of legal trouble if they try to steal everyone's content as is being claimed. If their goal was simply to steal our content, they would not have included this distinction.
This same clause (or actually a substantially broader one, which does give WotC ownership and copyright) is actually already a part of the EULA on DMsGuild and DriveThruRPG. While the wording is scary, and I would certainly like to see WotC better define the terms and intent here, the reality is that such clauses are pretty standard fare in situations like this, and primarily used by companies to prevent frivolous lawsuits from every creator who publishes something similar to what the license holder already had in the pipeline.
It's important to remember that this OGL is only as strong as WotC's intent and ability to enforce it. Hasbro is a big company, with a lot of weight to throw around, but that weight also gets in the way. They aren't going to come after individual creators for the same reason that Bethesda doesn't viciously hunt down everyone who mods Skyrim - it's just not profitable to do so. This OGL is primarily targeted at big name publishers who have been abusing the lack of constraints on the old OGL to make everything from full sourcebooks to royalty free merchandise and TV shows. (A big reason we are seeing it now, in fact, is that the Critical Role TV show we were supposed to get from Netflix is locked up in a legal battle over this.)
At the end of the day, I'm probably going to receive a lot of hate for not jumping aboard the bandwagon here to roast WotC on a spit. But I am also not giving them my support. There are plenty of things I don't like about this update: it's incredibly vague in some areas, and especially does a poor job of defining what constitutes use of licensed or unlicensed materials. For example: if I reference a core spell not included in the SRD, or put it on the spell list for my warlock patron, am I violating the OGL? The terms in this OGL would suggest so, but I highly doubt that is the intent, since that ruling would prevent creators from including things that encourage people to buy more content from WotC.
My main point here is that I think a lot more people need to actually read the document, and that I don't see this as any sort of game-ending event for D&D. But I've also seen this shit coming since I started, and personally built my "Brand" and business in such a way that the new OGL is going to have very little effect on me or my content, so I may be incredibly biased. Mostly, I just want people to do their own research, rather than jumping on the bandwagon of cancel culture. This could potentially have a lot of far reaching ramifications, but I don't think they are the ones people are up in arms about.
And if anyone thinks I'm the devil for asking them to read and form their own opinions... Well, the 15th-century Catholic Church would agree.
36 notes · View notes
Text
More depth to Jimmy, specifically why the others rag on him all the time; they view him as disrespectful.
This isn't Jimmy hate btw, just...I wanted to add more reason for why they bully him. It's misunderstandings all around and eventually they get fixed and the bullying turns into small teasing and stuff
Jimmy was raised in Tumble Town until he was 14, at which point he moved to a different city altogether; I'll have to expand on the politics and kingdoms and cities and stuff later
He eventually moves back because he discovered the city's police are corrupt and didn't want a part in that. He takes up the job of sheriff, well, one of them... one of two. P.S. the sheriffs only need to know Tumble Town's laws
So while he knows all of Tumble Town's laws he doesn't fully know the others'. He doesn't bother to read up on them either, because they have to be similar to Tumble Town's, right???
And the other empires figure that since he is in law enforcement, he knows their laws. So it's all a big misunderstanding.
So he violates laws that the other empires have by accident, such as going to dawn past midnight
He also tries to press his laws onto the others, because he thinks they all share a law code, which they don't, but he doesn't know that yet. So like when he bans the toys, Sausage listens to him because the shop is on Jimmy's land, and so if it's against his law then okay, he'll listen
Then Jimmy said shit about his son and oops. It looks like Jimmy broke his nose!
Some other reasons are, he tries to copyright everything related to being a Sheriff, even if it's not inspired by him
Plus disrupting a goblin holiday, Would-Be Warden-Killing March Monday, which is just the goblin king giving a warden to the other empires to kill as a way to reduce the warden population and say 'hey, this is what we do, wanna join??' Now an empire can refuse to participate with a valid reason (example A: Shelby, because she's pregnant!)
Not to mention that destroying a 6-year-old's toy because it's you in toy form doesn't exactly win you any favors among family-driven people like Shelby, Katherine, Scott, Fwhip, Lizzie, Sausage, and Joel
Also he's taken their criminals before, because he has a jail better suited for them but he doesn't ask and just assumes they know. The others see this as 'you are incompetent and I'm better at this'
Haven't fully thought it out. But the tl;dr is: Jimmy doesn't read up on the others' laws because he assumes they share laws, and so breaks a ton of them and tries to force his onto them (by accident), so they get back at him by bullying him and breaking his laws
Eventually, Shelby snaps one day and yells at Jimmy and has to be comforted by Katherine. And Jimmy, quietly, admits he thought they had the same laws. Jimmy stays up all night writing apology letters to each person and promises to read up on their laws and stuff
16 notes · View notes
snickerdoodlles · 1 year
Text
just stupid rambling so I'm not annoying friends' DMs lalala anyways
been trying to dissect why I'm always so beyond irritated when I see news articles/socmed chatter on AI and I haven't really come to any solid conclusion on it but I think I can kinda boil it down to two things:
first is general misinformation. it's just...so incredibly frustrating to see people wail about the ~threat~ AI poses to our society in conjunction with the huge fake news issues, while also spreading around news articles/essays/etc that are so poorly researched they're basically fake news as well. it's not even hard stuff to fact check! things like AO3 being scraped for AI training data (completely false, there are multiple sources against it but the logic doesn't even track) (isolated issue, but there's a lot of misunderstanding and misinformation on what was scraped for training data period), attempts to conflate the multiple copyright lawsuits as the same thing despite each dealing with different AI tech and sourcing issues (if these cases were similar, they would be linked together, just like the two authors' suits were), AI putting 4000 US workers out of jobs in a month (everything references one news article with that headline, but said news article cites one economy report which never states anything of the sort), the general preconception that writers and actors are striking over AI (they're not. they're striking over residuals and lack of pay. AI is a part of discussions as they're covering multiple bases to prevent further exploitation of workers, but that's a distinctly different issue and it's wrong to place a stronger emphasis on AI than the exploitation. AI can't replace writers (though they can't afford the months of studios attempting to replace them with it) and they will want to at least ensure proper compensation for their content being used to train AI if they have to concede that point. 
there are already strict California state laws protecting individuals' right to their image/voice/person, and while protective and preventative measures are currently state legislation and only present in about half the states, there are still federal level protections and precedents for people's right to publicity, and the US Senate is also currently drafting policy that would enact protections against use of personal image/voice/etc, open up more opportunities for recourse for people who have had their images stolen, and enact severe punishments on violators to thoroughly discourage the possibility of abuse. actors don't have to fight for this right in of itself because the LAW already covers them). and there's more than just these, but that last one segues into what is my second but I think bigger complaint:
people seem so like...enamored with the idea of individuals sticking it to the system that they're ignoring the very real systems that can make a difference. like. okay, 1) copyright won't solve AI issues. none of the current cases have the capability of making such a grand change, but that's a long discussion that will just detract from my point. but also 2) a system that's only reactionary is BAD. it shouldn't be on copyright lawsuits (or two worker strikes either for that matter) to stop AI abuse, which is why it's not. there are multiple discussions going on in the US Senate right now to create...a whole battery of preventative measures really, and a system to regulate and enforce them. things like extensive safety tests conducted by multiple parties before AIs are released to market, ways to prosecute for deliberate spread of misinformation or abuse of private or copyrighted data, more transparency on training data and engines' capabilities, preventing and prosecuting the horrible abuse of outsourced workers (a la OpenAI and the Kenyan workers), better protections for people's personal and private data, ways to protect workers from companies exploiting them via AI, and more. and while what I was reading was the US Senate discussions, there's the emphasis that this has to be a worldwide effort and they're talking to multiple foreign governments. the politicians know they dropped the ball on properly regulating social media, they're sick of AI developers' bullshit excuses too, and they're working hard to do something about it and get ahead of the curve before people get hurt.
and like. besides the fact that it's extremely unfair to put so much pressure on individual cases to fix a systematic issue that's significantly larger and broader than the issue of scraped data, this is literally the way to stop AI abuse. preventative and proactive is far more important than reactionary. this is a way people can act and get involved directly.
I'm just so...annoyed every time I see people complaining that no one's doing anything about regulating AI, when there's a lot of very real and important work getting done. this is a not even slightly hopeless issue! that news is a lie! and sure, it's funny to imagine AI developers being forced to pay out millions or wipe their slates clean, but even if burning things to the ground was an option on the table (...it's probably not btws), it does nothing to address the issue that this tech exists and needs to be regulated to prevent human harm, so it's especially annoying to see so much like...triumph and gloating over lawsuits that haven't even properly begun and so little awareness of the policy work trying to break ground. gah.
also the "AI's detrimental because of fake news" complaints while engaging in apparently zero research or fact checking themselves. I'm extremely irritated over that 🤦
5 notes · View notes
Text
I'm an artist. Not a very good one, but not for lack of trying. I've practised for years and barely gotten any better, so the narrative of "anyone who likes AI generated art is just too lazy to learn real art" hurts quite a lot. I don't have the skill to draw what I want, or the money to pay another artist to do it. What I can do is draw something of my skill level, and feed it into an AI to get something better.
The way I see it, this concept itself isn't a problem. It's technological progress and it's fascinating! The problem is when people use an AI to make something and claim they drew it themself, or when the AI learns from somebody's art without their permission. Now, a lot of people claim it breaks copyright laws by essentially tracing different art pieces together, but that's a fundamental misunderstanding of how the program works. It learns and takes inspiration. Think of those story-writing programs. Do they steal works, do they copy and paste sentences from other people's stories to make their own? No. And it's the same basic neural network here.
But either way, whether it's copyright violation or not, that whole problem can be solved by having the AI fed only free-to-use images, which as far as I know was the original idea. Like always, the problem is shady people doing shady things, not the product they created. And like always, we can't stop thinking in black and white about it.
Maybe it will take away space for human artists in the future. But that's quite far off, and there will still be artists. There will still be a market for "real" or "authentic" art. Isn't it a good thing that more people have the ability to create? That anyone can bring their ideas to life by learning how to talk to a computer? Because it isn't as simple as pushing a button. It's a different skill, and it's one that I have whereas making my own art just isn't.
Instead of complaining and destroying the bad, unethical programs made by capitalist idiots, why aren't we asking for and promoting the ethical and helpful ones? This is a really cool thing that's being created! Don't tear down AI art as a concept, demand that it isn't fed your content if you don't want it to be!
2 notes · View notes
Text
if anything what this recent ai art scaremonger debate has proven to me is how little my fellow artists know about how ai actually works.
its fair to be concerned about your rights regarding your art being incorporated into an ai sourcebase. but you can do this whilst also not making up stuff about how ai is frankensteining bits of different artworks together when it produces imagery or that you can always see the original artists watermark on ai generated imagery or that mean nasty ~techbros~ are going around maliciously stealing peoples work on purpose or that ai is going to automate the entire visual creative industry. literally just massive misunderstandings of how data is gathered online and how the commercial-level art industry works
i also think its been incredibly dystopian to see artists respond to ai by calling for stricter copyright laws and tougher control of online data. ive even seen artists advocate for the visual art industry to model new ip laws after the music industry. the industry where small artists get sued by megacorporations for using generic chords in a song which just so happen to also show up in another song which said corporation owns all rights to. the industry where artists dont even own their own work. we want to emulate that industry? i don't even know how somebody would write up an "ai art" copyright law, but if they did, it would not be used just against ai art programs, but against small artists, digital artists, and archivists too.
i think what artists are actually angry about is how shitty this industry is. if our employers actually paid us a fair wage, valued our product, and assured us benefits and paid leave, then we wouldn't feel as threatened. but instead of asking ourselves why we're treated this way by capitalism we're grovelling at the feet of corporations to choose us and not the machines? if we owned our means of production and didn't have to work to survive then ai art would just be a cool new tool. (most professional artists are middle class kids whove never had to think about worker consciousness on a systemic level but whateva)
2 notes · View notes
becquerel · 2 years
Note
i dont think your comments were wrong but i think more people were jumping on landlady and sending nasty comments rather than defending her or speaking badly about the artist. idk, i dont think its really unreasonable for her to be upset and knowing that art culture is different in japan, i dont think it was an overreaction. especially bc shes made it clear she doesn't want like money and it seems like landlady herself doesnt plan on sueing or whatever. just the way some people are talking about her for being upset about and creating that boundary feels a bit weird to me. it doesnt feel like an unfair thing to be concerned about. i dont agree that using someones photo inherently makes the person not a "real" artist, but i can understand the frustration and the amount of people calling ppl braindead or just speaking so degradingly is really strange.
anyways, sorry, u didnt ask for this and pls feel free to delete it if it annoys u.
i have been called a retard about 3-4 times tonight. like i understand where ur coming from but people are unnecessarily hostile and condescending about this entire ordeal. not even mentioning all the comments about me being misinformed with bad intent :| or saying i misunderstand copyright law. no i fully understand copyright law. i just think its a broken system!
ive been saying from the beginning i think its completely reasonable for her to be annoyed and upset with people taking her photos and profiting off of them without contacting or crediting her! i think thats fine!! id be weirded out too!!
my ONLY CRITICISM is that i think trying to go full scorched earth and copyright fine this guy is a losing battle. we shouldn't be defending copyright in 2022 ffs.
anyway its just sorta annoying to have people @ me and only read the first two messages and be like. restate at me things i already said. ive said landlady was within her right to be upset since about the second message!
1 note · View note
jcmarchi · 3 months
Text
AI scaling myths
New Post has been published on https://thedigitalinsider.com/ai-scaling-myths/
AI scaling myths
So far, bigger and bigger language models have proven more and more capable. But does the past predict the future?
One popular view is that we should expect the trends that have held so far to continue for many more orders of magnitude, and that it will potentially get us to artificial general intelligence, or AGI.
This view rests on a series of myths and misconceptions. The seeming predictability of scaling is a misunderstanding of what research has shown. Besides, there are signs that LLM developers are already at the limit of high-quality training data. And the industry is seeing strong downward pressure on model size. While we can’t predict exactly how far AI will advance through scaling, we think there’s virtually no chance that scaling alone will lead to AGI. 
Research on scaling laws shows that as we increase model size, training compute, and dataset size, language models get “better”. The improvement is truly striking in its predictability, and holds across many orders of magnitude. This is the main reason why many people believe that scaling will continue for the foreseeable future, with regular releases of larger, more powerful models from leading AI companies.
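To make that "striking predictability" concrete: scaling-law papers fit the loss with a simple parametric form. Here is a minimal sketch using constants of the kind reported in the Chinchilla paper's fit; treat the exact values, and the sketch as a whole, as illustrative rather than authoritative:

```python
# Parametric scaling-law fit of the form L(N, D) = E + A/N^alpha + B/D^beta,
# where N is parameter count and D is training tokens. The default constants
# are of the kind reported in the Chinchilla paper's fit -- illustrative only.
def chinchilla_loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    return E + A / N**alpha + B / D**beta

# Loss falls smoothly and predictably as scale grows...
print(chinchilla_loss(1e9, 1e12))    # roughly 2.2 nats/token
print(chinchilla_loss(1e11, 1e13))   # roughly 1.9 nats/token
# ...but note: the fit only predicts loss. It says nothing about
# *which capabilities* appear at any given loss level.
```

The smooth curve is exactly the point of the next paragraph: predictable perplexity is not the same thing as predictable emergent abilities.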
But this is a complete misinterpretation of scaling laws. What exactly is a “better” model? Scaling laws only quantify the decrease in perplexity, that is, improvement in how well models can predict the next word in a sequence. Of course, perplexity is more or less irrelevant to end users — what matters is “emergent abilities”, that is, models’ tendency to acquire new capabilities as size increases.
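For readers unfamiliar with the metric: perplexity is just the exponential of the model's average negative log-likelihood on the true next tokens, so lower is better. A minimal illustration:

```python
import math

def perplexity(true_token_probs):
    """Exponential of the average negative log-likelihood the model
    assigned to each token that actually came next in the text."""
    nll = [-math.log(p) for p in true_token_probs]
    return math.exp(sum(nll) / len(nll))

# A model that gives probability 0.25 to every true next token is
# "as confused as" a uniform guess over 4 options: perplexity of ~4.
print(perplexity([0.25, 0.25, 0.25, 0.25]))
```

Scaling laws predict this number remarkably well; what users care about is everything the number doesn't capture.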
Emergence is not governed by any law-like behavior. It is true that so far, increases in scale have brought new capabilities. But there is no empirical regularity that gives us confidence that this will continue indefinitely.
Why might emergence not continue indefinitely? This gets at one of the core debates about LLM capabilities — are they capable of extrapolation or do they only learn tasks represented in the training data? The evidence is incomplete and there is a wide range of reasonable ways to interpret it. But we lean toward the skeptical view. On benchmarks designed to test the efficiency of acquiring skills to solve unseen tasks, LLMs tend to perform poorly. 
If LLMs can’t do much beyond what’s seen in training, at some point, having more data no longer helps because all the tasks that are ever going to be represented in it are already represented. Every traditional machine learning model eventually plateaus; maybe LLMs are no different.
Another barrier to continued scaling is obtaining training data. Companies are already using all the readily available data sources. Can they get more?
This is less likely than it might seem. People sometimes assume that new data sources, such as transcribing all of YouTube, will increase the available data volume by another order of magnitude or two. Indeed, YouTube has a remarkable 150 billion minutes of video. But considering that most of that has little or no usable audio (it is instead music, still images, video game footage, etc.), we end up with an estimate that is much less than the 15 trillion tokens that Llama 3 is already using — and that’s before deduplication and quality filtering of the transcribed YouTube audio, which is likely to knock off at least another order of magnitude.
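The back-of-envelope arithmetic behind an estimate like this can be sketched; every number below is a rough assumption chosen for illustration, not a measurement:

```python
minutes_total = 150e9      # YouTube's ~150 billion minutes of video
usable_fraction = 0.10     # assume ~10% contains transcribable speech
words_per_minute = 150     # typical speaking rate
tokens_per_word = 1.3      # rough tokenizer expansion factor

tokens = minutes_total * usable_fraction * words_per_minute * tokens_per_word
print(f"{tokens:.2e}")     # ~2.9e12: about 3 trillion tokens, well under
                           # the 15 trillion Llama 3 already uses -- and
                           # that's before deduplication and quality filtering
```

Even with generous assumptions, transcribed video doesn't add an order of magnitude to the existing text corpus.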
People often discuss when companies will “run out” of training data. But this is not a meaningful question. There’s always more training data, but getting it will cost more and more. And now that copyright holders have wised up and want to be compensated, the cost might be especially steep. In addition to dollar costs, there could be reputational and regulatory costs because society might push back against data collection practices.
We can be certain that no exponential trend can continue indefinitely. But it can be hard to predict when a tech trend is about to plateau. This is especially so when the growth stops suddenly rather than gradually. The trendline itself contains no clue that it is about to plateau. 
CPU clock speeds over time. The y-axis is logarithmic. [Source]
Two famous examples are CPU clock speeds in the 2000s and airplane speeds in the 1970s. CPU manufacturers decided that further increases to clock speed were too costly and mostly pointless (since CPU was no longer the bottleneck for overall performance), and simply decided to stop competing on this dimension, which suddenly removed the upward pressure on clock speed. With airplanes, the story is more complex but comes down to the market prioritizing fuel efficiency over speed.
Flight airspeed records over time. The SR-71 Blackbird record from 1976 still stands today. [Source]
With LLMs, we may have a couple of orders of magnitude of scaling left, or we may already be done. As with CPUs and airplanes, it is ultimately a business decision and fundamentally hard to predict in advance. 
On the research front, the focus has shifted from compiling ever-larger datasets to improving the quality of training data. Careful data cleaning and filtering can allow building equally powerful models with much smaller datasets.
Synthetic data is often suggested as the path to continued scaling. In other words, maybe current models can be used to generate training data for the next generation of models. 
But we think this rests on a misconception — we don’t think developers are using (or can use) synthetic data to increase the volume of training data. This paper has a great list of uses for synthetic data for training, and it’s all about fixing specific gaps and making domain-specific improvements like math, code, or low-resource languages. Similarly, Nvidia’s recent Nemotron 340B model, which is geared at synthetic data generation, targets alignment as the primary use case. There are a few secondary use cases, but replacing current sources of pre-training data is not one of them. In short, it’s unlikely that mindless generation of synthetic training data will have the same effect as having more high-quality human data.
There are cases where synthetic training data has been spectacularly successful, such as AlphaGo, which beat the Go world champion in 2016, and its successors AlphaGo Zero and AlphaZero. These systems learned by playing games against themselves; the latter two did not use any human games as training data. They used a ton of calculation to generate somewhat high-quality games, used those games to train a neural network, which could then generate even higher-quality games when combined with calculation, resulting in an iterative improvement loop. 
Self-play is the quintessential example of “System 2 –> System 1 distillation”, in which a slow and expensive “System 2” process generates training data to train a fast and cheap “System 1” model. This works well for a game like Go which is a completely self-contained environment. Adapting self-play to domains beyond games is a valuable research direction. There are important domains like code generation where this strategy may be valuable. But we certainly can’t expect indefinite self-improvement for more open-ended tasks, say language translation. We should expect domains that admit significant improvement through self-play to be the exception rather than the rule.
Historically, the three axes of scaling — dataset size, model size, and training compute — have progressed in tandem, and this is known to be optimal. But what will happen if one of the axes (high-quality data) becomes a bottleneck? Will the other two axes, model size and training compute, continue to scale?
Based on current market trends, building bigger models does not seem like a wise business move, even if it would unlock new emergent capabilities. That’s because capability is no longer the barrier to adoption. In other words, there are many applications that are possible to build with current LLM capabilities but aren’t being built or adopted due to cost, among other reasons. This is especially true for “agentic” workflows which might invoke LLMs tens or hundreds of times to complete a task, such as code generation.
In the past year, much of the development effort has gone into producing smaller models at a given capability level. Frontier model developers no longer reveal model sizes, so we can’t be sure of this, but we can make educated guesses by using API pricing as a rough proxy for size. GPT-4o costs only 25% as much as GPT-4 does, while being similar or better in capabilities. We see the same pattern with Anthropic and Google. Claude 3 Opus is the most expensive (and presumably biggest) model in the Claude family, but the more recent Claude 3.5 Sonnet is both 5x cheaper and more capable. Similarly, Gemini 1.5 Pro is both cheaper and more capable than Gemini 1.0 Ultra. So with all three developers, the biggest model isn’t the most capable!
Training compute, on the other hand, will probably continue to scale for the time being. Paradoxically, smaller models require more training to reach the same level of performance. So the downward pressure on model size is putting upward pressure on training compute. In effect, developers are trading off training cost and inference cost. The earlier crop of models such as GPT-3.5 and GPT-4 was under-trained in the sense that inference costs over the model’s lifetime are thought to dominate training cost. Ideally, the two should be roughly equal, given that it is always possible to trade off training cost for inference cost and vice versa. In a notable example of this trend, Llama 3 used 20 times as many training FLOPs for the 8 billion parameter model as the original Llama model did at roughly the same size (7 billion).
One sign consistent with the possibility that we won’t see much more capability improvement through scaling is that CEOs have been greatly tamping down AGI expectations. Unfortunately, instead of admitting they were wrong about their naive “AGI in 3 years” predictions, they’ve decided to save face by watering down what they mean by AGI so much that it’s meaningless now. It helped that AGI was never clearly defined to begin with.
Instead of viewing generality as a binary, we can view it as a spectrum. Historically, the amount of effort it takes to get a computer to perform a new task has decreased. We can view this as increasing generality. This trend began with the move from special-purpose computers to Turing machines. In this sense, the general-purpose nature of LLMs is not new. 
This is the view we take in the AI Snake Oil book, which has a chapter dedicated to AGI. We conceptualize the history of AI as a punctuated equilibrium, which we call the ladder of generality (which isn’t meant to imply linear progress). Instruction-tuned LLMs are the latest step in the ladder. An unknown number of steps lie ahead before we can reach a level of generality where AI can perform any economically valuable job as effectively as any human (which is one definition of AGI). 
Historically, standing on each step of the ladder, the AI research community has been terrible at predicting how much farther you can go with the current paradigm, what the next step will be, when it will arrive, what new applications it will enable, and what the implications for safety are. That is a trend we think will continue.
A recent essay by Leopold Aschenbrenner made waves due to its claim that “AGI by 2027 is strikingly plausible”. We haven’t tried to give a point-by-point rebuttal here — most of this post was drafted before Aschenbrenner’s essay was released. His arguments for his timeline are entertaining and thought-provoking, but fundamentally an exercise in trendline extrapolation. Also, like many AI boosters, he conflates benchmark performance with real-world usefulness.
Many AI researchers have made the skeptical case, including Melanie Mitchell, Yann LeCun, Gary Marcus, Francois Chollet, Subbarao Kambhampati, and others.
Dwarkesh Patel gives a nice overview of both sides of the debate.
Acknowledgements. We are grateful to Matt Salganik, Ollie Stephenson, and Benedikt Ströbl for feedback on a draft.
0 notes
yhwhrulz · 6 months
Text
Morning and Evening with A.W. Tozer Devotional for March 26
Tozer in the Morning Submitting to Christ's Lordship
No one has any right to believe that he is indeed a Christian unless he is humbly seeking to obey the teachings of the One whom he calls Lord. Christ once asked a question (Luke 6:46) that can have no satisfying answer: "Why do you call me, 'Lord, Lord,' and do not do what I say?"
Right here we do well to anticipate and reply to an objection that will likely arise in the minds of some readers. It goes like this: "We are saved by accepting Christ, not by keeping His commandments. Christ kept the law for us, died for us and rose again for our justification, and so delivered us from all necessity to keep commandments. Is it not possible, then, to become a Christian by simple faith altogether apart from obedience?"
Many honest persons argue in this way, but their honesty cannot save their argument from being erroneous. Theirs is the teaching that has in the last fifty years emasculated the evangelical message and lowered the moral standards of the Church until they are almost indistinguishable from those of the world. It results from a misunderstanding of grace and a narrow and one-sided view of the gospel, and its power to mislead lies in the element of truth it contains. It is arrived at by laying correct premises and then drawing false conclusions from them.
Tozer in the Evening Spiritual Burdens and Worry Weights
It was not to the unregenerate that the words "Do not fret" were spoken, but to God-fearing persons capable of understanding spiritual things. We Christians need to watch and pray lest we fall into this temptation and spoil our Christian testimony by an irritable spirit under the stress and strain of life.
It requires great care and a true knowledge of ourselves to distinguish a spiritual burden from religious irritation. We cannot close our minds to everything that is happening around us. We dare not rest at ease in Zion when the church is so desperately in need of spiritually sensitive men and women who can see her faults and try to call her back to the path of righteousness. The prophets and apostles of Bible times carried in their hearts such crushing burdens for God's wayward people that they could say, "My tears have been my food day and night" (Psalms 42:3), and "Oh that my head were a spring of water and my eyes a fountain of tears! I would weep day and night for the slain of my people" (Jeremiah 9:1). These men were heavy with a true burden. What they felt was not vexation but acute concern for the honor of God and the souls of men.
By nature some persons fret easily. They have difficulty separating their personal antipathies from the burden of the Spirit. When they are grieved they can hardly say whether it is a pure and charitable thing or merely irritation set up by other Christians having opinions different from their own.
Copyright Statement: This material is considered in the public domain.
0 notes
financialsmatter · 2 years
Text
Jesus, Take the Wheel…
Have you ever been to a point in your life when you felt like saying “Jesus, take the wheel?” If you’ve never felt that way then you’re probably not human. Because life has a way of throwing curve balls at you…trying to get you to strike out.

But if you have felt that way then you were probably in a situation where it was difficult – if not impossible – to process all that was happening around you. The good news is – for those who are believers – when you submit your feelings to a Higher Authority* – in my case, Jesus – you should expect to see amazing results.

For I am not ashamed of the gospel of Christ: for it is the power of God unto salvation to every one that believeth; to the Jew first, and also to the Greek. ~ Romans 1:16 ~

(* Note: This email is not about religion…well, maybe it is…kinda, sorta)

Right now you may be thinking “What does this have to do with investing?” But stay with me. Because for far too many people, money is their God.

Take the Wheel

And when unexpected events happen – that can have devastating effects on your money – is when you’ll want to cry out, “Jesus take the wheel.” Ironically (or NOT) the two major driving forces in the markets are FEAR and GREED.

READ: No One is Immune to Fear and Greed – June 26, 2018
AND: Fear of Death or Fear of Losing Money – July 13, 2020

But while you can’t control the markets, you can control how fear and greed affect you. And the best way to control them is to prepare for them well in advance. Huh? It’s the same principle as preparing your everyday life for the unexpected events that take you by surprise. And it’s so simple to do that you have to have someone help you misunderstand it.

Are you ready for it? Don’t violate the laws of nature AND the laws of faith.

Example: If you violate the law of gravity by jumping off a 10-story building you will die. Rest assured Fear and Greed will sneak up on you like a thief in the night. And in the Political Chaos of 2023 we’ll see a lot of both.

But when you understand the laws that govern them you won’t have to cry out, “Jesus take the wheel.” Learn more about how to apply these principles in our February edition of “…In Plain English” (HERE). You’ll thank us later.

And share this with a friend…regardless of their beliefs. They’ll thank YOU later.

As always: We’re Not Just About Finance. But we use finance to give you hope.

Invest with confidence.

Sincerely,
James Vincent
The Reverend of Finance

Copyright © 2023 It's Not Just About Finance, LLC, All rights reserved. You are receiving this email because you opted in via our website.
0 notes
thankskenpenders · 2 years
Text
Updates on the Boingkid shit (go read the previous post if you have no idea what I’m talking about) because yes he is still at it:
1. People (myself included) have wondered why the creator of Boingkid would only be taking issue with Belle’s design now, in the form of Twitter DMCAs, when she’s been in the comics for almost two years at this point. Why not go to IDW directly, and why not do so sooner? Well, here are some responses:
“We did from time to time because we were busy with other projects. We sent an email and got a response from IDW's secretary and she insist to know the details of what we want to share with the CEO. Despite the gut feeling we shared the potential of illegal action & get blocked.”
“We did not write even a single line about this publicly, until one of the IDW artists cry out that Twitter has accepted the claim. Then after that upon fans reach out we shared what happened. We tolerated this for 2 years as Belle was just a spinoff but they continue to bring her“
So, yeah. He’s supposedly believed this the whole time, but IDW ignored him because of course they did. (Lord knows how, exactly, he tried to contact them in the first place, or if he even sent his pitch to the right email.) It was Jen Hernandez publicly calling him out over the DMCA on Twitter that was the last straw, and now he’s making this extremely public in retaliation
2. He now seems to be demanding that IDW simply alter Belle’s face “to avoid any resemblance to other copyrighted work in US.” This would be reasonable if he had a case, but, again, he did not invent the concept of a character having a clown nose and freckles
3. I previously said the guy was from Italy (both his ArtStation account and the unsuccessful Kickstarter for the Boingkid game have their locations set as Rome, and the demo was shown at an expo in Rome), but people dug up the copyright registration for Boingkid and found out that he’s originally from Iran. Either way, it’s likely that a language barrier is part of the confusion here, as his English isn’t the best (although it’s certainly readable)
4. Much of his case, as he presents it on Twitter, is predicated on Twitter support believing him when he filed his DMCA claims. This obviously doesn’t hold any water as social media companies accept false DMCA claims all the goddamn time due to the inherently flawed nature of the law, but his fundamental misunderstanding of how this system works may be partially due to that language barrier
5. People keep comparing this guy to Penders. I just want everyone to understand that, even with his outlandish claims about Julie-Su and Shade being legally the same character and things like that, even Penders has waaaaaaay more to back up his argument there than Boingkid guy has against Belle. Penders worked on Sonic for 13 years and the BioWare team literally said they were inspired by the comics. Boingkid guy is just some fucking guy no one’s heard of who allegedly got ghosted on a pitch to IDW. There’s no reason to believe that Evan even knew who he was before Friday
6. There’s a lot of question about the guy’s motives. Whether he actually believes this, or if it’s just a publicity stunt. I don’t think there’s any reason it can’t be both. He absolutely seems to believe his claim, at least to some extent, but he also seems to be relishing the attention
I feel cynical for saying this, but like. The guy’s been trying to make Boingkid a thing for years. The Kickstarter in 2017 only got nine backers for a total of $667 against a $53,000 goal. The team moved to a Patreon page which is now all but dead. If we believe his claim that he pitched the comics to IDW, that went nowhere. The demo for the game got a few positive previews, but as a dev myself believe me when I say that in this day and age a few blog posts are not enough to move the needle on their own. Again, I sympathize with the guy on that level, because he’s a good artist and GOD is it hard to make it out there even when you’re giving it your all. But this controversy is by far the most attention Boingkid as a brand has ever gotten. Thousands upon thousands of quote tweets for an account that had 150 followers at the start of this, and that follower count has only been going up. As they say, any press is good press. It’s hard not to look at that and assume the worst. If he had actually designed a character that was much more similar to Belle, or if I believed even for a second that Evan was the type of person who would plagiarize someone else’s work like that, then this would be different. But when the argument is so flimsy...?
I don’t believe he’s purely a troll, as some artists really are just like this. (Lord knows I’ve seen some people get in extremely heated feuds over superficial similarities between furry OCs and the like.) But at the same time, I do believe that at this point he’s acting in an intentionally incendiary way to get attention. Whether it’s a desperate attempt to drive attention to the Boingkid IP after years of floundering, or it’s purely to try and get IDW to respond to a genuine plagiarism claim and right a perceived wrong and nothing else, I can’t say. It’s quite likely a mixture of both, though
But either way, this whole situation continues to suck. I hope this is resolved soon. Belle’s a great character, and Evan, Jen, and anyone else who just wants to draw Belle in peace doesn’t need this hanging over their head
261 notes · View notes