#how to upload video on youtube without copyright
d2atech · 7 months ago
How to upload videos on YouTube
YouTube is the world's most popular video-sharing platform, and working on it can become a real source of income: if you build an audience, YouTube shares revenue with you, and many people now run entire businesses from their channels. The platform has been updated many times as of 2024, and anyone can create a channel, but not everyone knows the right way to upload a video. YouTube offers many features and settings at upload time, and the information you provide about a video helps YouTube recommend it to the right viewers. So today we will look at the right way to upload a video, the method big YouTubers use. Before you upload, also decide where you will upload from: you can upload from the YouTube app, from YouTube Studio, or from the YouTube website.
1. Select video:
Before uploading, decide in advance what you want to upload: a long video or a Short. Big YouTubers generally upload from the YouTube app, so open the app, tap the + icon, and choose your video. YouTube will ask whether you are uploading a regular video or a Short. Select the video, tap Next, and you will reach the page where you add the title and description.
Choose the title of the video:
Think carefully before choosing a title, because it plays the biggest role in bringing views: when someone searches, your title is what has to match their query. Keep it short and clear so the viewer gets an idea of the video at a glance. The title works like a keyword, so research your keywords before writing it. A title can be up to 100 characters long, and you can edit it later.
Video thumbnail:
If you want more clicks on the video, the thumbnail has to be attractive: when viewers see it, they should immediately want to know what is inside the video. The thumbnail plays a very big role in making a video go viral; if it catches nobody's eye, the chances of the video spreading are much lower. The better the thumbnail, the better the views.
How to add a thumbnail: after choosing the video, you will see the Add details page, where YouTube auto-generates a thumbnail from your video. To replace it, tap the pencil icon on the default thumbnail; your gallery will open and you can pick your own image. The recommended thumbnail size is 1280 × 720 pixels with a 16:9 aspect ratio.
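If you want to check a thumbnail before uploading it, the specs above can be verified in a few lines. A minimal sketch in Python; the function name is illustrative, and the 2 MB file-size cap is taken from YouTube's published thumbnail recommendations:

```python
def check_thumbnail(width: int, height: int, size_bytes: int) -> list[str]:
    """Return a list of problems with a custom thumbnail, based on
    YouTube's published recommendations (1280x720, 16:9, 2 MB max)."""
    problems = []
    if (width, height) != (1280, 720):
        problems.append(f"recommended resolution is 1280x720, got {width}x{height}")
    if width * 9 != height * 16:
        problems.append("aspect ratio should be 16:9")
    if size_bytes > 2 * 1024 * 1024:
        problems.append("file must be 2 MB or smaller")
    return problems
```

An empty list means the image matches the recommended specs; otherwise each string tells you what to fix before uploading.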
Add description: write a short summary of the video that includes your main keywords. The first lines of the description are shown in search results, so put the most important information there.
Visibility:
Setting the visibility is also very important. When you upload, set the video to unlisted first, because just uploading is not enough: you still have to finish the other settings (tags, hashtags, and so on) before making it public.
Public: a public video reaches your subscribers and appears in search. Only make the video public when you are fully ready, that is, after finishing all the settings, adding all the tags, and adding the hashtags.
Members only: this restricts the video to your channel members. Membership is a paid option; viewers pay a fee to join, and only they can watch members-only videos.
Unlisted: the video is uploaded and live, but it does not appear in search results or in your subscribers' feeds. Only people who have the link can watch it, which makes it useful while you finish your settings before going public.
Private: a private video is visible only to you and the specific people you invite.
Schedule: with this feature you can make the video public automatically at a specific date and time of your choosing.
Location: if you want to tag your video with a location, use this feature.
Add to playlist: if you have already created a playlist you want to add the video to, you can do that here; otherwise you can create a new playlist from the same menu.
Allow video and audio remixing: for Shorts, this lets other creators reuse your clip and its audio in their own remixes; you can also allow it on long videos so others can use your background music.
Add paid promotion label: if the video contains a paid promotion or sponsorship, select Yes here; otherwise select No.
Comments: if you want people to comment on the video, turn the comment box on; otherwise turn it off.
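The Schedule option above is also available if you upload programmatically: in the YouTube Data API, a scheduled video is uploaded as private with a `status.publishAt` timestamp in UTC. A minimal sketch in Python; the helper name is illustrative, and the field names assume the Data API v3 `videos.insert` body:

```python
from datetime import datetime, timezone

def schedule_settings(publish_time: datetime) -> dict:
    """Build the 'status' part of a videos.insert request that schedules
    publication: the video stays private until publishAt (UTC, ISO 8601)."""
    utc = publish_time.astimezone(timezone.utc)
    return {
        "privacyStatus": "private",  # must stay private until the scheduled time
        "publishAt": utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
    }
```

Passing a timezone-aware datetime keeps the conversion to UTC unambiguous, which matches how the scheduler expects timestamps.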
YouTube Studio settings: many of these upload settings can also be managed from YouTube Studio. YT Studio has its own options for the description, visibility, playlists, and altered content, plus a few extras: monetization, audience, more options, category, whether to show how many viewers liked the video, and whether to allow embedding.
Monetization: if your channel is monetized, this option appears and you can turn it on; otherwise it will not be shown.
Audience: if your video is made for kids, select Yes; otherwise select No.
Age restriction (advanced): if your video contains adult content, use this setting to restrict it to viewers over 18.
More options:
Tags: enter tags and keywords related to your video here. They matter because they help your video rank and appear higher in more searches.
Category: with this option you select the category of your video, that is, what your channel's content relates to.
Show likes: with this you can show or hide the like count on your video.
YouTube video upload settings in Chrome: you can also upload directly from the YouTube website in a browser. To do this from a phone, open the site in Chrome and enable desktop mode in Chrome's settings. The website combines the settings that are otherwise split between the YouTube app and the YT Studio app, so instead of switching between two apps you can do everything from one place: stay logged in to your Google account and go to the YouTube site to upload.
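If you script uploads instead of using the app or the website, the same metadata travels in the body of a `videos.insert` call to the YouTube Data API v3. A minimal sketch in Python, assuming the documented `snippet`/`status` body shape; the helper name and default category are illustrative, and the 100-character title and roughly 500-character combined-tag limits come from YouTube's published docs:

```python
def build_upload_metadata(title: str, description: str, tags: list[str],
                          category_id: str = "22", privacy: str = "unlisted") -> dict:
    """Assemble the snippet/status body for a YouTube Data API
    videos.insert call, enforcing the title and tag length limits."""
    if len(title) > 100:
        raise ValueError("title must be 100 characters or fewer")
    if sum(len(t) for t in tags) > 500:
        raise ValueError("combined tag length must stay under 500 characters")
    return {
        "snippet": {
            "title": title,
            "description": description,
            "tags": tags,
            "categoryId": category_id,  # "22" = People & Blogs
        },
        "status": {"privacyStatus": privacy},  # upload unlisted, publish later
    }
```

Uploading as unlisted and flipping the video to public later mirrors the workflow described above: finish every setting first, then go live.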
Features on Google Chrome:
Video title
Description
Visibility
Playlist
Audience
Paid promotion
Altered content
Tags
Short remixing
Category
Comments
Show how many views like this video
Some of these features are not available on the YouTube mobile app or the YT Studio app.
End screen: with this feature you can show another of your videos on screen at the end of your YouTube video, so the viewer can keep watching your content.
Cards: the Cards feature shows the "i" (info) button in your video. You can put a link to any of your videos in it and place it at any point in the timeline; you can also link playlists, channels, and other links from it.
Language and captions certification: with this feature you set the language of your video and add captions; YouTube can also generate automatic captions.
Recording date and location: add when and where your video was recorded; viewers can then find the video by searching the location.
License and distribution: 1. Standard YouTube License 2. Creative Commons – Attribution. ("I choose the Standard YouTube License when uploading videos.")
Subtitles: you can add subtitles in YouTube Studio in Chrome without any third-party apps.
Editor: from here you can also edit the video, with all of these features:
Trim & cuts
Blur
Audio
End screen
mostlysignssomeportents · 1 year ago
Copyright takedowns are a cautionary tale that few are heeding
On July 14, I'm giving the closing keynote for the fifteenth HACKERS ON PLANET EARTH, in QUEENS, NY. Happy Bastille Day! On July 20, I'm appearing in CHICAGO at Exile in Bookville.
We're living through one of those moments when millions of people become suddenly and overwhelmingly interested in fair use, one of the subtlest and worst-understood aspects of copyright law. It's not a subject you can master by skimming a Wikipedia article!
I've been talking about fair use with laypeople for more than 20 years. I've met so many people who possess the unshakable, serene confidence of the truly wrong, like the people who think fair use means you can take x words from a book, or y seconds from a song and it will always be fair, while anything more will never be.
Or the people who think that if you violate any of the four factors, your use can't be fair – or the people who think that if you fail all of the four factors, you must be infringing (people, the Supreme Court is calling and they want to tell you about the Betamax!).
You might think that you can never quote a song lyric in a book without infringing copyright, or that you must clear every musical sample. You might be rock solid certain that scraping the web to train an AI is infringing. If you hold those beliefs, you do not understand the "fact intensive" nature of fair use.
But you can learn! It's actually a really cool and interesting and gnarly subject, and it's a favorite of copyright scholars, who have really fascinating disagreements and discussions about the subject. These discussions often key off of the controversies of the moment, but inevitably they implicate earlier fights about everything from the piano roll to 2 Live Crew to antiracist retellings of Gone With the Wind.
One of the most interesting discussions of fair use you can ask for took place in 2019, when the NYU Engelberg Center on Innovation Law & Policy held a symposium called "Proving IP." One of the panels featured dueling musicologists debating the merits of the Blurred Lines case. That case marked a turning point in music copyright, with the Marvin Gaye estate successfully suing Robin Thicke and Pharrell Williams for copying the "vibe" of Gaye's "Got to Give it Up."
Naturally, this discussion featured clips from both songs as the experts – joined by some of America's top copyright scholars – delved into the legal reasoning and future consequences of the case. It would be literally impossible to discuss this case without those clips.
And that's where the problems start: as soon as the symposium was uploaded to Youtube, it was flagged and removed by Content ID, Google's $100,000,000 copyright enforcement system. This initial takedown was fully automated, which is how Content ID works: rightsholders upload audio to claim it, and then Content ID removes other videos where that audio appears (rightsholders can also specify that videos with matching clips be demonetized, or that the ad revenue from those videos be diverted to the rightsholders).
But Content ID has a safety valve: an uploader whose video has been incorrectly flagged can challenge the takedown. The case is then punted to the rightsholder, who has to manually renew or drop their claim. In the case of this symposium, the rightsholder was Universal Music Group, the largest record company in the world. UMG's personnel reviewed the video and did not drop the claim.
99.99% of the time, that's where the story would end, for many reasons. First of all, most people don't understand fair use well enough to contest the judgment of a cosmically vast, unimaginably rich monopolist who wants to censor their video. Just as importantly, though, is that Content ID is a Byzantine system that is nearly as complex as fair use, but it's an entirely private affair, created and adjudicated by another galactic-scale monopolist (Google).
Google's copyright enforcement system is a cod-legal regime with all the downsides of the law, and a few wrinkles of its own (for example, it's a system without lawyers – just corporate experts doing battle with laypeople). And a single mis-step can result in your video being deleted or your account being permanently deleted, along with every video you've ever posted. For people who make their living on audiovisual content, losing your Youtube account is an extinction-level event:
https://www.eff.org/wp/unfiltered-how-youtubes-content-id-discourages-fair-use-and-dictates-what-we-see-online
So for the average Youtuber, Content ID is a kind of Kafka-as-a-Service system that is always avoided and never investigated. But the Engelberg Center isn't your average Youtuber: they boast some of the country's top copyright experts, specializing in exactly the questions Youtube's Content ID is supposed to be adjudicating.
So naturally, they challenged the takedown – only to have UMG double down. This is par for the course with UMG: they are infamous for refusing to consider fair use in takedown requests. Their stance is so unreasonable that a court actually found them guilty of violating the DMCA's provision against fraudulent takedowns:
https://www.eff.org/cases/lenz-v-universal
But the DMCA's takedown system is part of the real law, while Content ID is a fake law, created and overseen by a tech monopolist, not a court. So the fate of the Blurred Lines discussion turned on the Engelberg Center's ability to navigate both the law and the n-dimensional topology of Content ID's takedown flowchart.
It took more than a year, but eventually, Engelberg prevailed.
Until they didn't.
If Content ID was a person, it would be a baby – specifically, a baby under 18 months old – that is, before the development of "object permanence." Until our 18th month (or so), we lack the ability to reason about things we can't see – this is the period when small babies find peek-a-boo amazing. Object permanence is the ability to understand things that aren't in your immediate field of vision.
Content ID has no object permanence. Despite the fact that the Engelberg Blurred Lines panel was the most involved fair use question the system was ever called upon to parse, it managed to repeatedly forget that it had decided that the panel could stay up. Over and over since that initial determination, Content ID has taken down the video of the panel, forcing Engelberg to go through the whole process again.
But that's just for starters, because Youtube isn't the only place where a copyright enforcement bot is making billions of unsupervised, unaccountable decisions about what audiovisual material you're allowed to access.
Spotify is yet another monopolist, with a justifiable reputation for being extremely hostile to artists' interests, thanks in large part to the role that UMG and the other major record labels played in designing its business rules:
https://pluralistic.net/2022/09/12/streaming-doesnt-pay/#stunt-publishing
Spotify has spent hundreds of millions of dollars trying to capture the podcasting market, in the hopes of converting one of the last truly open digital publishing systems into a product under its control:
https://pluralistic.net/2023/01/27/enshittification-resistance/#ummauerter-garten-nein
Thankfully, that campaign has failed – but millions of people have (unwisely) ditched their open podcatchers in favor of Spotify's pre-enshittified app, so everyone with a podcast now must target Spotify for distribution if they hope to reach those captive users.
Guess who has a podcast? The Engelberg Center.
Naturally, Engelberg's podcast includes the audio of that Blurred Lines panel, and that audio includes samples from both "Blurred Lines" and "Got To Give It Up."
So – naturally – UMG keeps taking down the podcast.
Spotify has its own answer to Content ID, and incredibly, it's even worse and harder to navigate than Google's pretend legal system. As Engelberg describes in its latest post, UMG and Spotify have colluded to ensure that this now-classic discussion of fair use will never be able to take advantage of fair use itself:
https://www.nyuengelberg.org/news/how-explaining-copyright-broke-the-spotify-copyright-system/
Remember, this is the best case scenario for arguing about fair use with a monopolist like UMG, Google, or Spotify. As Engelberg puts it:
The Engelberg Center had an extraordinarily high level of interest in pursuing this issue, and legal confidence in our position that would have cost an average podcaster tens of thousands of dollars to develop. That cannot be what is required to challenge the removal of a podcast episode.
Automated takedown systems are the tech industry's answer to the "notice-and-takedown" system that was invented to broker a peace between copyright law and the internet, starting with the US's 1998 Digital Millennium Copyright Act. The DMCA implements (and exceeds) a pair of 1996 UN treaties, the WIPO Copyright Treaty and the Performances and Phonograms Treaty, and most countries in the world have some version of notice-and-takedown.
Big corporate rightsholders claim that notice-and-takedown is a gift to the tech sector, one that allows tech companies to get away with copyright infringement. They want a "strict liability" regime, where any platform that allows a user to post something infringing is liable for that infringement, to the tune of $150,000 in statutory damages.
Of course, there's no way for a platform to know a priori whether something a user posts infringes on someone's copyright. There is no registry of everything that is copyrighted, and of course, fair use means that there are lots of ways to legally reproduce someone's work without their permission (or even when they object). Even if every person who ever has trained or ever will train as a copyright lawyer worked 24/7 for just one online platform to evaluate every tweet, video, audio clip and image for copyright infringement, they wouldn't be able to touch even 1% of what gets posted to that platform.
The "compromise" that the entertainment industry wants is automated takedown – a system like Content ID, where rightsholders register their copyrights and platforms block anything that matches the registry. This "filternet" proposal became law in the EU in 2019 with Article 17 of the Digital Single Market Directive:
https://www.eff.org/deeplinks/2018/09/today-europe-lost-internet-now-we-fight-back
This was the most controversial directive in EU history, and – as experts warned at the time – there is no way to implement it without violating the GDPR, Europe's privacy law, so now it's stuck in limbo:
https://www.eff.org/deeplinks/2022/05/eus-copyright-directive-still-about-filters-eus-top-court-limits-its-use
As critics pointed out during the EU debate, there are so many problems with filternets. For one thing, these copyright filters are very expensive: remember that Google has spent $100m on Content ID alone, and that only does a fraction of what filternet advocates demand. Building the filternet would cost so much that only the biggest tech monopolists could afford it, which is to say, filternets are a legal requirement to keep the tech monopolists in business and prevent smaller, better platforms from ever coming into existence.
Filternets are also incapable of telling the difference between similar files. This is especially problematic for classical musicians, who routinely find their work blocked or demonetized by Sony Music, which claims performances of all the most important classical music compositions:
https://pluralistic.net/2021/05/08/copyfraud/#beethoven-just-wrote-music
Content ID can't tell the difference between your performance of "The Goldberg Variations" and Glenn Gould's. For classical musicians, the best case scenario is to have their online wages stolen by Sony, who fraudulently claim copyright to their recordings. The worst case scenario is that their video is blocked, their channel deleted, and their names blacklisted from ever opening another account on one of the monopoly platforms.
But when it comes to free expression, the role that notice-and-takedown and filternets play in the creative industries is really a sideshow. In creating a system of no-evidence-required takedowns, with no real consequences for fraudulent takedowns, these systems are a huge gift to the world's worst criminals. For example, "reputation management" companies help convicted rapists, murderers, and even war criminals purge the internet of true accounts of their crimes by claiming copyright over them:
https://pluralistic.net/2021/04/23/reputation-laundry/#dark-ops
Remember how during the covid lockdowns, scumbags marketed junk devices by claiming that they'd protect you from the virus? Their products remained online, while the detailed scientific articles warning people about the fraud were speedily removed through false copyright claims:
https://pluralistic.net/2021/10/18/labor-shortage-discourse-time/#copyfraud
Copyfraud – making false copyright claims – is an extremely safe crime to commit, and it's not just quack covid remedy peddlers and war criminals who avail themselves of it. Tech giants like Adobe do not hesitate to abuse the takedown system, even when that means exposing millions of people to spyware:
https://pluralistic.net/2021/10/13/theres-an-app-for-that/#gnash
Dirty cops play loud, copyrighted music during confrontations with the public, in the hopes that this will trigger copyright filters on services like Youtube and Instagram and block videos of their misbehavior:
https://pluralistic.net/2021/02/10/duke-sucks/#bhpd
But even if you solved all these problems with filternets and takedown, this system would still choke on fair use and other copyright exceptions. These are "fact intensive" questions that the world's top experts struggle with (as anyone who watches the Blurred Lines panel can see). There's no way we can get software to accurately determine when a use is or isn't fair.
That's a question that the entertainment industry itself is increasingly conflicted about. The Blurred Lines judgment opened the floodgates to a new kind of copyright troll – grifters who sued the record labels and their biggest stars for taking the "vibe" of songs that no one ever heard of. Musicians like Ed Sheeran have been sued for millions of dollars over these alleged infringements. These suits caused the record industry to (ahem) change its tune on fair use, insisting that fair use should be broadly interpreted to protect people who made things that were similar to existing works. The labels understood that if "vibe rights" became accepted law, they'd end up in the kind of hell that the rest of us enter when we try to post things online – where anything they produce can trigger takedowns, long legal battles, and millions in liability:
https://pluralistic.net/2022/04/08/oh-why/#two-notes-and-running
But the music industry remains deeply conflicted over fair use. Take the curious case of Katy Perry's song "Dark Horse," which attracted a multimillion-dollar suit from an obscure Christian rapper who claimed that a brief phrase in "Dark Horse" was impermissibly similar to his song "A Joyful Noise."
Perry and her publisher, Warner Chappell, lost the suit and were ordered to pay $2.8m. While they subsequently won an appeal, this definitely put the cold grue up Warner Chappell's back. They could see a long future of similar suits launched by treasure hunters hoping for a quick settlement.
But here's where it gets unbelievably weird and darkly funny. A Youtuber named Adam Neely made a wildly successful viral video about the suit, taking Perry's side and defending her song. As part of that video, Neely included a few seconds' worth of "A Joyful Noise," the song that Perry was accused of copying.
In court, Warner Chappell had argued that "A Joyful Noise" was not similar to Perry's "Dark Horse." But when Warner had Google remove Neely's video, they claimed that the sample from "Joyful Noise" was actually taken from "Dark Horse." Incredibly, they maintained this position through multiple appeals through the Content ID system:
https://pluralistic.net/2020/03/05/warner-chappell-copyfraud/#warnerchappell
In other words, they maintained that the song that they'd told the court was totally dissimilar to their own was so indistinguishable from their own song that they couldn't tell the difference!
Now, this question of vibes, similarity and fair use has only gotten more intense since the takedown of Neely's video. Just this week, the RIAA sued several AI companies, claiming that the songs the AI shits out are infringingly similar to tracks in their catalog:
https://www.rollingstone.com/music/music-news/record-labels-sue-music-generators-suno-and-udio-1235042056/
Even before "Blurred Lines," this was a difficult fair use question to answer, with lots of chewy nuances. Just ask George Harrison:
https://en.wikipedia.org/wiki/My_Sweet_Lord
But as the Engelberg panel's cohort of dueling musicologists and renowned copyright experts proved, this question only gets harder as time goes by. If you listen to that panel (if you can listen to that panel), you'll be hard pressed to come away with any certainty about the questions in this latest lawsuit.
The notice-and-takedown system is what's known as an "intermediary liability" rule. Platforms are "intermediaries" in that they connect end users with each other and with businesses. Ebay and Etsy and Amazon connect buyers and sellers; Facebook and Google and Tiktok connect performers, advertisers and publishers with audiences and so on.
For copyright, notice-and-takedown gives platforms a "safe harbor." A platform doesn't have to remove material after an allegation of infringement, but if they don't, they're jointly liable for any future judgment. In other words, Youtube isn't required to take down the Engelberg Blurred Lines panel, but if UMG sues Engelberg and wins a judgment, Google will also have to pay out.
During the adoption of the 1996 WIPO treaties and the 1998 US DMCA, this safe harbor rule was characterized as a balance between the rights of the public to publish online and the interest of rightsholders whose material might be infringed upon. The idea was that things that were likely to be infringing would be immediately removed once the platform received a notification, but that platforms would ignore spurious or obviously fraudulent takedowns.
That's not how it worked out. Whether it's Sony Music claiming to own your performance of "Fur Elise" or a war criminal claiming authorship over a newspaper story about his crimes, platforms nuke first and ask questions never. Why not? If they ignore a takedown and get it wrong, they suffer dire consequences ($150,000 per claim). But if they take action on a dodgy claim, there are no consequences. Of course they're just going to delete anything they're asked to delete.
This is how platforms always handle liability, and that's a lesson that we really should have internalized by now. After all, the DMCA is the second-most famous intermediary liability system for the internet – the most (in)famous is Section 230 of the Communications Decency Act.
This is a 26-word law that says that platforms are not liable for civil damages arising from their users' speech. Now, this is a US law, and in the US, there aren't many civil damages from speech to begin with. The First Amendment makes it very hard to get a libel judgment, and even when these judgments are secured, damages are typically limited to "actual damages" – generally a low sum. Most of the worst online speech is actually not illegal: hate speech, misinformation and disinformation are all covered by the First Amendment.
Notwithstanding the First Amendment, there are categories of speech that US law criminalizes: actual threats of violence, criminal harassment, and committing certain kinds of legal, medical, election or financial fraud. These are all exempted from Section 230, which only provides immunity for civil suits, not criminal acts.
What Section 230 really protects platforms from is being named to unwinnable nuisance suits by unscrupulous parties who are betting that the platforms would rather remove legal speech that they object to than go to court. A generation of copyfraudsters have proved that this is a very safe bet:
https://www.techdirt.com/2020/06/23/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act/
In other words, if you made a #MeToo accusation, or if you were a gig worker using an online forum to organize a union, or if you were blowing the whistle on your employer's toxic waste leaks, or if you were any other under-resourced person being bullied by a wealthy, powerful person or organization, that organization could shut you up by threatening to sue the platform that hosted your speech. The platform would immediately cave. But those same rich and powerful people would have access to the lawyers and back-channels that would prevent you from doing the same to them – that's why Sony can get your Brahms recital taken down, but you can't turn around and do the same to them.
This is true of every intermediary liability system, and it's been true since the earliest days of the internet, and it keeps getting proven to be true. Six years ago, Trump signed SESTA/FOSTA, a law that allowed platforms to be held civilly liable by survivors of sex trafficking. At the time, advocates claimed that this would only affect "sexual slavery" and would not impact consensual sex-work.
But from the start, and ever since, SESTA/FOSTA has primarily targeted consensual sex-work, to the immediate, lasting, and profound detriment of sex workers:
https://hackinghustling.org/what-is-sesta-fosta/
SESTA/FOSTA killed the "bad date" forums where sex workers circulated the details of violent and unstable clients, killed the online booking sites that allowed sex workers to screen their clients, and killed the payment processors that let sex workers avoid holding unsafe amounts of cash:
https://www.eff.org/deeplinks/2022/09/fight-overturn-fosta-unconstitutional-internet-censorship-law-continues
SESTA/FOSTA made voluntary sex work more dangerous – and also made life harder for law enforcement efforts to target sex trafficking:
https://hackinghustling.org/erased-the-impact-of-fosta-sesta-2020/
Despite half a decade of SESTA/FOSTA, despite 15 years of filternets, despite a quarter century of notice-and-takedown, people continue to insist that getting rid of safe harbors will punish Big Tech and make life better for everyday internet users.
As of now, it seems likely that Section 230 will be dead by the end of 2025, even if there is nothing in place to replace it:
https://energycommerce.house.gov/posts/bipartisan-energy-and-commerce-leaders-announce-legislative-hearing-on-sunsetting-section-230
This isn't the win that some people think it is. By making platforms responsible for screening the content their users post, we create a system that only the largest tech monopolies can survive, and only then by removing or blocking anything that threatens or displeases the wealthy and powerful.
Filternets are not precision-guided takedown machines; they're indiscriminate cluster-bombs that destroy anything in the vicinity of illegal speech – including (and especially) the best-informed, most informative discussions of how these systems go wrong, and how that blocks the complaints of the powerless, the marginalized, and the abused.
Support me this summer on the Clarion Write-A-Thon and help raise money for the Clarion Science Fiction and Fantasy Writers' Workshop!
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/06/27/nuke-first/#ask-questions-never
Image: EFF https://www.eff.org/files/banner_library/yt-fu-1b.png
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
677 notes · View notes
yuikomorii · 2 months ago
Note
did Rejet delete ataraaxia‘s account just like Robert Rogers 😭?
// Probably yeah. Although I get that a lot of people can’t buy the games, I’m really against the idea of posting gameplays on YouTube, because it’s a platform Rejet is actively monitoring.
Western fans often don’t realize how differently Japanese companies operate. They tend to be much more conservative and place a stronger emphasis on copyright protection, so uploading content without permission directly violates their intellectual property rights. Many otome games also feature popular voice actors, and their contracts usually include strict clauses regarding how and where their voice work can be distributed. It’s not necessarily that the VAs don’t want others to experience the games, it’s just that allowing that kind of distribution would break the terms of their agreements.
If Rejet finds out that someone has uploaded the full gameplay on YouTube, the best-case scenario is that they’ll just take down the videos or the entire channel. The worst-case scenario? You could be sued and face legal consequences, even if you didn’t have bad intentions. T-T
And if this kind of behavior continues, Rejet might respond by region-locking their content again, just like they did a few years ago. 🥲
95 notes · View notes
tuesdayisfordancing · 2 months ago
Note
Unfortunate as it is, copyright law is the only practical leverage most people have to fight against tech companies scraping their work for commercial usage without their permission, especially people who don't have union power to leverage either. Even people who prefer to upload their work for free online shouldn't be taken advantage of; just because something is available for free online doesn't mean that it's freely available for someone to profit from in any way, especially if the author did not authorize it.
Okay Nonny. Bear with me, you’re not gonna like how I start this and probably not how I finish it either, but I do have a point in the middle. So.
There is in fact long established precedent for people being allowed to profit off of various uses of others’ work without permission, in ways that creative types in general and fandom specifically tend to wholeheartedly approve of. Parody, collage, fanart commissions, unauthorized merch, monetized reaction or analysis videos on youtube, these are significantly clearer-cut examples of actually *using* copyrighted material in your own work than the generative ai case. And except for fanart commissions and unauthorized merch, which mostly live off of copyright holders staying cool about it, these are all explicitly permitted under copyright law.
Now, the generative ai case has some conflicting factors around it. On the one hand, it’s not only blatantly transformative to the point where the dataset cannot be recognized in the end result (and when it overfits and comes out with something not sufficiently transformative, that’s covered by preexisting copyright law), it also doesn’t exactly *use* the copyrighted work the way other transformative uses do. A parody riffs off a particular other work, or a few particular other works. A collage or a reaction video uses individual pieces of other works. Generative AI doesn’t do that, it comes up with patterns based on having looked at what a huge number of other works have in common. Like if a formulaic writing/art advice book were instead a robot artist.

But on the other hand, the AI that was trained is potentially being used to compete in the same market as the work it was trained on. That “competition in the same market” element is why fan merch and fanart commissions rely on sufferance, rather than legality. That’s part of fair use too. So perhaps there’s some case to be made against AI from that perspective.

*But*… the genAI creations, while competing in the same market as some of their training data, are *a lot more different from that training data* than a fanart is from an official art. To a significant degree the most similar comparison here isn’t other types of transformative work, it’s… a person who learns to write by reading a lot. They’ll end up competing in the same market as some of *their* training data too. But of course that doesn’t *feel* the same. For starters, that’s *one person* adding themselves to the competition pool. An AI is adding *everyone who uses the AI* to the competition pool. It may be a similar process, but the end result is much more disruptive. Generative AI is going to make making a living off art even harder - and even finding cool *free* art harder - by flooding the market with crap at a whole new scale. That sucks!
It’s shitty, and it feels hideously unfair that it uses artists’ work to do it, and people have decided to label this unfairness “theft”. Now, I do not think that is an accurate label and I’ve reached the point of being really frustrated and annoyed about it, on a personal level. Not all things that are unfair are theft and just saying “theft” louder each time is not actually an argument for why something should be considered theft. An analogy I like here: If someone used art you made to make a collage campaigning against your right to make that art (I can picture some assholes doing this with, say, selfies of drag queens), that would feel violating. It would feel unfair. It would suck! But it wouldn’t be theft or plagiarism.
…*And* on whatever hand we’re on now, my own first thought *was* “Okay well, on the one hand when you look at the mechanics this is pretty obviously less infringing than collage or parody, which I don’t think should be banned, but… maybe we can make a special extra strict copyright that applies only to AI? Just because of how this sucks.” And you know, maybe I’m wrong about my current stance and that’s still a good idea! But there seems to be a lack of caution regarding what sorts of rulings are being invited. It seems like some people are running towards any interpretation of copyright that slows down AI, regardless of what *else* it implies. Maybe I’m wrong! I’m no expert. Maybe it’ll be fine and maybe I’m just too pissed at anti-ai shit to see this clearly. I really wish the AI people had done open calls requesting people to add their work to the datasets, for which I think they would have gotten a lot of uptake before the public turned against AI. Maybe if we do end up with copyright protections against AI training that’ll happen and everything’ll be drastically improved. I dunno.
But I get fucking nervous and freaked out at OTW sending DMCA takedowns as a form of agitation for increased copyright protection and I think that’s a reasonable emotional response.
59 notes · View notes
zeeph-containment-zone · 2 months ago
Text
VAPORWAVE/SYNTHWAVE LISTENERS:
I need you all to be aware that there seems to be a massive uptick in AI generated "music" clogging up youtube. As someone who's starting to get into writing their own vaporwave music and has wanted to write vaporwave for the past decade, this really upsets me for a lot of reasons!!!!
So I'm gonna show you how to spot these AI generated mixes:
WHILE WRITING THIS BLOGPOST I LEARNED MANY OF THESE SAME IDENTIFIERS APPLY TO WEIRDCORE / AMBIENT MIXES AS WELL!!!!
1) No Tracklist
Many of the AI generated mixes have no tracklists attached to them whatsoever. In my personal opinion, this is one of the biggest red flags. Some uploaders have added "track lists" but the names of the songs are pretty nonsensical even by vaporwave standards (example: "Helicopter Fly") and have no artist credited.
2) Suspiciously Long Uploads
If you're anything like me, you've probably noticed that nearly every video that comes up when you type "vaporwave mix" into the YouTube search bar is 3 or more fucking hours long.
[search result screenshots]
I know there's some lengthy older mixes out there (example: Aisle 420, which has become a group listening background music staple whenever my friends and I play minecraft together!) but if it's fairly new and has a really obnoxious timestamp attached to it, be wary!
3) Extremely Specific Naming Scheme
Going to cross-reference back to the images in number 2, one of the many things ALL these AI channels have in common is that they title all their mixes "Something In Wide Text [ YEAR ]". Again, this is something older mixes have done before. But there's a very noticeable difference between genuine mixes that incorporate a year into the title and these, especially when you look at the upload dates and how the naming scheme is extremely similar across all the channels that exhibit these patterns and AI usage.
4) Year in the Thumbnail Image
Much like Number 3, but specifically in regards to the thumbnail image. There'll be a four digit number plastered in big text across the thumbnail. I have no idea why they all do this.
[thumbnail examples with large year text]
5) AI Generated Captions
Not just very likely AI generated, but also copypasta'd across every channel that's like this. Examples will speak better than explaining in text can.
[caption screenshots from several channels]
Some things to note here are a weird fixation on copyright, and the phrase "Reposting This Content In Any Form is Strictly Prohibited!" being a shared factor.
6) AI Generated Visuals
This one is pretty easy to spot unless you're running youtube in a tab in the background.
[examples of AI-generated thumbnail art]
7) Extremely "Samey" Sounding
So if you're like me and let youtube autoplay run while you're working, one of these may have decided to come on without you actually clicking on it. From what I've listened to when this happens, a lot of the music sounds very empty and "samey." As if you're listening to the same song for 10 minutes but it's actually all different songs. There's not much substance to any of the actual songs, and there's also a very clear lack of sampling. I don't want to include lack of sampling as its own point, because it's entirely possible to compose music in-genre without sampling at all. However, none of these channels use samples. Not a single one, out of any of their several dozen three hour uploads.
8) Very Short Intervals Between Uploads
One of the things that made me start thinking "holy shit, are all these new playlists that're popping up AI generated?" was the upload dates. These channels will seemingly push out a new mix every couple of days, sometimes even every day.
[screenshots of channel upload feeds]
Bro, you are NOT writing that much over that period of time.
9) Sometimes, They'll Just Admit It
[screenshot of a caption admitting AI generation]
youtube has also thankfully started flagging some of them with the tiny little disclaimer in the caption, which imo isn't enough seeing as they paste a whole AI-generated novel in the caption half the time, but at least it's something.
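For illustration only, the red flags above could be rolled into a toy scoring script; the metadata field names and thresholds here are invented for the sketch, not taken from any real API:

```python
# Each red flag from the list above, as a predicate over hypothetical video metadata.
RED_FLAGS = {
    "no tracklist": lambda v: not v.get("tracklist"),
    "3+ hour runtime": lambda v: v.get("duration_hours", 0) >= 3,
    "year in title": lambda v: any(str(y) in v.get("title", "") for y in range(1980, 2030)),
    "copyright-obsessed caption": lambda v: "strictly prohibited" in v.get("caption", "").lower(),
}

def suspicion(video):
    """Return the names of every red flag this video trips."""
    return [name for name, check in RED_FLAGS.items() if check(video)]

video = {
    "title": "Neon Dreams [ 1995 ]",
    "duration_hours": 3.5,
    "caption": "Reposting This Content In Any Form is Strictly Prohibited!",
    "tracklist": [],
}
print(suspicion(video))  # all four flags trip
```

None of these signals proves anything on its own, which is the point of the list above: it's the combination that gives these channels away.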
Channels I Can 100% Confirm Are AI:
•Retropical Records
-alts: Eternal Past (Weirdcore/Ambient), Nebula Breeze (Jazz?), and Dunes of Time (??? I can't be assed to click on any of those videos to find out, not gonna lie.)
•Utopic.Dreamer
•Luminescence
•Music Farm
•dream.surfer
•devs.fm
• FOR WEIRDCORE LISTENERS: aurora.heaven
-likely run by whoever's running utopic and dream surfer
Thank you so much for reading this through to the end. It's such a shame to see this genre go down this path and I hope we as creators can do something to offset it.
May your journey into the eternal mall be pleasant and AI-Free <3
50 notes · View notes
sachisei · 11 months ago
Text
arcanistsanctum is gone. we're back.
Update here.
Message below the cut is outdated.
Depressing news. I woke up today to find the blog gone. I expected it to happen at some point, but not this soon. If it got DMCA'd, I'm supposed to receive a notice of the claims, but I did not. So I plan to reach out to Tumblr Support soon regarding this.
Unfortunately, I didn't back up the blog. Not the contents nor the code. I feel devastated. I regret not backing it up. I almost couldn't find the energy to claim it back. If we're not getting it back, honestly, I don't think I will be motivated enough to re-create it again. I'd say it was a good run and all that. But before that, I will try to get it back for you guys. And again, if not, I think the arcanistsanctum blog will deviate from archiving official media, especially if the reason for the shutdown is DMCA or Copyright Infringement. I read the Reverse:1999 terms before, and all I can say is that they have ambiguous terms there, especially relating to copyright. Basically, they mentioned they reserve the right to take down content related to Reverse:1999 without explanation or warning. That may be the case here. I won't press too hard on that matter. It is their IP, after all. It is what it is.
This is the first time I've had a tumblr blog taken down. I'm new to the process and stuff, so (sigh) bear with me y'all.
Nothing changes for me uploading Reverse 1999 reaction videos on youtube. I'm still editing my reactions to the 1.7 Version Update. I'll notify here once I publish the video.
I wish you guys well, and I apologize for the lack of preparation on my part. I'm not sure how soon we'll hear back from Tumblr. Until then, I can be contacted here. Of course, I'll update you guys on the status of arcanistsanctum here.
69 notes · View notes
mariacallous · 1 year ago
Text
YouTube’s Content ID system—which automatically detects content registered by rights holders—is “completely fucking broken,” a YouTuber called “Albino” declared in a rant on the social media site X that has been viewed more than 950,000 times.
Albino, who is also a popular Twitch streamer, complained that his YouTube video playing through Fallout was demonetized because a Samsung washing machine randomly chimed to signal a laundry cycle had finished while he was streaming.
Apparently, YouTube had automatically scanned Albino's video and detected the washing machine chime as a song called “Done”—which Albino quickly saw was uploaded to YouTube by a musician known as Audego nine years ago.
But when Albino hit Play on Audego's song, the only thing that he heard was a 30-second clip of the washing machine chime. To Albino it was obvious that Audego didn't have any rights to the jingle, which Dexerto reported actually comes from the song "Die Forelle" (“The Trout”) from Austrian composer Franz Schubert.
The song was composed in 1817 and is in the public domain. Samsung has used it to signal the end of a wash cycle for years, sparking debate over whether it's the catchiest washing machine song and inspiring at least one violinist to perform a duet with her machine. It's been a source of delight for many Samsung customers, but for Albino, hearing the jingle appropriated on YouTube only inspired ire.
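Systems like Content ID are generally built on audio fingerprinting: hash small windows of audio and check how many hashes two recordings share. A stdlib-only toy sketch (nothing like YouTube's actual matcher, with made-up integer "samples") shows why a short, repetitive chime is such an easy false match:

```python
from hashlib import blake2b

def fingerprint(samples, win=4):
    """Hash every overlapping window of samples into a set of short digests."""
    return {
        blake2b(bytes(samples[i:i + win]), digest_size=4).digest()
        for i in range(len(samples) - win + 1)
    }

def overlap(claimed, video):
    """Fraction of the claimed clip's hashes that also appear in the video."""
    return len(claimed & video) / len(claimed)

chime = [1, 5, 3, 5, 1, 5, 3, 5] * 4            # short, highly repetitive jingle
stream = [9, 2, 7] * 50 + chime + [4, 8] * 50   # long video that happens to contain it

match = overlap(fingerprint(chime), fingerprint(stream))
print(f"{match:.0%} of the claimed clip matches")  # → 100%
```

Because every window of the short clip also appears inside the longer recording, the naive matcher reports a perfect match, regardless of who actually owns the jingle.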
"A guy recorded his fucking washing machine and uploaded it to YouTube with Content ID," Albino said in a video on X. "And now I'm getting copyright claims" while "my money" is "going into the toilet and being given to this fucking slime."
Albino suggested that YouTube had potentially allowed Audego to make invalid copyright claims for years without detecting the seemingly obvious abuse.
"How is this still here?" Albino asked. "It took me one Google search to figure this out," and "now I'm sharing revenue with this? That's insane."
At first, Team YouTube gave Albino a boilerplate response on X, writing, "We understand how important it is for you. From your vid, it looks like you've recently submitted a dispute. When you dispute a Content ID claim, the person who claimed your video (the claimant) is notified and they have 30 days to respond."
Albino expressed deep frustration at YouTube's response, given how "egregious" he considered the copyright abuse to be.
"Just wait for the person blatantly stealing copyrighted material to respond," Albino responded to YouTube. "Ah, OK, yes, I'm sure they did this in good faith and will make the correct call, though it would be a shame if they simply clicked ‘reject dispute,’ took all the ad revenue money and forced me to risk having my channel terminated to appeal it!! XDxXDdxD!! Thanks Team YouTube!"
Soon after, YouTube confirmed on X that Audego's copyright claim was indeed invalid. The social platform ultimately released the claim and told Albino to expect the changes to be reflected on his channel within two business days.
Ars could not immediately reach YouTube or Albino for comment.
Widespread Abuse of Content ID Continues
YouTubers have complained about abuse of Content ID for years. Techdirt's Timothy Geigner agreed with Albino's assessment that the YouTube system is "hopelessly broken," noting that sometimes content is flagged by mistake. But just as easily, bad actors can abuse the system to claim "content that simply isn’t theirs" and seize sometimes as much as millions in ad revenue.
In 2021, YouTube announced that it had invested "hundreds of millions of dollars" to create content management tools, of which Content ID quickly emerged as the platform's go-to solution to detect and remove copyrighted materials.
At that time, YouTube claimed that Content ID was created as a "solution for those with the most complex rights management needs," like movie studios and record labels whose movie clips and songs are most commonly uploaded by YouTube users. YouTube warned that without Content ID, "rights holders could have their rights impaired and lawful expression could be inappropriately impacted."
Since its rollout, more than 99 percent of copyright actions on YouTube have consistently been triggered automatically through Content ID.
And just as consistently, YouTube has seen widespread abuse of Content ID, terminating "tens of thousands of accounts each year that attempt to abuse our copyright tools," YouTube said. YouTube also acknowledged in 2021 that "just one invalid reference file in Content ID can impact thousands of videos and users, stripping them of monetization or blocking them altogether."
To help rights holders and creators track how much copyrighted content is removed from the platform, YouTube started releasing biannual transparency reports in 2021. The Electronic Frontier Foundation, a nonprofit digital rights group, applauded YouTube's "move towards transparency" while criticizing YouTube's "claim that YouTube is adequately protecting its creators."
"That rings hollow," the EFF reported in 2021, noting that "huge conglomerates have consistently pushed for more and more restrictions on the use of copyrighted material, at the expense of fair use and, as a result, free expression." As the EFF saw it then, YouTube's Content ID system mainly served to appease record labels and movie studios, while creators felt "pressured" not to dispute Content ID claims out of "fear" that their channel might be removed if YouTube consistently sided with rights holders.
According to YouTube, "it’s impossible for matching technology to take into account complex legal considerations like fair use or fair dealing," and that impossibility seemingly ensures that creators bear the brunt of automated actions even when it's fair to use copyrighted materials.
At that time, YouTube described Content ID as "an entirely new revenue stream from ad-supported, user generated content" for rights holders, who made more than $5.5 billion from Content ID matches by December 2020. More recently, YouTube reported that figure climbed above $9 billion, as of December 2022. With so much money at play, it's easy to see how the system could be seen as disproportionately favoring rights holders, while creators continue to suffer from income diverted by the automated system.
Despite YouTubers' ongoing frustrations, not much has changed with YouTube's Content ID system over the years. The language used in YouTube's most recent transparency report is largely a direct copy of the original report from 2021.
And while YouTube claims that the Content ID match technology should be "continually" adapted to sustain a "balanced ecosystem," the few most recent updates YouTube announced in 2022 didn't seem to do much to help creators dispute invalid claims.
"We’ve heard the Content ID Dispute process is top of mind for many of you," YouTube wrote in 2022. "You've shared that the process can take too long and can have long-term impact on your channel, specifically when claims result in viewing restrictions or monetization impact."
To address this, YouTube did not expedite the dispute process, which still allows up to 30 days for rights holders to respond. Instead, it expedited the appeals process, which happens after a rights holder rejects a disputed claim and arguably is the moment when the YouTuber's account is most in danger of being terminated.
"Now, the claimant will have 7 days instead of 30 to review the appeal before deciding whether to request a takedown of the video, release the claim, or let it expire," YouTube wrote in 2022. "We hope shortening the timespan of the appeals process helps you get claims resolved much faster!"
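The windows described above reduce to simple date arithmetic; in this sketch the dates are illustrative, and the worst case assumes the claimant waits out both windows:

```python
from datetime import date, timedelta

DISPUTE_WINDOW = timedelta(days=30)  # claimant's time to respond to a dispute
APPEAL_WINDOW = timedelta(days=7)    # claimant's time to review an appeal (was 30)

disputed = date(2024, 6, 1)           # creator disputes the Content ID claim
rejected = disputed + DISPUTE_WINDOW  # worst case: claimant waits it out, then rejects
resolved = rejected + APPEAL_WINDOW   # creator appeals; claimant gets 7 more days

print(f"Worst-case wait: {(resolved - disputed).days} days")  # → 37
```

Shortening only the appeal window trims a month off the back end of the process, but the initial 30-day dispute wait, during which the claimant can collect the video's revenue, is untouched.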
This update would only help YouTubers intent on disputing claims, like Albino was, but not the majority of YouTubers, whom the EFF reported were seemingly so intimidated by disputing Content ID claims that they more commonly just accepted "whatever punishment the system has levied against them." The EFF summarized the predicament that many YouTubers remain stuck in today:
There is a terrible, circular logic that traps creators on YouTube. They cannot afford to dispute Content ID matches because that could lead to DMCA notices. They cannot afford DMCA notices because those lead to copyright strikes. They cannot afford copyright strikes because that could lead to a loss of their account. They cannot afford to lose their account because they cannot afford to lose access to YouTube’s giant audience. And they cannot afford to lose access to that audience because they cannot count on making money from YouTube’s ads alone, partially because Content ID often diverts advertising money to rights holders when there is Content ID match. Which they cannot afford to dispute.
For Albino, who said he has fought back against many Content ID claims, the Samsung washing machine chime triggering demonetization seemed to be the final straw, breaking his patience with YouTube's dispute process.
"It's completely out of hand," Albino wrote on X.
Katharine Trendacosta, a YouTube researcher and the EFF's director of policy and advocacy, agreed with Albino, telling Ars that YouTube's Content ID system has not gotten any better over the years: “It's worse, and it's intentionally opaque and made to be incredibly difficult to navigate" for creators.
"I don't know any YouTube creator who's happy with the way Content ID works," Trendacosta told Ars.
But while many people think that YouTube's system isn't great, Trendacosta also said that she "can't think of a way to build the match technology" to improve it, because "machines cannot tell context." Perhaps if YouTube's matching technology triggered a human review each time, "that might be tenable," but "they would have to hire so many more people to do it."
What YouTube could be doing is updating its policies to make the dispute process less intimidating to content creators, though, Trendacosta told Ars. Right now, the bigger problem for creators, Trendacosta said her research has shown, is not how long it takes for YouTube to work out the dispute process but "the way YouTube phrases the dispute process to discourage you from disputing."
"The system is so discouraging," Trendacosta told Ars, with YouTube warning YouTubers that initiating a dispute could result in a copyright strike that terminates their accounts. "What it ends up doing is making them go, 'You know what, I'll eat it, whatever.'"
YouTube, which has previously dismissed complaints about the Content ID tool by saying "no system is perfect," did not respond to Ars' request for comment on whether any updates to the tool might be coming that might benefit creators. Instead, YouTube's plan seems to be to commiserate with users who likely can't afford to leave the platform over their concerns.
"Totally understand your frustration," Team YouTube told Albino on X.
36 notes · View notes
cheese-water · 2 years ago
Text
I see that the SSSniperWolf vs. Jacksfilms drama has made its way to tumblr so here’s a bit of context:
This began in late July when Jack published his first video criticizing SSSniperWolf’s tendency to steal others’ tiktoks without credit to make reaction videos for her 34.2 million subscriber yt channel. He goes into detail about how her using other people’s content without their knowledge or consent to generate massive amounts of fame and wealth should not be allowed or promoted on youtube, for both ethical and legal reasons. He would then post a second video a month later regarding other shady things SSSniperWolf has done in her reaction vids after thoroughly reviewing her recent uploads. And a third one was uploaded around two weeks ago concerning content thievery as a whole (and Jason Derulo) and how creators can combat it.
I highly suggest watching these three videos if you’re at all curious or a content creator yourself especially the third video. If you want, you can even check out Jack’s jjjacksfilms channel which uploads twitch highlights of Jack playing bbbingo with SSSniperwolf videos (shows ways to recognize signs of freebooted content while also having fun).
However, the best way to support the cause is by taking action. If you see your own content in someone else’s video without your permission, for instance your tiktok in a SSSniperwolf compilation, you have every right to file a copyright claim through youtube. We have seen this work before (the first instance of it happening with SSSniperwolf, and the process to submit a copyright form through yt, are described in Jack’s third video), which only gives us hope that creators are not powerless over their own content. If you know or see someone’s work being stolen or not credited by channels like SSSniperwolf, please inform them if they weren’t already aware of their stolen content and their ability to file a removal request if they so wish. You cannot file a copyright claim for someone else.
And with Jack getting fucking doxxed by SSSniperWolf herself, there’s no better time than now to spread the word.
Credit the Creators.
143 notes · View notes
kaizsche · 8 months ago
Note
Any chance you’ll upload the full video of the Twisters featurette? 🥺
omg hii!!! honestly idk how to upload them without them getting taken down for copyright??? idk. maybe i'll try on youtube/twitter/google drive. i'll just post the link when it's done!
8 notes · View notes
lobautumny · 2 months ago
Text
So, Alex Avila uploaded a very good 3-hour-long video yesterday about AI. This toy can't meaningfully summarize it because it's very long and dense, but it's absolutely worth the watch for anyone who cares at all about the argument surrounding AI. The video is largely about the most common narratives you see people use against AI, dissecting the core reasons people believe them, the rhetorical devices behind them, the problems/limitations of these arguments, who is propagating them, and who stands to benefit from them. The video is not demonizing AI detractors, but rather pointing out that there is a kind of carelessness with how people tend to talk about the subject, and this carelessness can be/is being abused by bad actors. A particularly noteworthy example is how ultramassive media conglomerates like Disney and Warner are using the fact that "AI art is theft" is a very common talking point as a wedge to try to massively strengthen copyright law, which could lead to such outcomes as entire art styles being copyrightable if they get their way, and this plan relies on the masses uncritically believing the narrative that all use of stable diffusion constitutes copyright infringement, or that if it doesn't already, it should.
Once again, the video is very good and absolutely worth the watch, and this toy came out of it with two major thoughts:
Damn, it really wants to make some art now.
It really wants to make a post detailing its own thoughts on AI because the video has given it a lot of food for thought.
This toy fucking hates the current landscape of discussion surrounding AI. You're not allowed to hold a nuanced opinion. If you express any skepticism towards the capabilities of AI or criticism towards the ways in which the technology is being marketed to and utilized by consumers, the AI bros will ostracize you for not being a true believer. If you express any sentiment that AI may have legitimate use-cases, you get ostracized by the anti-AI crowd because everyone assumes you're a corporate bootlicker who fully supports every aspect of the industry. And so, no intelligent discussion can occur, and everything is bound to just continue to slowly get worse until we all die.
Here's a concept: You can simultaneously believe that the technology that allows people to communicate with each other over the internet and share files and stream videos and host livestreams and whatnot is useful and valuable while also believing that there are massive problems with platforms like Facebook, Twitter, Instagram, Youtube, Tiktok, Snapchat, Discord, Twitch, Reddit, Tumblr, and many more, and that a lot of the time, these platforms and the corporations that run them serve as a force of evil within our society. The fact that these platforms are designed to foster extremely unhealthy relationships between the users and the technology, radicalize people into dangerous ideologies (which has directly caused a very large number of real deaths), and contribute to the dystopian mass surveillance state we now live in, does not mean we need to throw out the baby with the bathwater and claim that there is no possible legitimate value to be found in the concept of online platforms that let you communicate and share files/videos/whatever with other people.
One would think this is obvious, yet to many people, this exact logic simply does not apply when the subject is AI.
Yes, AI products have an overwhelming tendency to be predatory and falsely marketed. Yes, AI is being used by businesses to cut costs (especially in customer service) by performing mass layoffs without an equivalent number of new jobs opening up, and this practice is also typically making the services these companies offer substantially worse as a direct result of this. Yes, AI is being used to cynically commodify art. Yes, the industry behind LLMs is pushing non-solutions to problems we've already solved while also creating a highly-predatory industry marketing AI companions to the lonely and desperate. Yes, AI is being used for social media scams and the propagation of misinformation. Yes, AI does use energy (though as Alex's video covers, we don't actually have any concrete idea how much energy it uses and most of the statistics that have been circulating are either pointless conjecture or sourced from the energy industry, which has a direct financial incentive to exaggerate projected energy demands). Yes, ChatGPT being made 4% less racist came at a massive cost of real people in underprivileged countries performing menial labor sifting through industrial quantities of horrible and abusive content for slave wages. Yes, there are many more problems this toy could point out, but this paragraph is getting pretty long at this point.
But does all of this mean that the technology is inherently worthless and should not exist under any circumstances? No. In fact, aside from a few hyper-specific issues (such as google image search results getting clogged with AI slop), the vast majority of these issues aren't actually new, once again, as Alex's video points out. They're exacerbations of pre-existing problems within our society, much like the problems that have arisen with social media.
If this toy wants to be morally consistent and principled, and commit itself to arguing rationally in good faith, then:
It cannot, as a copyright abolitionist and collage artist who is a fan of Youtube Poop, argue that the use of stable diffusion is akin to theft.
It cannot, as someone who understands how the technology works, argue that the use of stable diffusion is akin to plagiarism.
It cannot be a proponent of features in art programs like Photoshop's content-aware fill tool (which has existed since long before the AI craze, for the record) and say that it sees no potential for technology like stable diffusion to be useful as a tool within the creation of art.
It cannot be a fan of Marcel Duchamp and claim that stable diffusion inherently cannot be a valid artistic medium.
It cannot look at the myriad of ways in which neural networks are being used by scientists and mathematicians right now to solve problems that were previously impossible (or at least practically infeasible) without this technology and say that neural networks are pointless.
It cannot look at the genuine use-cases for LLMs (primarily extremely deep processing of mass quantities of highly-varied data) and say that LLMs are worthless.
So, it is forced to believe that yes, there is, in fact, value to be had in this technology. But make no mistake, it is not a corporate bootlicker, nor is it trying to enlightened-centrist "there's good points on both sides" the argument. As this toy hopes it has made abundantly clear by now, it is highly critical of the ways in which large corporations are pushing this technology on people and the ways in which the average end user is using it. It is possible to simultaneously believe that there is genuine artistic value to be had in stable diffusion and that people trying to cynically use it to replace artists and automate creativity, envisioning a future in which there are no artists at all, is profoundly shitty and tangibly hurting artists. It is possible to see value in the potential of LLMs as a scientific tool while also pointing out how stupid and predatory LLM chatbots are, and how they are currently aiding in the spread of mass misinformation, and how people using them to cheat on their college homework is bad.
And no, the fact that AI is used in so many harmful ways does not, in fact, make absolutely everyone who uses this technology to any extent for any reason bad. Going back to the social media comparison, this toy believes it is extremely irrational and hypocritical to believe that everyone who uses AI at all is ontologically evil if you do not also believe that everyone who uses Twitter at all is ontologically evil.
In conclusion, sometimes subjects are nuanced and require you to form complicated and inconvenient opinions, lest we all become a bunch of reactionary shitheads who argue in bad faith trying to hastily justify our initial knee-jerk reactions to things without ever forming a deeper understanding of what we are reacting to.
Text
Video Agent: The Future of AI-Powered Content Creation
The rise of AI-generated content has transformed how businesses and creators produce videos. Among the most innovative tools is the video agent, an AI-driven solution that automates video creation, editing, and optimization. Whether for marketing, education, or entertainment, video agents are redefining efficiency and creativity in digital media.
In this article, we explore how AI-powered video agents work, their benefits, and their impact on content creation.
What Is a Video Agent?
A video agent is an AI-based system designed to assist in video production. Unlike traditional editing software, it leverages machine learning and natural language processing (NLP) to automate tasks such as:
Scriptwriting – Generates engaging scripts based on keywords.
Voiceovers – Converts text to lifelike speech in multiple languages.
Editing – Automatically cuts, transitions, and enhances footage.
Personalization – Tailors videos for different audiences.
These capabilities make video agents indispensable for creators who need high-quality content at scale.
How AI Video Generators Work
The core of a video agent lies in its AI algorithms. Here’s a breakdown of the process:
1. Input & Analysis
Users provide a prompt (e.g., "Create a 1-minute explainer video about AI trends"). The AI video generator analyzes the request and gathers relevant data.
2. Content Generation
Using GPT-based models, the system drafts a script, selects stock footage (or generates synthetic visuals), and adds background music.
3. Editing & Enhancement
The video agent refines the video by:
Adjusting pacing and transitions.
Applying color correction.
Syncing voiceovers with visuals.
4. Output & Optimization
The final video is rendered in various formats, optimized for platforms like YouTube, TikTok, or LinkedIn.
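The four-step process above can be pictured as a simple orchestration pipeline. The sketch below is a hypothetical illustration only: every function is a stub standing in for a real model, asset-library, or rendering call, and all the names are invented for this sketch rather than taken from any actual product's API.

```python
def analyze_prompt(prompt: str) -> dict:
    """Step 1 (Input & Analysis): parse the user's request into a working brief."""
    return {"prompt": prompt, "topic": prompt.rstrip(".")}

def generate_content(brief: dict) -> dict:
    """Step 2 (Content Generation): draft a script and pick assets.
    A real agent would call an LLM and a stock/synthetic asset library here."""
    brief["script"] = f"[draft script for: {brief['topic']}]"
    brief["assets"] = ["stock_clip_01.mp4", "music_bed.mp3"]
    return brief

def edit_and_enhance(brief: dict) -> dict:
    """Step 3 (Editing & Enhancement): pacing, color correction, voiceover sync."""
    brief["edits"] = ["trim pacing", "color correction", "sync voiceover"]
    return brief

def render_outputs(brief: dict, platforms: list[str]) -> dict:
    """Step 4 (Output & Optimization): one rendered variant per target platform."""
    return {p: f"{brief['topic'][:20]}_{p}.mp4" for p in platforms}

def run_video_agent(prompt: str, platforms: list[str]) -> dict:
    """Chain the four stages into a single prompt-to-renders call."""
    brief = edit_and_enhance(generate_content(analyze_prompt(prompt)))
    return render_outputs(brief, platforms)

if __name__ == "__main__":
    outputs = run_video_agent(
        "Create a 1-minute explainer video about AI trends",
        ["youtube", "tiktok", "linkedin"],
    )
    print(outputs)  # one rendered file name per platform
```

The point of the structure, rather than the stubs, is that each stage consumes and enriches the same brief, which is why these systems can swap in different models per stage.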
Benefits of Using a Video Agent
Adopting an AI-powered video generator offers several advantages:
1. Time Efficiency
Traditional video production takes hours or days. A video agent reduces this to minutes, allowing rapid content deployment.
2. Cost Savings
Hiring editors, voice actors, and scriptwriters is expensive. AI eliminates these costs while maintaining quality.
3. Scalability
Businesses can generate hundreds of personalized videos for marketing campaigns without extra effort.
4. Consistency
AI ensures brand voice and style remain uniform across all videos.
5. Accessibility
Even non-experts can create professional videos without technical skills.
Top Use Cases for Video Agents
From marketing to education, AI video generators are versatile tools. Key applications include:
1. Marketing & Advertising
Personalized ads – AI tailors videos to user preferences.
Social media content – Quickly generates clips for Instagram, Facebook, etc.
2. E-Learning & Training
Automated tutorials – Simplifies complex topics with visuals.
Corporate training – Creates onboarding videos for employees.
3. News & Journalism
AI-generated news clips – Converts articles into video summaries.
4. Entertainment & Influencers
YouTube automation – Helps creators maintain consistent uploads.
Challenges & Limitations
Despite their advantages, video agents face some hurdles:
1. Lack of Human Touch
AI may struggle with emotional nuance, making some videos feel robotic.
2. Copyright Issues
Using stock footage or AI-generated voices may raise legal concerns.
3. Over-Reliance on Automation
Excessive AI use could reduce creativity in content creation.
The Future of Video Agents
As AI video generation improves, we can expect:
Hyper-realistic avatars – AI-generated presenters indistinguishable from humans.
Real-time video editing – Instant adjustments during live streams.
Advanced personalization – AI predicting viewer preferences before creation.
glassprism · 1 year ago
Note
I may not explain this the best but are there any sort of unofficially "public domain" bootlegs? Like older ones that have been posted to YouTube or have had so many clips posted from them that people simply don't care anymore? Or are people who post clips without explicit permission but they aren't really leaks because they've already been publicly shared? I'm only asking because I'm unfamiliar with the expectations of trading and curious about how those things are viewed
No worries, I think I get what you're asking, and it's something I've discussed in other places, like Discord, but not so much here.
Anyway, I think Phantom is an interesting show in that it's so old, and the people in charge of copyright striking bootlegs are so lax, that I think there's less fuss in general about people uploading clips or even the full show to YouTube compared to newer shows where the creators are more likely to go after bootlegs and their recorders. So yes, it should count as a "leak", but as the saying goes - if a bootleg is uploaded and the filmer is no longer around, does it create any sort of trouble for them?
Another thing too is that many of the original filmers have long since left trading and are unlikely to care what happens to their videos, which again, is just because Phantom is an older show (I know of one Phantom filmer who has actually passed away, that's how old some filmers are). It also helps that Phantom has had many international productions that closed years ago, which again means it's less likely that there's anybody in the company who's going to come after a bootleg being uploaded. Like, do we really think the original Hamburg production, which closed in 2001, is going to go after the filmer of a bootleg from 1997? Probably not, so nobody's really going to make a fuss if you upload a video of it.
(Incidentally, this is also probably why members of the cast and crew will start leaking audios and videos - after enough time has passed, who cares? I'm almost certain the Mexico City proshot of Phantom was from a cast or crew member, as was the monitor video of the original Stockholm production, and I think everyone knows by now that it was Anton Zetterholm himself who decided to toss out the proshot of the Stockholm revival.)
Additionally, other filmers might be around but are not on social media because again, an older show means an older generation of filmers, meaning even if they did care about their stuff getting uploaded, well, it's unlikely they'll find out. Compare this to a musical like Six the Musical where the fans tend to be newer and younger, meaning it's very likely that the filmers are newer and younger, are still in the trading and bootlegging scene, are very active and adept at moving about on social media, and will take serious umbrage if you go uploading their stuff, especially combined with the fact that the company is more likely to come after them for bootlegging.
And yes, some bootlegs have been uploaded so many times that it feels like fighting the current to object at all, especially when there are no repercussions. For example, the video of the original Broadway cast of Phantom has been posted so much, and by so many different people, that I don't think anybody cares at this point. That can certainly count as what you might consider a "public domain" video, in that so many people have it, have uploaded it, or have seen it that it's just taken as a free-for-all, do-what-you-want video.
That said, there are some unofficial rules that people tend to follow. Stuff filmed by people who are still active in the trading scene tends to not get uploaded. Stuff from post-2019 or so is a bit uncommon, again tying into the fact that people who filmed more recent boots are still very much around. If a filmer asks you to take something down, take it down. Name it something obscure (but that does not involve the words "slime tutorial" anywhere).
Anyway, I hope that answers your question!
greatwyrmgold · 6 months ago
Text
I've been thinking about Team Four Star.
For reasons I'm not privy to, TFS decided to end their flagship Dragon Ball Z Abridged series six years ago. On Christmas Day, 2018, they uploaded the Cell Arc epilogue, and that was that. For a while there were plans to do the Bojack movie and the Buu Arc, but I guess they lost their passion or struggled over creative disagreements or just got sick of Shueisha's copyright claims.
So TFS pivoted to other creative projects, hoping to use the skills they developed over a decade of abridging anime and the audience they gathered to launch new projects. How did those go?
DBZA season 3 videos generally have 10-20 million views. The roughly contemporaneous Hellsing Ultimate Abridged and FF7 Machinabridged series have less; HUA hovers around ten million views an episode, FF7M about a million. TFS also have a few other anime-parody-ish projects, like the intermittent "X Minutes" series. "Demon Slayer in 6 Minutes!" has about six million views, but their other X Minutes (and similar videos with different branding) hover around a million views.
TFS tried all-original projects, too. They've made a few video essays; the best-performing by far is "Beastars is Beastars," with almost 800K views. The others have about an eighth to a quarter as many views.
More notable are the TFS Originals, which include individual skits and whole series alike. Most of these do about as well as the video essays, with view counts between 60k and 140k. The pilots for Fist Master and Unabridged have almost half a million views each. Unabridged is by far the most popular of their series; I don't know if that's because more than two episodes were made, or if those episodes were made because it was more popular.
Good numbers by the standards of general YouTube. Not so good by the standards of channels with 4.3 million subscribers and a series which drew tens of millions of views for a decade.
And of course, TFS still does Dragon Ball stuff. There are the Dragon ShortZ, and the HFIL series. Both start with view counts above five million, with the most recent entries having around a million views. That's lower than any DBZA episode or movie, even the awkwardly-bifurcated Christmas Tree of Might, but it's better than anything else they've been uploading.
Well, until the Buu Bits came along. Little fragments of what DBZA Season 4 might have been, they were made for TotallyNotMark's Buu Arc retrospective. And people loved them. View counts for most individual shorts are above two million, and the full compilation video has six and a half million views at time of writing.
That's wildly better than basically anything TFS has done since Hellsing Ultimate Abridged. The Buu Bits compilation was even more popular than the last episodes of Dragon Ball Abridged Kai, a series only technically distinct from DBZA.
This is what their audience wants.
Team Four Star wants to move on. They started three original animated series between 2018 and 2021, for Kai's sake! But they only got 1-2 episodes each, probably because animation is hard. Team Four Star isn't just a bunch of peers and friends making silly internet videos any more; it's their job.
If TFS is going to spend all the time and effort and money needed to make more DieselDust, they need to be able to see some return on that investment. They can't pay everyone a living wage on the ad revenue from less than a hundred thousand views, sponsorships aren't gonna pay much without more viewers, and good luck selling Kadence merch.
You know what does pay? More stuff that's DBZA-adjacent, but original enough that it doesn't trip YouTube's copyright bots. Team Four Star's fans will come back and watch Abridged!Cell and Abridged!Guru and the other villains tooling around in HFIL, and maybe some will buy an orange!Piccolo "DODGE" shirt or buy their sponsor's stuff while they're there.
That's sad. Team Four Star aren't master storytellers, but they shouldn't have to be to pursue their passion. They have so much more backing them up than 99.9% of artists; they have a platform and millions of fans and more than a decade of work proving their skills.
But that's not enough. They tried, and they failed, and they're returning to the old hits. Their original series aren't unsuccessful, but they're not successful enough to justify making more. TFS gets more eyeballs on their DBZA creator commentary videos than they do for original series.
TFS didn't want to make more DBZA (if they did, they would have), so they compromised. Most of their videos from the past couple years are Dragon-Ball-related in some way; HFIL skits, or Buu Bits, or Dragon ShortZ, or the aforementioned creator commentary. They wrapped up Unabridged last year; that was their most successful original series, and they haven't replaced it with anything.
I'm not sure how to wrap this up. I've followed TFS for about a decade, for most of DBZA season 3 and beyond. I wish I could say I've watched them grow creatively, but while they're clearly more skilled than they were when they made DBZA season 1, they've been stuck in a rut by their material circumstances and business needs. HFIL and Dragon ShortZ are basically DBZA, but freed from the limitations of editing DBZ footage and sticking to DBZ's story.
cinesexual · 6 months ago
Text
Before YouTube deleted my account and banned me, this was the most commented-upon film on my channel. Find out why.
Discover thousands more gay-themed movies.
From Perplexity:
Under the Waters is a 2022 short film directed by Ambiecka Pandit that explores complex themes of queer desire and the emotional turmoil surrounding it. The film, which runs for 17 minutes, has garnered attention for its sensitive portrayal of a sexual transgression between two teenage boys during a family vacation at the beach.
Plot Overview
The narrative centers on Sarang, a pubescent boy, and Mihir, an older teenage male, as they navigate their feelings in the context of a seaside family gathering. The film captures the tension between anger and longing, fear and denial, as Mihir attempts to teach Sarang how to swim. This seemingly innocent interaction escalates when Mihir's hand slips, leading to a moment of inappropriate contact that profoundly affects both boys. Sarang's internal conflict is palpable as he grapples with feelings of horror, desire, and betrayal, ultimately retreating into silence and confusion as their relationship shifts dramatically.
Themes and Style
Ambiecka Pandit’s direction is noted for its empathetic approach, allowing the audience to witness the subtle complexities of teenage desire without moralizing the experience. The film delves into the confusion and shame often associated with emerging sexuality, portraying the characters' emotional states through nuanced performances and intimate cinematography. The editing is tight, ensuring that each moment contributes to the overall emotional landscape, creating a sense of urgency and tension that culminates in a powerful climax.
Critical Reception
The film has received positive reviews for its bold storytelling and the performances of its lead actors, particularly Nishant Bhavsar as Sarang. Critics have praised Bhavsar for his ability to convey a wide range of emotions, from vulnerability to burgeoning confidence as he navigates his feelings for Mihir. The film has been showcased at notable festivals, including the Indian Film Festival of Los Angeles and the Dharamshala International Film Festival, highlighting its relevance in contemporary discussions about queer narratives in cinema.

Overall, Under the Waters stands out as a poignant exploration of desire and the complexities of adolescent relationships, making it a significant contribution to the landscape of queer cinema in India.
Defend culture, not copyright.
earlgreytea68 · 2 years ago
Text
In one of my classes I do a little unit on bootleg concert videos posted on YouTube and whether they should be considered copyright infringement and my students are all so unconcerned about this question and I am just like, ...DO YOU NOT SURVIVE ON CONCERT VIDEOS UPLOADED BY KIND SOULS TO YOUTUBE???? HOW CAN YOU POSSIBLY ENVISION LIFE WITHOUT THEM?????
Like I think it is horrifying to contemplate the disappearance of concert videos from our lives, and this was before (tour)dust! This year I was teaching this unit right after the Metro show and all I was watching was just the video some wonderful person uploaded of the Metro show, like, I just watched it over and over as comfort television, and my students just looked so bewildered at me, like, "who cares this much about this?"
It must be relaxing to not be obsessed with a band, you must get so much more done lolol
samhaft · 1 year ago
Note
hello!!
is it (legally) required to have some sort of permission to upload or publish remixes of hazbin hotel songs?
Short answer yes, long answer it depends what you mean by publish, and I love that I get in the weeds with music business stuff.
So - the recordings themselves and all pieces of them, like the instrumental stems and vocals, are all part of the A24-owned “master recordings” which you cannot use or sample or remix without a license from them. Technically you could make an un-monetized video or tiktok or something and probably won’t get in trouble for it, but they’ll have the rights to monetize it or take it down. Spotify/distrokid/any form of real distribution you for sure cannot do without a license.
That’s remixes - but covers are interesting. You can actually formally distribute song covers thanks to something called a “mandatory mechanical license” whereby if your distributor gives you the option (I know distrokid does) you can mark that the song is a cover, and it’ll fetch a cover license (which are generally automatically approved) from (usually) the Harry Fox Agency in the US. Now this does NOT give you permission to remix, interpolate, change, or put the song to video - THAT requires a license from the master recording owner (as described above) as well as the copyright owner (song copyright ownership is also known as ‘publishing’). This is how Taylor Swift was able to re-record her “Taylor’s Versions” of songs - she didn’t have control over her master recordings, but because she owned the copyright to those songs, she was able to make new master recordings from scratch.
Ultimately fanmade remixes often do not get taken down or get copyright strikes unless the remix in question is being monetized and distributed to platforms, because generally it is in content owners’ interest to allow social media users to make “UGC” (user generated content) as ultimately it’s more attention to that piece of music, but it IS up to the content owner’s discretion to decide whether to take action against it - and sometimes systems like YouTube’s Content ID will flag and fingerprint that content automatically without the content owner needing to see or ID the thing you made.
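(For intuition, fingerprint-based matching of the kind Content ID performs can be pictured with a toy sketch. This is emphatically not YouTube's actual algorithm - real systems fingerprint spectral features of the audio and video - it just shows the flag-by-overlap idea: hash small windows of the content, then flag an upload when enough of its windows also appear in a registered recording.)

```python
def fingerprint(samples, window=4):
    """Hash every overlapping window of samples into a set of 'prints'."""
    return {hash(tuple(samples[i:i + window]))
            for i in range(len(samples) - window + 1)}

def likely_match(reference, upload, threshold=0.3):
    """Flag the upload if enough of its windows also occur in the reference."""
    ref_prints = fingerprint(reference)
    up_prints = fingerprint(upload)
    if not up_prints:
        return False
    overlap = len(ref_prints & up_prints) / len(up_prints)
    return overlap >= threshold

song = list(range(100))           # stand-in for a registered recording
remix = song[20:60] + [999, 998]  # a clip of the song plus new material
unrelated = [7, 7, 7] * 20        # content with no windows in common

print(likely_match(song, remix))      # True - most windows overlap
print(likely_match(song, unrelated))  # False
```

The threshold is what makes this a business decision as much as a technical one: the content owner (or platform) tunes how much overlap counts as a claim.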
And unfortunately I cannot personally help anyone get cover or remix licenses to the Hazbin songs, that’s all A24 Music - I’ve been getting a lot of emails to that effect and I’m sorry I can’t be more helpful there.
Hope this was informative!