#create AI videos with Google
wherenightmaresroost · 3 months ago
Text
the problem with ai isn't that it's ai. it's:
1. evil companies pushing ai to devalue labor and creative products.
2. misconceptions about how ai works, leading to people attributing to it an intelligence and sentience it does not have, which feeds into
3. misinformation, the decrease in effort needed to create disinformation, and the sudden increase of skill needed to spot falsified info.
3a. this includes content creators using ai to flood searches with low-quality articles and inaccurate photos, people not being transparent when they use ai for their images, and other things that make it harder to do casual research online.
4. less need to do the hard work that polishes skill, leading to over-reliance on a very flawed tool.
the tool itself isn't the problem. it just exacerbates things that were already problems before.
33 notes · View notes
vague-humanoid · 7 months ago
Text
At the California Institute of the Arts, it all started with a videoconference between the registrar’s office and a nonprofit.
One of the nonprofit’s representatives had enabled an AI note-taking tool from Read AI. At the end of the meeting, it emailed a summary to all attendees, said Allan Chen, the institute’s chief technology officer. They could have a copy of the notes, if they wanted — they just needed to create their own account.
Next thing Chen knew, Read AI’s bot had popped up in about a dozen of his meetings over a one-week span. It was in one-on-one check-ins. Project meetings. “Everything.”
The spread “was very aggressive,” recalled Chen, who also serves as vice president for institute technology. And it “took us by surprise.”
The scenario underscores a growing challenge for colleges: Tech adoption and experimentation among students, faculty, and staff — especially as it pertains to AI — are outpacing institutions’ governance of these technologies and may even violate their data-privacy and security policies.
That has been the case with note-taking tools from companies including Read AI, Otter.ai, and Fireflies.ai. They can integrate with platforms like Zoom, Google Meet, and Microsoft Teams to provide live transcriptions, meeting summaries, audio and video recordings, and other services.
Higher-ed interest in these products isn’t surprising. For those bogged down with virtual rendezvouses, a tool that can ingest long, winding conversations and spit out key takeaways and action items is alluring. These services can also aid people with disabilities, including those who are deaf.
But the tools can quickly propagate unchecked across a university. They can auto-join any virtual meetings on a user’s calendar — even if that person is not in attendance. And that’s a concern, administrators say, if it means third-party products that an institution hasn’t reviewed may be capturing and analyzing personal information, proprietary material, or confidential communications.
“What keeps me up at night is the ability for individual users to do things that are very powerful, but they don’t realize what they’re doing,” Chen said. “You may not realize you’re opening a can of worms.”
The Chronicle documented both individual and universitywide instances of this trend. At Tidewater Community College, in Virginia, Heather Brown, an instructional designer, unwittingly gave Otter.ai’s tool access to her calendar, and it joined a Faculty Senate meeting she didn’t end up attending. “One of our [associate vice presidents] reached out to inform me,” she wrote in a message. “I was mortified!”
24K notes · View notes
thecoppercompendium · 8 months ago
Text
So, you want to make a TTRPG…
Image from Pexels.
I made a post a long while back about what advice you would give to new designers. My opinions have changed somewhat on what I think beginners should start with (I originally talked about probability) but I thought it might be useful to provide some resources for designers, new and established, that I've come across or been told about. Any additions to these in reblogs are much appreciated!
This is going to be a long post, so I'll continue beneath the cut.
SRDs
So, you have an idea for a type of game you want to play, and you've decided you want to make it yourself. Fantastic! The problem is, you're not sure where to start. That's where System Reference Documents (SRDs) can come in handy. There are a lot of games out there, and a lot of mechanical systems designed for those games. Using one of these as a basis can massively accelerate and smooth the process of designing your game. I came across a database of a bunch of SRDs (including the licenses you should adhere to when using them) a while back, I think from someone mentioning it on Tumblr or Discord.
SRDs Database
Probability
So, you have a basic system but want to tweak it to work better with the vision you have for the game. If you're using dice, this is where you might want to consider probability. Not every game needs this step, but it's worth checking that the numbers tell the story you're trying to tell with your game. For this, I'll link the site I did in that first post, AnyDice. It allows you to do a lot of mathematical calculations using dice, and see the probability distribution that results for each. There's documentation that explains how to use it, though it does take practice.
AnyDice
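If you'd rather script this kind of check than use a website, the same calculation is easy in plain Python. Here's a minimal sketch (not affiliated with AnyDice) that enumerates every outcome of a dice pool and prints the exact probability of each total:

```python
from itertools import product
from collections import Counter

def distribution(num_dice, sides):
    """Exact probability distribution for the sum of num_dice dice."""
    totals = Counter(sum(roll)
                     for roll in product(range(1, sides + 1), repeat=num_dice))
    outcomes = sides ** num_dice
    return {total: count / outcomes for total, count in sorted(totals.items())}

# Print the classic 2d6 bell curve
for total, prob in distribution(2, 6).items():
    print(f"{total:2d}: {prob:6.2%}")
```

Brute-force enumeration like this is fine for small pools; AnyDice's own language handles bigger pools and fancier mechanics (exploding dice, keep-highest, and so on) far more conveniently.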
Playtesting
So you've written the rules of your game and want to playtest it but can't convince any of your friends to give it a try. Enter Quest Check. Quest Check is a website created by Trekiros for connecting potential playtesters to designers. I can't speak to how effective it is (I've yet to use it myself) but it's great that a resource like it exists. There's a video he made about the site, and the site can be found here:
Quest Check
Graphic Design and Art
Game is written and tested? You can publish it as-is, or you can make it look cool with graphics and design. This is by no means an essential step, but is useful if you want to get eyes on it. I've got a few links for this. First off, design principles:
Design Cheatsheet
Secondly, art. I would encourage budding designers to avoid AI imagery. You'll be surprised how good you can make your game look with only shapes and lines, even if you aren't confident in your own artistic ability. As another option, public domain art is plentiful, and is fairly easy to find! I've compiled a few links to compilations of public domain art sources here (be sure to check the filters to ensure it's public domain):
Public Domain Sources 1
Public Domain Sources 2
You can also make use of free stock image sites like Pexels or Pixabay (Pixabay can filter by vector graphics, but has recently become much more clogged with AI imagery, though you can filter out most of it, providing it's tagged correctly).
Pexels
Pixabay
Fonts
Turns out I've collected a lot of resources. When publishing, it's important to bear in mind that anything you use has to be licensed for commercial use if you plan to sell your game. One place this can slip through is fonts. Enter my saviour (and eternal time sink), Google Fonts. The SIL Open Font License (OFL) places minimal restrictions on what you can do with a font, and most fonts there are available under it:
Google Fonts
Publishing
So, game is designed, written, and formatted. Publishing time! There are two places I go to publish my work: itch.io and DriveThruRPG. For beginners I would recommend itch - there are fewer hoops to jump through and you keep a much better cut of what you sell your games for, but DriveThruRPG has its own merits (@theresattrpgforthat made great posts here and here for discovering games on each). Itch in particular has regular game jams to take part in to inspire new games. I'll link both sites:
itch.io
DriveThruRPG
Finally, a bunch of other links I wasn't sure where to put, along with a very brief summary of what they are.
Affinity Suite, the programs I use for all my layout and designing. Has an up-front cost to buy but no subscriptions, and has a month-long free trial for each.
Affinity Suite
A database of designers to be inspired by or work with. Bear in mind that people should be paid for their work and their time should be respected.
Designer Directory
An absolute behemoth list of resources for TTRPG creators:
Massive Resources List
A site to make mockups of products, should you decide to go that route:
Mockup Selection
A guide to making published documents accessible to those with visual impairments:
Visual Impairment Guidelines
A post from @theresattrpgforthat about newsletters:
Newsletter Post
Rascal News, a great place to hear about what's going on in the wider TTRPG world:
Rascal News
Lastly, two UK-specific links for those based here, like me:
A list of conventions in the UK & Ireland:
Convention List
A link to the UK Tabletop Industry Network (@uktabletopindustrynetwork) Discord where you can chat with fellow UK-based designers:
TIN Discord
That's all I've got! Feel free to reblog if you have more stuff people might find useful (I almost certainly will)!
465 notes · View notes
alcrego · 11 months ago
Note
So to clarify, you use AI to assist making your art, but the AI is also trained solely on your art? And people are mad about that?
Yes. And actually a VERY small % of the WHOLE body of work I've posted over the past 10+ years is AI assisted. I would say 2-5%. All the rest is photography/video/gif art created by myself.
And even so, when I post them they are NEVER raw images, but always used as “ingredients” in the same way I always used my own photography and video to achieve my OWN ideas.
Art is about ideas, not about creating random images…
-
I'm sure people don't use stock images or images from Google to do their work… I'm sure!😂👍
Most, if not all, of the people who criticize AI never take the time to UNDERSTAND that it can be used with and for your OWN work. I never agreed with, accepted, nor defended other people stealing art to train Custom Models.
Thanks for the question.🙏
437 notes · View notes
clever-ludicrous · 4 months ago
Text
How to Actually Learn a Language (Without Wasting Time)
Polyglots will do anything to sell you something, so here’s the fastest and most basic technique based on my research.
Step 1 – Getting the Absolute Basics In
This is where most people already get lost. If you search social media for how to start, the advice isn’t necessarily bad, but it often makes you dependent on a single resource, usually an app that will eventually try to charge you. Duolingo, for example, has turned into a mega-corporation that perfected gamification to keep you on the app.
Remember: free apps make money by keeping you on their platform, not by helping you become fluent.
At this stage, the goal is not to gain conversational skills but to avoid overwhelming yourself and get a feel for what you’re actually getting into. All my recommended resources are free because I believe learning a language should be a basic right. I wouldn’t advise spending any money until you’re sure you’ll stick with it. Otherwise, it can turn into a toxic “but I paid for this, so I have to keep going” mindset that drains all the fun out of learning.
• Language Transfer – Highly recommended for Spanish, Arabic, Turkish, German, Greek, Italian, Swahili, and French.
• Textbooks – Simply search for [language] textbook PDF, or check LibGen and the Internet Archive. Don’t overthink which book to choose—it doesn’t matter much.
• Podcasts – Coffee Break is a solid choice for many languages.
• YouTube Channels – Join r/Learn[language] on Reddit and find recommendations.
Step 2 – The 20/80 Principle
The idea is that 20% of words make up 80% of everyday speech.
What you’re going to do:
Search “Most common words [language] PDF”.
This list is now your best friend.
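If you already have a chunk of text in your target language, you can build a rough frequency list yourself. A quick Python sketch (the `\w+` word-splitting is naive and will need adjusting for some scripts and languages):

```python
import re
from collections import Counter

def frequency_list(text, top_n=20):
    """Return the most common words in a text, most frequent first."""
    words = re.findall(r"\w+", text.lower())
    return Counter(words).most_common(top_n)

# Toy example: even a tiny sample shows a few words dominating
sample = "el perro come y el gato come y el perro duerme"
for word, count in frequency_list(sample, top_n=5):
    print(f"{count:3d}  {word}")
```

Run it over a few subtitle files or articles and the 20/80 skew shows up immediately: a handful of function words tower over everything else.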
For flashcards, I highly recommend AnkiPro. It lets you import pre-made lists for Anki/Quizlet and has an archive where you’ll definitely find the most common words. But it lacks audio. The real Anki program has it, but only on PC (unless you’re willing to pay $30 for the mobile app). Use AnkiPro for now—we’ll come back to repeating phrases later. In the meantime, find a YouTube video with the most common words pronounced, or use Google Translate for audio.
(Knowt is a free alternative for Quizlet if you prefer that)
These lists will spare you from learning unnecessary vocabulary at this stage. Spaced repetition (which Anki uses) can take longer, but it's worth it because you want these words to stick. Anki will only introduce a small number of new words per day. Once you start new words, write phrases using them. It doesn't matter if they're random; just try to use them.
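For the curious, the scheduling idea behind Anki can be sketched in a few lines. This is a simplified take on the SM-2 algorithm Anki is based on; real Anki adds many refinements, so treat it as an illustration only:

```python
def next_interval(interval_days, ease, quality):
    """Simplified SM-2 step. quality: 0 (forgot) to 5 (perfect recall).

    Returns (new_interval_days, new_ease). Forgotten cards reset to 1 day;
    remembered cards wait roughly ease-times longer after each review.
    """
    if quality < 3:  # lapse: see the card again tomorrow
        return 1, ease
    # ease drifts up for easy recalls, down for hard ones (floor of 1.3)
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return round(interval_days * ease), ease

# A card recalled well three reviews in a row waits longer each time
interval, ease = 1, 2.5
for _ in range(3):
    interval, ease = next_interval(interval, ease, quality=4)
    print(interval)
```

The point of the growing gaps is that each review happens just before you'd forget, which is why the method feels slow at first but makes the words stick.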
Step 3 – The First Breakup With the Language
This isn’t really a step, but I have to mention it. For me (and for other language learners I’ve talked to) this is where motivation crashes.
The dopamine rush is over. Your ego boost is gone. You’re stuck understanding just enough to notice how much you don’t understand, and topics are getting more complex. Everything feels overwhelming, and motivation drops.
This is normal. You have to push through it.
I’ll write a separate post on how I manage this phase, but for now:
• Take a step back and make sure you understand the basics.
• Find something that keeps you motivated.
• Consistency is key. Even if it’s just five minutes a day, do it. (Edit: You can search online for inspiration on scheduled plans. I found one that organizes language exercises into different categories based on how much time you have each day, which seems helpful. https://www.reddit.com/r/languagelearning/s/sSGUtORurM
Personally, I used AI to create a weekly plan kind of as a last resort before giving up on the language, but try looking for pre-made ones first.)
I personally enjoyed story learning during this phase. And don't forget: the frequency lists are still your best friend. For story learning, check out Olly Richards' books!
Step 4 – Immersion
Your brain needs active and passive immersion. The earlier steps were mostly active, and now you’ll start the fun part.
How to Immerse Yourself:
1. Join some kind of community.
• I enjoy Reddit: r/Learn[Language]. Do this in your target language, but also in the language you already speak. Post that you're looking for a chat partner in your target language. Most people are nice, and the mean ones will just ghost you anyway.
2. Watch shows.
• Subtitles only in your target language or drop English subtitles ASAP.
3. Listen to podcasts.
4. Read.
I personally dislike media made for kids (except on low-energy days). For real immersion, pick something for adults.
5. Translate, write, and speak.
Before this, you wrote simple sentences using vocabulary. Now, put them to work:
• Translate texts.
• Keep a diary.
• Write short stories.
• Complain about the language in the language.
It doesn’t matter, just use it.
Step 5 – Speaking
Start speaking earlier than you think you’re ready. Trust me. This is probably where most people disagree with me. I do think you should start by focusing on input, but the importance of output isn’t talked about enough.
Now, the real Anki (or any program with phrases + audio) comes into play. At lower levels, it doesn’t make sense to just start talking, since you wouldn’t even be able to recognize your mistakes. Here’s what you’ll do:
1. Repeat phrases out loud.
2. Record yourself speaking.
3. Compare your recording to the original audio and adjust your pronunciation.
If it’s a tonal language (or if you struggle with accents), start this even earlier.
Other Speaking Strategies:
• Shadowing – Repeat after native speakers.
• Reading aloud – Your own texts, books, anything.
• Talking to yourself.
• Talking to natives (if you’re brave).
I’m not here to fix social anxiety, but I am here to help with language learning, so just speak.
Final Thoughts
• These steps overlap, and that’s fine.
• This is supposed to be fun. Learning just because you’re “too deep in” or because of school won’t cut it.
• If you’re lost, take a step back.
• I’m not a professional. I just think a straight answer is way too hard to find.
If you have anything to add, feel free to share.
181 notes · View notes
mostlysignssomeportents · 4 months ago
Text
Skinnamarinkstump Linkdump
I'm on a 20+ city book tour for my new novel PICKS AND SHOVELS. Catch me TODAY (Feb 15) for a virtual event with YANIS VAROUFAKIS, and on MONDAY (Feb 17) for an event at KEPLER'S in MENLO PARK with CHARLIE JANE ANDERS. More tour dates here.
It's Saturday and I'm on a book tour, and the world is in chaos, and there are more links to write about than I could fit into this week's newsletter, so time for a cubic linkdump, the 27th such:
https://pluralistic.net/tag/linkdump/
Let's start with the best thing I saw all week: a 3D-printed, spring-loaded, clockwork chess pawn that uses a magnet to sense when it has reached the end of the board and SPROING! turns into a queen:
https://www.youtube.com/watch?v=CSOnnle3zbA
The whole video is a fascinating account of the design process, from idea to prototype to finished item, but if you're impatient and want to skip right to the eyeball kick, it's at 12:27-12:35. And if you want to print your own, the files are $12 (cheap!):
https://www.patreon.com/WorksByDesign/shop/queen-pawn-3d-printing-files-614491?source=storefront
Regrettably, not every tech project is a good one. This week, Google abandoned its AI ethics pledge. Unlike most AI ethics pledges, which are full of nonsense about not accidentally creating a vengeful god that turns the human race into paperclips, Google's AI pledge was actually very important, in that the company promised not to make AI that violates human rights, international law, or privacy. There comes a point where harping on Google's abandoned "don't be evil" motto can feel a little hacky, but in this case, I'll make an exception. My EFF colleague Matthew Guariglia tears Google a much-deserved new AIhole over this latest heel turn:
https://www.eff.org/deeplinks/2025/02/google-wrong-side-history
Not all bad technology is evil. Some of it is merely very, very stupid. How stupid? Check out Thom Dunn's Wirecutter review of The Heatbit Trio, a space-heater that uses Bitcoin-mining GPUs to generate some of its heat, very slightly offsetting the cost of warming your room – but at a rate that would take decades to recoup the $700 price-tag. Thom got some spicy quotes from Molly White for this one – possibly the first time she's been cited in a home appliance review:
https://www.nytimes.com/wirecutter/reviews/heatbit-space-heater-review/
Staying with crypto freaks for a moment here, Adam Levitin dissects the cryptocurrency "industry"'s latest chorus of aggrieved whining over "debanking":
https://www.creditslips.org/creditslips/2025/02/debanked-by-the-market.html
As Levitin writes, banks aren't kicking cryptocurrency "companies" off their books because the government wants to punish them. Banks have a very good reason to want to avoid doing business with high-dollar scams that have highly correlated implosions, which is to say, times when everyone wants their money back from the cryptocurrency "company" the bank is handling charges for. For a longer explanation that gets into the nitty gritty of bank supervision, check out Patio11's excellent, detailed explainer:
https://www.bitsaboutmoney.com/archive/debanking-and-debunking/
As all the real heads know, "crypto means cryptography," and cryptographers continue to contrive privacy marvels. This week, Kagi – the best search engine, a million times better than Google – released a Privacy Pass authentication plugin, which lets you login to Kagi and run searches without Kagi being able to connect any of the searches you make with your account:
https://blog.kagi.com/kagi-privacy-pass
As an sf/crime writer who sometimes (often) searches for information on committing ghastly crimes and 'orrible murders, the fact that my favorite search engine will be technically incapable of tying those searches to my identity is quite a relief. Read my review of Kagi here:
https://pluralistic.net/2024/04/04/teach-me-how-to-shruggie/#kagi
If you're one of those marvel-contriving hackers, cryptographers, security researchers or tinkerers, you should really consider attending this summer's Hackers on Planet Earth (HOPE), 2600 Magazine's (now) annual (formerly biennial) hacker con. They've just posted their CFP – get those submissions in!
https://www.hope.net/cfp-talks.html
Well, I have to post this and get ready for this morning's virtual book tour event with Yanis Varoufakis:
https://www.youtube.com/watch?v=xkIDep7Z4LM
But before I go, one more link: Kevin Steele's 2005 essay on Hypercard, "When Multimedia Was Black & White," an absolute classic, and a beautiful meditation on the art and promise of early hypertext:
https://web.archive.org/web/20240213190609/http://www.kevinsteele.com/smackerel/black_white_00.html
I've known Kevin for most of my life, long before he helped found Mackerel, the pioneering Toronto multimedia company. Long after Mackerel, Kevin went on making wonderful things. In 2023, he published a monumental act of portraiture – a "sequential art" time-series of panoramas of Toronto's hip, ever-changing Queen Street West strip:
https://pluralistic.net/2023/09/13/spadina-to-bathurst/#dukes-cycle
Comparing Kevin's more recent work with that lovely old essay reveals deep correspondences and the progress of a unique and creative soul.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2025/02/15/intermixture/#debunking-debanking
155 notes · View notes
probablyasocialecologist · 28 days ago
Text
On Monday, SAG-AFTRA filed an unfair labor practice charge with the National Labor Relations Board against Epic subsidiary Llama Productions for implementing an AI-generated Darth Vader voice in Fortnite on Friday without first notifying or bargaining with the union, as their contract requires. Llama Productions is the official signatory to SAG-AFTRA's collective bargaining agreement for Fortnite, making it legally responsible for adhering to the union's terms regarding the employment of voice actors and other performers.
[...]
The union's complaint comes just days after the feature sparked a separate controversy when players discovered that they could manipulate the AI into using profanity and inappropriate language until Epic quickly implemented a fix. The AI-controlled in-game character uses Google's Gemini 2.0 to generate dialogue and ElevenLabs' Flash v2.5 AI model trained on the voice of the late James Earl Jones to speak real-time responses to player questions. For voice actors who previously portrayed Darth Vader in video games, the Fortnite feature starkly illustrates how AI voice synthesis could reshape their profession. While James Earl Jones created the iconic voice for films, at least 54 voice actors have performed as Vader in various games and other media over the years when Jones wasn't available—work that could vanish if AI replicas become the industry standard.
19 May 2025
105 notes · View notes
fixyourwritinghabits · 1 year ago
Text
Checking In
Good day my fellow exhausted creatives, it sure does be A Time we're going through. There are certainly a lot of things happening at once, and like many of you I'm struggling to stay afloat while desperately playing catch-up. I'll be honest, shit's pretty damn fucked up. Sometimes it helps to take a step back and reflect on some reminders.
Don't panic.
People are facing a lot of hard choices when it comes to what platforms to use, and I know it's pretty tempting to burn everything down. Take a deep breath and think about your options. Nightshade and Glaze aren't perfect, true, but they're open about their limitations and are still tools you can use. Look into alternative word processors beyond Google Docs that won't have AI-scraping. Take your time deciding what to do with your creative output and where to share it. I am Old, and I have seen several social media websites crash and burn. You will always have more options.
Take care of yourself first.
I've seen a lot of people burning themselves out hard over things they can't control. Gaza, anti-LGBTQ issues, American politics, it's a whole lot and it's all overwhelming. You cannot accomplish anything if you don't take the time to put your own oxygen mask on first. Eat, sleep, turn your phone off when you feel yourself being sucked in. This seems obvious, but it's often the hardest thing to do, believe me, I know. You gotta keep yourself going before you can help others.
Small things still matter.
There's a lot of things you can still do even when you feel like you can't. You can sign petitions, you can promote the activism of others. Vote in local elections. Keep yourself informed without drowning - check your news sources once a day rather than all the time. Talk to your friends, spend time with your pets, find ways to help in your local community (a great place to find resources is your library!). Go for a walk with a trash picking tool and a garbage bag. A small difference is still a difference.
Recharge Creatively.
It can be hard to do creative things when you feel like there's so many other important things to do. But being creative - creating art, writing a story, doing a hobby - IS important to yourself and others. Sometimes you have to force yourself to do so - I have to put "watch a movie" on my to do list, or I'll never make time for it. Go to a coffee shop and make art. Play that new video game. Write that silly coffee shop AU. These things are important to you, and they will carry through with what you want to do for others.
Do what you can when you can and you will make it through.
417 notes · View notes
atalienart · 3 months ago
Note
i understand your feelings about AI. A friend of mine uses char.ai to talk to a character and they're like "haha its no big deal lol" but like.... ANY amount of ai is awful. ugh. the whole ghibli ai boom is the worst
Yeah, those comments under "ghiblified" AI tweets made me angry, and then people giving advice on how to use AI for "writing books" made it worse, and then some video where a guy used AI images to "illustrate" a story (I'm not sure he was aware of the fact they were AI, because they were "created" by his "hard-working friend") made it even worse. Not to mention that annoying AI extension popping up on Chrome every time I try to google something, and the AI-generated poster for the rabies vaccine they throw (the vaccine, not the poster) around in my area made me scream (the screaming was because of the poster, not the vaccine). Anyway, yeah, people behave like someone who kicks a dog, and when you tell them "hey, it hurts the dog," they tell you to chill and say they can do whatever they like and besides, they like the sound the dog makes when they kick him. I wonder what the appeal of talking to a character like that is for your friend.
55 notes · View notes
jzprncess · 5 months ago
Text
max’s hair, max’s way
pairing: max verstappen x reader
oneshot
word count: 2,489
summary : Y/N discovers an AI image of Max Verstappen with long hair and can’t stop imagining how amazing he’d look with it. After dropping subtle hints, Max finally catches on and humorously entertains the idea. What follows is a hilarious, over-the-top obsession with starting a fan club—Max’s hair revolution is coming, whether he’s ready or not.
note : this one was actually quite easy to write but then again im just in a mood to write so i finished it in a few hours. this was a request that was submitted on my google forms!
༻﹡﹡﹡﹡﹡﹡﹡༺
Y/N’s day had been nothing short of a mess. She'd woken up late for a Zoom call, spilled coffee on her favorite sweatshirt (the one she swore was invincible to stains), and had yet another online shopping cart full of things she definitely didn’t need, but had to have. It wasn’t even noon, and she was already on her third attempt at taking a nap that didn’t feel like an awkward lie-down.
But there was one thing that had the potential to make it all better: mindless scrolling.
Her thumb lazily flicked through TikTok, her mind barely engaged as she watched videos that made zero sense, but for some reason, her brain processed them like essential information. It was supposed to be a five-minute break—a little escape before diving back into her ocean of responsibilities. She figured she'd scroll, mindlessly and aimlessly, just to silence the chaos in her head.
But then... she saw it.
It wasn’t some cute puppy video or a cooking hack that would forever change her life. No, no. It was something far more dangerous, far more potent, and absolutely life-changing.
Max Verstappen.
But not just any Max. No, this was an alternate universe Max—a Max created by the magical, terrifying powers of AI. The Max on her screen had hair that cascaded in long, perfect waves, the kind you could only dream about, or maybe see on a runway model. His sharp jawline was even more defined than usual (which shouldn’t be possible, but here we are), and his eyes—those piercing blue eyes—looked even more mysterious, as though he were a brooding poet in an indie movie. He was staring at her, but also not staring at her, if you know what she meant.
And then she saw it.
The hair.
Max’s new look was a cascade of locks that would make any shampoo ad jealous. It was silky, voluminous, perfectly tousled like he’d just walked out of a windstorm of pure glamor. It was glorious. It was breathtaking.
Y/N stopped dead in her tracks. Her thumb froze mid-scroll. Her heart rate ticked up a few notches. Holy shit. She didn’t even care that she was in a coffee-stained hoodie and still hadn’t brushed her hair. Nothing mattered anymore, because here was Max Verstappen, looking like an absolute dreamboat in a way she never thought possible. This wasn’t the Max she’d seen on the racetrack—no, this was a Max that belonged in the front pages of a high fashion magazine, throwing a rebellious look over his shoulder like a 90s pop star.
She blinked, trying to process what she was seeing. Her fingers twitched, ready to swipe, but she couldn’t tear her eyes away. Max with long hair. Her mind couldn’t let go of the image. It was perfect. He was perfect.
She leaned closer to the screen, squinting to examine every glorious detail, every strand of hair that seemed to defy physics. Could he actually pull this off in real life? Her fingers hovered over the screenshot button for a moment before she snapped it without hesitation.
And then, she did what any sane person would do: she set the image as her phone wallpaper.
There was no going back now. She wasn’t just going to stare at this picture once and forget about it. No, Max Verstappen with long hair was going to become her new obsession. She’d stare at it every time she unlocked her phone, letting the image haunt her dreams. Maybe she’d make it her lock screen too, just to really solidify the insanity.
The idea of Max with long hair, that Max, consumed her. Every time she glanced at the picture, it felt like an out-of-body experience. Was this how people got obsessed with celebrity transformations? Because this was absolutely it. She wasn’t even mad about it. She was already thinking of all the ways she could drop this bombshell on Max—subtle, of course. It had to be subtle. But she had to let him know somehow.
“Maybe I could just send it to him,” she muttered aloud to no one. “No, no... way too obvious.”
A sly grin spread across her face. She wasn’t going to just send the picture. No, she had a better idea. Max wouldn’t even see it coming.
She looked at the time—still early afternoon. Plenty of time to start planning. Oh, this was going to be fun.
After setting the AI picture as her wallpaper for the seventh time that day, Y/N leaned back in her chair, the wheels in her mind turning at lightning speed.
She had the plan now. She wasn’t just going to sit back and hope Max would see the error of his short-haired ways. No, Y/N was going to subtly—so subtly—nudge him into realizing that long hair was, in fact, the future. She’d been around the block enough to know how to manipulate situations for her own personal benefit.
Okay, maybe "manipulate" was too harsh of a word, but it sounded cool.
“Step one,” she muttered to herself, “Casual comments.”
She scrolled through her texts, thinking about what would be the perfect, casual way to throw out the idea of hair transformation.
Max had no idea what was about to hit him.
Y/N had spent the better part of the evening staring at her phone, just waiting for Max to reply to her text. She had done it—sent the casual, completely not obvious message about how “some people” just looked so good with long hair. She leaned back in her chair, a deep sense of satisfaction settling in. There was no way Max could miss the hint. She had done it perfectly. It was subtle, yet not so subtle that it was too subtle. The emoji sealed the deal. 😏
Still, she couldn’t help herself. She had to check her phone again, just to make sure the message had landed.
The screen lit up with a notification from Max. Y/N’s heart did a little dance. Here we go. She clicked it open, already anticipating his response.
Max: "Haha, are you talking about me? I’m not sure I could pull off long hair..."
Y/N froze. The message was a lot more casual than she’d hoped for. She reread it, her eyes scanning for the tiniest hint of curiosity or intrigue, but all she found was... confusion?
What? She thought she’d laid it out perfectly. The whole mysterious vibe thing had been an obvious clue!
Still, she wasn’t going to give up. Not yet. The game had only just begun.
She sat there for a moment, staring at the screen like she was trying to solve an impossible puzzle. A plan. She needed a plan, and it needed to be more than just a text.
Her eyes darted around the room. The walls, the plants, the weirdly shaped lamp on her desk—all were silent witnesses to her genius, or lack thereof, depending on how things went. But then—a light bulb moment.
It was obvious. She wasn’t going to be able to hint at this through mere text alone. No, no. She needed to get creative. She needed to make him see it—to envision the hair that could change his life. This was the moment where her vision and Max’s reality collided.
A slow grin spread across Y/N’s face. She knew just what to do.
Step one: The Subtle Instagram Story.
It was genius. Max would never suspect it. After all, people posted memes, weird videos, and obscure thoughts all the time. But Y/N had something more—something that could convince him without even saying a word.
She snapped a picture of herself—looking effortlessly glamorous, of course—and started typing her story caption.
“Do you ever think about how long hair changes a whole vibe? Like, imagine you had long hair... just think about it... 🧐”
She paused, reading it over. Was this too much? Too obvious? Too ridiculous?
Nah. It was perfect.
She hit post and waited, staring at her phone screen as if it would reveal some deep, philosophical answer to the universe’s mysteries.
The next few minutes felt like an eternity. She could practically feel the electricity buzzing in the air. She didn’t even know if Max was online, but her brain couldn’t shut down. The message was out there now. The seed had been planted. She was too far gone to back out now.
And then, a notification buzzed. It was from Max. She checked it eagerly.
Max: “Is this about me too? Because now I’m starting to wonder if I’m missing out on some kind of hair revolution."
Y/N’s eyes went wide. Was he actually considering it? No, no. He had to be messing with her. She stared at the message for a second longer than she should have, trying to decide if this was a real response or if she had somehow misinterpreted the whole thing.
No. He had to be getting it. She wasn’t backing down now. She had created a monster out of her own wild, absurd imagination, and it was all going exactly as planned.
She quickly typed back, a little too eagerly, but who could blame her?
Y/N: “Imagine the vibe, Max. Imagine the wind in your hair as you race, that confidence flowing through you. Like a whole new level of fabulous.”
There. That was perfect. She leaned back in her chair and waited for a response.
But of course, Max—being Max—didn’t make things easy.
Minutes passed. No reply.
Was he thinking about it? Was he in deep contemplation about whether he’d look good with long hair? Y/N felt like she might explode. Come on, Max. You’ve got this. Just admit it.
She stared at her screen until the words blurred together.
And then, finally, a message came in.
Max: "Alright, alright, I’ll admit it. I’m curious now. But do you really think I’d look good with long hair? I mean, I can’t picture it."
Y/N stared at her phone in disbelief.
This was it. This was her moment. He was actually questioning it. She could already hear the victory music playing in her head.
She took a deep breath, trying not to sound too smug.
Y/N: “Max, I’m telling you, it’s a whole vibe. You might just become the most iconic man in Formula 1 with long hair. People would talk about you for centuries.”
She added a winking emoji for good measure.
Max: “Centuries? Okay, now you’re definitely messing with me.”
Y/N couldn’t help but laugh out loud. Oh, she was so close now.
After Max’s message came through, Y/N could barely contain herself. She wanted to scream, to do a victory dance, but instead, she opted for something slightly more composed: a dramatic flailing of her arms in the air and a loud, victorious "YES!" that echoed through her apartment like an over-the-top, one-woman celebration.
Max was actually considering it. He was at least open to the idea of long hair in the future.
This was the moment. She had won.
But the funniest part? Max wasn’t even aware of the scale of Y/N’s obsession. He was just playing along with her ridiculous game, unaware that she was about to go into full, borderline obsessive mode.
She stared at the text again, eyes wide, her heart racing. He was going to do it. One day—maybe not today, maybe not tomorrow—but Max Verstappen would, in fact, grow out his hair. He was practically promising it.
Y/N’s mind whirled with a thousand thoughts, each one more absurd than the last. She could already picture it: Max, standing on the racetrack, hair flying dramatically behind him as he sped past everyone. She could already hear the cheers. It was happening.
She grabbed her phone with shaking hands, barely able to type a coherent message. It was one thing for her to joke about it. But the fact that Max had actually said he’d grow his hair out one day? She couldn’t be the only one who was excited about this.
Y/N: “Max... no joke, I’m going to lose it the day you show up with long hair. I’ll probably start a fan club or something. A whole movement. ‘Max’s Hair, Max’s Way.’ How does that sound?”
She hit send and then immediately regretted it. It sounded insane. What was wrong with her?
She stared at the message for a long moment, debating whether she should delete it or just let it be. But before she could decide, Max’s reply came in like a gift from the hair gods themselves.
Max: “I don’t know about a whole movement, but hey, if I ever grow it out, you can be the president of the fan club. Just... don’t make it too weird, alright?”
Y/N almost dropped her phone. President? He was serious about this. She had an actual title in the most bizarre, ridiculous movement of her life.
Wait. Make it too weird? Oh, Max. She had already made it weird.
She texted back, too quickly, as if he could see her grinning like a maniac.
Y/N: “Deal. I’ll make sure to have the first fan club meeting at your next race. You better be ready for it.”
Max: “I’ll be sure to bring my best hair flip to the race. It’s going to be legendary.”
Y/N clutched her phone to her chest like she had just received the greatest treasure in the world. This was happening. It was happening in the future. She couldn’t wait. The anticipation was going to drive her insane.
But right now? She was going to enjoy the chaos of it all. She leaned back in her chair, hands trembling as she stared at the screen, imagining all the memes, the fan art, the movement. And who knew? Maybe one day, Max Verstappen would really grow out his hair.
Until then, Y/N was going to spend the next few weeks plotting the perfect fan club logo.
And so, Y/N’s obsession grew, her fantasies becoming wilder by the day. Every time she heard a hair-related joke or saw a picture of someone with long hair, she’d start giggling to herself like a schoolgirl with a crush. It wasn’t about Max’s hair anymore; it was about the ridiculous movement she had created, a movement that only she truly understood.
As for Max? He was still blissfully unaware of the full extent of Y/N’s hair dreams, but every now and then, he’d shoot her a quick text.
Max: “So... I’ve been thinking about it. Maybe I’ll start growing it out... one day. You ready to lead the fan club?”
And Y/N would reply with a heart full of excitement and a mind full of absurd possibilities.
Max’s Hair, Max’s Way. It was only a matter of time before the world caught on to the movement.
━─━────༺༻────━─━
taglist : @heluvsjappie @awritingtree @steamy-smokey @alex-wotton
126 notes · View notes
the-sunniest-angels · 7 days ago
Note
Your artworks looks like AI
To be honest I'm guessing this is a bot because I don't think my art is really a style that is mistakeable as AI. BUT just in case this is someone who genuinely doesn't know how to differentiate AI art versus human art, I'm gonna make a post on it rq!
One of the ways you can tell my art is not AI is because you can see all the individual strokes that I made. My style in particular makes this easier to distinguish than others because as an artist I really embrace this, while others prefer a very clean lineart and coloring process.
Here are some examples from mine:
Tumblr media
This is from one I made of Nico underwater. If you look at the water, you can see all the places I drew each line. By contrast, zooming in on AI art doesn't show any brush strokes at all. Often there's also a weird "fuzz" I've noticed: where a human artist who paints a, say, yellow banana just gives you yellow when you zoom in, an AI image zoomed in weirdly looks like it's struggling to make every pixel yellow, so each pixel is slightly different. That's what I think of as the art being slightly fuzzy.
I tried searching google for some AI art to use as examples of this but I'm currently in a different country for an internship and they're still getting my WiFi set up, so my connection isn't loading any of the Google images with enough clarity to be able to zoom in a bunch so I can show you. But it's something I've noticed for a lottt of AI art--and so this coupled with lack of brush strokes can be a sign of AI.
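That "fuzz" intuition can even be sketched in code. Below is a rough numpy toy, not a real AI detector: a uniformly painted region has zero pixel variance, while simulated fuzz (every pixel drifting slightly off the same color) shows up as clearly nonzero variance. The yellow value, noise level, and patch size are all made-up numbers for illustration.

```python
import numpy as np

def local_variance(patch):
    # Mean per-channel variance across all pixels in a patch.
    return float(np.mean(np.var(patch.reshape(-1, patch.shape[-1]), axis=0)))

# A hand-painted flat region: every pixel the exact same yellow.
flat = np.full((16, 16, 3), [220, 200, 40], dtype=float)

# Simulated AI-style "fuzz": the same yellow, but each pixel drifts slightly.
rng = np.random.default_rng(0)
fuzzy = flat + rng.normal(0, 4, flat.shape)

print(local_variance(flat))   # exactly 0.0 for a uniform patch
print(local_variance(fuzzy))  # clearly above zero
```

In a real image you'd sample patches from visually flat areas, but the idea is the same: human-flat color is flat, AI-flat color often isn't.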
Another thing that, in my opinion, is a way to determine something is human-made is the shape of the canvas! In my experience, when I see AI art online, it tends to be a very similar canvas shape each time. I don't think most AI creations have the ability to be creative with canvas shape. Meanwhile, a human might choose to make their canvas super wide or long or whatever. Since I created each piece of my art individually for the purpose of eventually combining it all into a comic-ish thing, each canvas I made was very very wide which would have been unusual for an AI. Such as:
Tumblr media
From what I've seen, an AI would have created somewhat more even dimensions.
And finally, one of the dead giveaways for AI versus human art is simply what mistakes are made in the piece. Neither AI art nor human art is usually absolutely perfect, but the mistakes that an AI makes are not usually the same ones that a human makes! For example, here I didn't actually make lineart or sketches for the background because I had figured "eh, how hard is it to make a background like this?" However, you can tell this didn't work out perfectly for me because my "sun" did not end up perfectly round hahaha. Look above Nico's head. It's like sort of lopsided. Getting a perfect circle without any sort of lineart or tool is very hard as an artist, at least for me! However, an AI would not struggle with making a perfect circle. It would have been much cleaner. However, an AI would have probably struggled more with things like color and style consistency in the wings (there are a lot of feathers that could trip it up), body proportions, etc etc.
And, overall, these three things together are very consistent with everything I post. AI would struggle to recreate a style like this over and over again, and it also tends to struggle to make the same face over and over. I'm not sure if you've ever seen one of those videos where people ask AI to duplicate an image without making any changes, but it really cannot do it. For this reason it would have been difficult for an AI to make the same face so many different times for a consistent comic.
I realize this ask was most likely a bot tbh since I think my art is pretty obviously human, but as a hater of AI art, I will never turn down an opportunity to talk about ways to differentiate human versus AI art. I hope this was helpful to anyone who struggles with identifying things like this!
35 notes · View notes
hanleiacelebration · 1 year ago
Text
Tumblr media
Han/Leia Appreciation Week 2024
WE'RE BACK, BABY! "Wait, what's happening, wasn't this in August??" you might wonder. Based on your feedback, we decided to host this and (probably) future editions of Han/Leia Appreciation Week earlier in the year. July seemed like the better choice, given that it's a vacation period in both hemispheres!
This year we're also giving you the prompts over a month in advance, so you have plenty of time to plan and create!
Han/Leia Appreciation Week was an event originally hosted at @han-leia-solo between 2016-2019, but for the past three years, we've taken up the mantle here at @hanleiacelebration 😊
💖 How does Han/Leia Appreciation Week work?
The event will run from July 14th to July 20th, and there will be two different prompts each day that creators can fulfill with: fanfic, fanart, gifsets, graphics, fanvids, headcanons, crafts, playlists, rec lists. You’re encouraged to tag your posts with #hanleiaweek2024 so we can reblog them! After the week is over, we’ll share a masterlist with links to the works.
You can show your appreciation in many ways; however, please keep in mind that it has to be a creation of yours of some sort, e.g. don’t repost other people’s fanart, gifs, or unedited pictures. Rec lists should include a link to the original source both for fanfic and fanart (more on this after the cut).
🎆 The prompts
Sunday 7/14: Tradition / Ceremony
Monday 7/15: Braids & Bloodstripes (hair or clothing) / Home planet
Tuesday 7/16: AU / Canon divergence
Wednesday 7/17: Force / Belief
Thursday 7/18: Favorite scene / Favorite quote
Friday 7/19: Meeting / Escape
Saturday 7/20: Free day!
You can use only one of the daily prompts, combine both, reinterpret them, or skip the day if you can’t think of anything. If you’re not able to post on the same day for a prompt, you’re still encouraged to share it through the week—just don’t post works for a certain prompt before the day corresponding to that prompt.
💠 💠 💠
FAQs and Rules under the cut - please read!
💕 Can I post my work to another site and share the link on Tumblr?
Yes! This is a good option for people who might want to create explicit art that could be taken down on Tumblr, write a long fic or multichapter, or make videos or playlists.
💕 Does it have to be a new creation? Can I finish and post a WIP?
It has to be something that has never been posted anywhere else before, so finishing and sharing a WIP is okay! If it doesn’t fit any of the prompts, you can share it on Free Day.
💕 Is this event open to all ratings?
Yes! Just remember to use a “Read more” cut if you’re posting the whole work on Tumblr, and to add a note at the top if your work is rated Mature or Explicit, as well as if it has any major trigger warnings, so all folks can safely browse through the entries.
💕 Are there any length or quality requirements?
There’s no min. or max. length for fanfic or quality level for art, but please note that AI-generated works won’t be accepted. For gifsets, there’s a minimum of two gifs (that must be made by you!). For playlists, there’s a minimum of five songs. For rec lists of fic or fanart, there’s also a minimum of five recs. Some more questions you might have about rec lists:
- How do I share someone else’s art without posting a picture? You might post a thumbnail that crops a preview of the piece; if the piece has a title, you might use that; you might describe it; or you might say something like “this piece by [artist]”, and link to the source.
- What if I found a fanart on Google? Try to find the original source using reverse search image.
- What if I can’t still find it, can I just say “credit to the artist”? In that case, please just don’t share the piece.
- What if I know the artist but don’t have a link to the original source? Naming the artist and linking back to where you found it is okay, in that case.
💕 Can I write for canon/Legends and include other pairings?
All canons, time periods, headcanons and AUs are welcome, and you’re allowed to include side pairings, except for R*eylo. However, keep in mind that this is a Han/Leia appreciation week - at the risk of sounding repetitive, works should focus on appreciating Han and Leia’s relationship!
💕 What’s the time zone for the event?
Please don’t worry too much about time zones: when we say “day”, we always mean “whenever that day is for you in your part of the world”. IE: if it’s Monday for you, you can post your work for the Monday prompt.
💠 💠 💠
Do you have any other questions? Don’t hesitate to send us an ask or to message one of the mods: @lajulie24 @hanorganaas and @otterandterrier
We can’t wait to see what you all create!
115 notes · View notes
i-love-your-light · 2 years ago
Text
too many thoughts on the new hbomberguy video not to put them anywhere so:
with every app trying to turn into the clock app these days by feeding you endless short form content, *how many* pieces of misinformation does the average person consume day to day?? thinking a lot about how tons of people on social media go largely unquestioned about the information they provide just because they speak confidently into the camera. if you're scrolling through hundreds of pieces of content a day, how many are you realistically going to have the time and will to check? i think there's an unfortunate subconscious bias in liberal and leftist spaces that misinformation is something that is done only by the right, but it's a bipartisan issue babey. everybody's got their own agendas, even if they're on "your side". *insert you are not immune to propaganda garfield meme*
and speaking of fact checking, can't help but think about how much the current state of search engines Sucks So Bad right now. not that this excuses ANY of the misinformation at all, but i think it provides further context as to why these things become so prevalent in creators who become quick-turnaround-content-farms and cut corners when it comes to researching. when i was in high school and learning how to research and cite sources, google was a whole different landscape that was relatively easy to navigate. nowadays a search might give you an ad, a fake news article, somebody's random blog, a quora question, and another ad before actually giving you a relevant verifiable source. i was googling a question about 1920s technology the other day (for a fanfiction im writing lmao) and the VERY FIRST RESULT google gave me was some random fifth grader's school assignment on the topic???? like?????? WHAT????? it just makes it even harder for people to fact-check misinformation too.
going off the point of cutting corners when it comes to creating content, i can't help but think about capitalism's looming influence over all of this too. again, not as an excuse at all but just as further environmental context (because i really believe the takeaway shouldn't be "wow look how bad this one individual guy is" but rather "wow this is one specific example of a much larger systemic issue that is more pervasive than we realize"). a natural consequence of the inhumanity of capitalism is that people feel as if they have to step on or over each other to get to 'the top'. if everybody is on this individualistic american dream race to success, everyone else around you just looks like collateral. of course then you're going to take shortcuts, and you're going to swindle labor and intellectual property from others, because your primary motivation is accruing capital (financial or social) over ethics or actual labor.
i've been thinking about this in relation to AI as well, and the notion that some people want to Be Artists without Doing Art. they want to Have Done Art but not labor through the process. to present something shiny to the world and benefit off of it. they don't want to go through the actual process of creating, they just want a product. Easy money. Winning the game of capitalism.
i can't even fully fault this mentality- as someone who has been struggling making barely minimum wage from art in one of the most expensive cities in america for the past two years, i can't say that i haven't been tempted on really difficult occasions to act in ways that would be morally bad but would give me a reprieve from the constant stress cycle of "how am i going to pay for my own survival for another month". the difference is i don't give in to those impulses.
tl;dr i hope that people realize that instead of this just being a time to dogpile on one guy (or a few people), that it's actually about a larger systemic problem, and the perfect breeding grounds society has created for this kind of behavior to largely go unchecked!!!
238 notes · View notes
mostlysignssomeportents · 1 year ago
Text
Copyright takedowns are a cautionary tale that few are heeding
Tumblr media
On July 14, I'm giving the closing keynote for the fifteenth HACKERS ON PLANET EARTH, in QUEENS, NY. Happy Bastille Day! On July 20, I'm appearing in CHICAGO at Exile in Bookville.
Tumblr media
We're living through one of those moments when millions of people become suddenly and overwhelmingly interested in fair use, one of the subtlest and worst-understood aspects of copyright law. It's not a subject you can master by skimming a Wikipedia article!
I've been talking about fair use with laypeople for more than 20 years. I've met so many people who possess the unshakable, serene confidence of the truly wrong, like the people who think fair use means you can take x words from a book, or y seconds from a song and it will always be fair, while anything more will never be.
Or the people who think that if you violate any of the four factors, your use can't be fair – or the people who think that if you fail all of the four factors, you must be infringing (people, the Supreme Court is calling and they want to tell you about the Betamax!).
You might think that you can never quote a song lyric in a book without infringing copyright, or that you must clear every musical sample. You might be rock solid certain that scraping the web to train an AI is infringing. If you hold those beliefs, you do not understand the "fact intensive" nature of fair use.
But you can learn! It's actually a really cool and interesting and gnarly subject, and it's a favorite of copyright scholars, who have really fascinating disagreements and discussions about the subject. These discussions often key off of the controversies of the moment, but inevitably they implicate earlier fights about everything from the piano roll to 2 Live Crew to antiracist retellings of Gone With the Wind.
One of the most interesting discussions of fair use you can ask for took place in 2019, when the NYU Engelberg Center on Innovation Law & Policy held a symposium called "Proving IP." One of the panels featured dueling musicologists debating the merits of the Blurred Lines case. That case marked a turning point in music copyright, with the Marvin Gaye estate successfully suing Robin Thicke and Pharrell Williams for copying the "vibe" of Gaye's "Got to Give it Up."
Naturally, this discussion featured clips from both songs as the experts – joined by some of America's top copyright scholars – delved into the legal reasoning and future consequences of the case. It would be literally impossible to discuss this case without those clips.
And that's where the problems start: as soon as the symposium was uploaded to Youtube, it was flagged and removed by Content ID, Google's $100,000,000 copyright enforcement system. This initial takedown was fully automated, which is how Content ID works: rightsholders upload audio to claim it, and then Content ID removes other videos where that audio appears (rightsholders can also specify that videos with matching clips be demonetized, or that the ad revenue from those videos be diverted to the rightsholders).
But Content ID has a safety valve: an uploader whose video has been incorrectly flagged can challenge the takedown. The case is then punted to the rightsholder, who has to manually renew or drop their claim. In the case of this symposium, the rightsholder was Universal Music Group, the largest record company in the world. UMG's personnel reviewed the video and did not drop the claim.
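The claim-and-dispute flow described above can be sketched as a toy decision function. This is purely illustrative — it is not Google's actual implementation or API, just the branching logic as described: match triggers enforcement, a dispute punts the call back to the rightsholder, and the rightsholder decides.

```python
# Toy sketch of the Content ID claim/dispute flow described above.
# Purely illustrative — not Google's actual system.

def content_id_flow(matches_claimed_audio, uploader_disputes, rightsholder_releases):
    """Return the fate of an upload under the flow described above."""
    if not matches_claimed_audio:
        return "published"
    if not uploader_disputes:
        return "taken down (or demonetized / revenue diverted)"
    # A dispute punts the decision back to the rightsholder.
    if rightsholder_releases:
        return "restored"
    return "taken down (claim upheld by rightsholder)"

# The Engelberg panel: matched, disputed, and UMG did not drop the claim.
print(content_id_flow(True, True, False))
```

Note who never appears in this flow: a court. Every branch is decided by either the platform's matcher or the rightsholder itself.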
99.99% of the time, that's where the story would end, for many reasons. First of all, most people don't understand fair use well enough to contest the judgment of a cosmically vast, unimaginably rich monopolist who wants to censor their video. Just as importantly, though, is that Content ID is a Byzantine system that is nearly as complex as fair use, but it's an entirely private affair, created and adjudicated by another galactic-scale monopolist (Google).
Google's copyright enforcement system is a cod-legal regime with all the downsides of the law, and a few wrinkles of its own (for example, it's a system without lawyers – just corporate experts doing battle with laypeople). And a single mis-step can result in your video being deleted or your account being permanently deleted, along with every video you've ever posted. For people who make their living on audiovisual content, losing your Youtube account is an extinction-level event:
https://www.eff.org/wp/unfiltered-how-youtubes-content-id-discourages-fair-use-and-dictates-what-we-see-online
So for the average Youtuber, Content ID is a kind of Kafka-as-a-Service system that is always avoided and never investigated. But the Engelberg Center isn't your average Youtuber: they boast some of the country's top copyright experts, specializing in exactly the questions Youtube's Content ID is supposed to be adjudicating.
So naturally, they challenged the takedown – only to have UMG double down. This is par for the course with UMG: they are infamous for refusing to consider fair use in takedown requests. Their stance is so unreasonable that a court actually found them guilty of violating the DMCA's provision against fraudulent takedowns:
https://www.eff.org/cases/lenz-v-universal
But the DMCA's takedown system is part of the real law, while Content ID is a fake law, created and overseen by a tech monopolist, not a court. So the fate of the Blurred Lines discussion turned on the Engelberg Center's ability to navigate both the law and the n-dimensional topology of Content ID's takedown flowchart.
It took more than a year, but eventually, Engelberg prevailed.
Until they didn't.
If Content ID was a person, it would be a baby, specifically, a baby under 18 months old – that is, before the development of "object permanence." Until our 18th month (or so), we lack the ability to reason about things we can't see – this is the period when small babies find peek-a-boo amazing. Object permanence is the ability to understand things that aren't in your immediate field of vision.
Content ID has no object permanence. Despite the fact that the Engelberg Blurred Lines panel was the most involved fair use question the system was ever called upon to parse, it managed to repeatedly forget that it had decided that the panel could stay up. Over and over since that initial determination, Content ID has taken down the video of the panel, forcing Engelberg to go through the whole process again.
But that's just for starters, because Youtube isn't the only place where a copyright enforcement bot is making billions of unsupervised, unaccountable decisions about what audiovisual material you're allowed to access.
Spotify is yet another monopolist, with a justifiable reputation for being extremely hostile to artists' interests, thanks in large part to the role that UMG and the other major record labels played in designing its business rules:
https://pluralistic.net/2022/09/12/streaming-doesnt-pay/#stunt-publishing
Spotify has spent hundreds of millions of dollars trying to capture the podcasting market, in the hopes of converting one of the last truly open digital publishing systems into a product under its control:
https://pluralistic.net/2023/01/27/enshittification-resistance/#ummauerter-garten-nein
Thankfully, that campaign has failed – but millions of people have (unwisely) ditched their open podcatchers in favor of Spotify's pre-enshittified app, so everyone with a podcast now must target Spotify for distribution if they hope to reach those captive users.
Guess who has a podcast? The Engelberg Center.
Naturally, Engelberg's podcast includes the audio of that Blurred Lines panel, and that audio includes samples from both "Blurred Lines" and "Got To Give It Up."
So – naturally – UMG keeps taking down the podcast.
Spotify has its own answer to Content ID, and incredibly, it's even worse and harder to navigate than Google's pretend legal system. As Engelberg describes in its latest post, UMG and Spotify have colluded to ensure that this now-classic discussion of fair use will never be able to take advantage of fair use itself:
https://www.nyuengelberg.org/news/how-explaining-copyright-broke-the-spotify-copyright-system/
Remember, this is the best case scenario for arguing about fair use with a monopolist like UMG, Google, or Spotify. As Engelberg puts it:
The Engelberg Center had an extraordinarily high level of interest in pursuing this issue, and legal confidence in our position that would have cost an average podcaster tens of thousands of dollars to develop. That cannot be what is required to challenge the removal of a podcast episode.
Automated takedown systems are the tech industry's answer to the "notice-and-takedown" system that was invented to broker a peace between copyright law and the internet, starting with the US's 1998 Digital Millennium Copyright Act. The DMCA implements (and exceeds) a pair of 1996 UN treaties, the WIPO Copyright Treaty and the Performances and Phonograms Treaty, and most countries in the world have some version of notice-and-takedown.
Big corporate rightsholders claim that notice-and-takedown is a gift to the tech sector, one that allows tech companies to get away with copyright infringement. They want a "strict liability" regime, where any platform that allows a user to post something infringing is liable for that infringement, to the tune of $150,000 in statutory damages.
Of course, there's no way for a platform to know a priori whether something a user posts infringes on someone's copyright. There is no registry of everything that is copyrighted, and of course, fair use means that there are lots of ways to legally reproduce someone's work without their permission (or even when they object). Even if every person who ever has trained or ever will train as a copyright lawyer worked 24/7 for just one online platform to evaluate every tweet, video, audio clip and image for copyright infringement, they wouldn't be able to touch even 1% of what gets posted to that platform.
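The scale argument can be made concrete with back-of-the-envelope arithmetic. Every number below is a generous, loudly assumed illustration — not a sourced statistic — and even so, human review can't reach 1% of one large platform's daily volume.

```python
# Back-of-the-envelope: could human lawyers review every post?
# All inputs are illustrative assumptions, not real statistics.
lawyers = 100_000            # assume an army of copyright lawyers
reviews_per_day = 40         # assume each does 40 careful fair-use analyses daily
posts_per_day = 500_000_000  # assume one big platform's daily posts

reviewed_fraction = (lawyers * reviews_per_day) / posts_per_day
print(f"{reviewed_fraction:.1%}")  # 0.8% under these generous assumptions
```

Even tweaking the assumptions by an order of magnitude doesn't change the conclusion: a priori human review of everything posted is arithmetic fantasy.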
The "compromise" that the entertainment industry wants is automated takedown – a system like Content ID, where rightsholders register their copyrights and platforms block anything that matches the registry. This "filternet" proposal became law in the EU in 2019 with Article 17 of the Digital Single Market Directive:
https://www.eff.org/deeplinks/2018/09/today-europe-lost-internet-now-we-fight-back
This was the most controversial directive in EU history, and – as experts warned at the time – there is no way to implement it without violating the GDPR, Europe's privacy law, so now it's stuck in limbo:
https://www.eff.org/deeplinks/2022/05/eus-copyright-directive-still-about-filters-eus-top-court-limits-its-use
As critics pointed out during the EU debate, there are so many problems with filternets. For one thing, these copyright filters are very expensive: remember that Google has spent $100m on Content ID alone, and that only does a fraction of what filternet advocates demand. Building the filternet would cost so much that only the biggest tech monopolists could afford it, which is to say, filternets are a legal requirement to keep the tech monopolists in business and prevent smaller, better platforms from ever coming into existence.
Filternets are also incapable of telling the difference between similar files. This is especially problematic for classical musicians, who routinely find their work blocked or demonetized by Sony Music, which claims performances of all the most important classical music compositions:
https://pluralistic.net/2021/05/08/copyfraud/#beethoven-just-wrote-music
Content ID can't tell the difference between your performance of "The Goldberg Variations" and Glenn Gould's. For classical musicians, the best case scenario is to have their online wages stolen by Sony, who fraudulently claim copyright to their recordings. The worst case scenario is that their video is blocked, their channel deleted, and their names blacklisted from ever opening another account on one of the monopoly platforms.
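A deliberately simplified sketch of why this happens: audio fingerprints key on features of the *composition* that every performance shares. Real systems like Content ID use far more sophisticated acoustic matching, but the toy interval-based fingerprint below (the note sequences are hypothetical MIDI pitches, not actual Bach) illustrates the underlying conflation:

```python
# Toy illustration of why composition-level fingerprints cannot
# distinguish two performances of the same piece. Real fingerprinting
# is far more sophisticated, but faces the same core problem: the
# features it matches on belong to the score, not the performer.

def fingerprint(notes):
    """Fingerprint a performance as its sequence of pitch intervals,
    ignoring tempo, dynamics, and recording quality."""
    intervals = tuple(b - a for a, b in zip(notes, notes[1:]))
    return hash(intervals)

# The same (hypothetical) opening phrase, played by two performers:
famous_recording = [67, 67, 69, 67, 72, 71]
your_recital     = [67, 67, 69, 67, 72, 71]  # identical score, different player

unrelated_piece  = [60, 62, 64, 65, 67, 69]

print(fingerprint(famous_recording) == fingerprint(your_recital))     # match!
print(fingerprint(famous_recording) == fingerprint(unrelated_piece))  # no match
```

Both "performances" share every interval, so any fingerprint built on compositional features will call them a match — which is exactly how your Goldberg Variations become Glenn Gould's.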
But when it comes to free expression, the role that notice-and-takedown and filternets play in the creative industries is really a sideshow. In creating a system of no-evidence-required takedowns, with no real consequences for fraudulent takedowns, these systems are a huge gift to the world's worst criminals. For example, "reputation management" companies help convicted rapists, murderers, and even war criminals purge the internet of true accounts of their crimes by claiming copyright over them:
https://pluralistic.net/2021/04/23/reputation-laundry/#dark-ops
Remember how during the covid lockdowns, scumbags marketed junk devices by claiming that they'd protect you from the virus? Their products remained online, while the detailed scientific articles warning people about the fraud were speedily removed through false copyright claims:
https://pluralistic.net/2021/10/18/labor-shortage-discourse-time/#copyfraud
Copyfraud – making false copyright claims – is an extremely safe crime to commit, and it's not just quack covid remedy peddlers and war criminals who avail themselves of it. Tech giants like Adobe do not hesitate to abuse the takedown system, even when that means exposing millions of people to spyware:
https://pluralistic.net/2021/10/13/theres-an-app-for-that/#gnash
Dirty cops play loud, copyrighted music during confrontations with the public, in the hopes that this will trigger copyright filters on services like Youtube and Instagram and block videos of their misbehavior:
https://pluralistic.net/2021/02/10/duke-sucks/#bhpd
But even if you solved all these problems with filternets and takedown, this system would still choke on fair use and other copyright exceptions. These are "fact intensive" questions that the world's top experts struggle with (as anyone who watches the Blurred Lines panel can see). There's no way we can get software to accurately determine when a use is or isn't fair.
That's a question that the entertainment industry itself is increasingly conflicted about. The Blurred Lines judgment opened the floodgates to a new kind of copyright troll – grifters who sued the record labels and their biggest stars for taking the "vibe" of songs that no one ever heard of. Musicians like Ed Sheeran have been sued for millions of dollars over these alleged infringements. These suits caused the record industry to (ahem) change its tune on fair use, insisting that fair use should be broadly interpreted to protect people who made things that were similar to existing works. The labels understood that if "vibe rights" became accepted law, they'd end up in the kind of hell that the rest of us enter when we try to post things online – where anything they produce can trigger takedowns, long legal battles, and millions in liability:
https://pluralistic.net/2022/04/08/oh-why/#two-notes-and-running
But the music industry remains deeply conflicted over fair use. Take the curious case of Katy Perry's song "Dark Horse," which attracted a multimillion-dollar suit from an obscure Christian rapper who claimed that a brief phrase in "Dark Horse" was impermissibly similar to his song "A Joyful Noise."
Perry and her publisher, Warner Chappell, lost the suit and were ordered to pay $2.8m. While they subsequently won an appeal, this definitely put the cold grue up Warner Chappell's back. They could see a long future of similar suits launched by treasure hunters hoping for a quick settlement.
But here's where it gets unbelievably weird and darkly funny. A Youtuber named Adam Neely made a wildly successful viral video about the suit, taking Perry's side and defending her song. As part of that video, Neely included a few seconds' worth of "A Joyful Noise," the song that Perry was accused of copying.
In court, Warner Chappell had argued that "A Joyful Noise" was not similar to Perry's "Dark Horse." But when Warner had Google remove Neely's video, they claimed that the sample from "Joyful Noise" was actually taken from "Dark Horse." Incredibly, they maintained this position through multiple appeals through the Content ID system:
https://pluralistic.net/2020/03/05/warner-chappell-copyfraud/#warnerchappell
In other words, they maintained that the song that they'd told the court was totally dissimilar to their own was so indistinguishable from their own song that they couldn't tell the difference!
Now, this question of vibes, similarity and fair use has only gotten more intense since the takedown of Neely's video. Just this week, the RIAA sued several AI companies, claiming that the songs the AI shits out are infringingly similar to tracks in their catalog:
https://www.rollingstone.com/music/music-news/record-labels-sue-music-generators-suno-and-udio-1235042056/
Even before "Blurred Lines," this was a difficult fair use question to answer, with lots of chewy nuances. Just ask George Harrison:
https://en.wikipedia.org/wiki/My_Sweet_Lord
But as the Engelberg panel's cohort of dueling musicologists and renowned copyright experts proved, this question only gets harder as time goes by. If you listen to that panel (if you can listen to that panel), you'll be hard pressed to come away with any certainty about the questions in this latest lawsuit.
The notice-and-takedown system is what's known as an "intermediary liability" rule. Platforms are "intermediaries" in that they connect end users with each other and with businesses. Ebay and Etsy and Amazon connect buyers and sellers; Facebook and Google and Tiktok connect performers, advertisers and publishers with audiences and so on.
For copyright, notice-and-takedown gives platforms a "safe harbor." A platform doesn't have to remove material after an allegation of infringement, but if they don't, they're jointly liable for any future judgment. In other words, Youtube isn't required to take down the Engelberg Blurred Lines panel, but if UMG sues Engelberg and wins a judgment, Google will also have to pay out.
During the adoption of the 1996 WIPO treaties and the 1998 US DMCA, this safe harbor rule was characterized as a balance between the rights of the public to publish online and the interest of rightsholders whose material might be infringed upon. The idea was that things that were likely to be infringing would be immediately removed once the platform received a notification, but that platforms would ignore spurious or obviously fraudulent takedowns.
That's not how it worked out. Whether it's Sony Music claiming to own your performance of "Für Elise" or a war criminal claiming authorship over a newspaper story about his crimes, platforms nuke first and ask questions never. Why not? If they ignore a takedown and get it wrong, they suffer dire consequences ($150,000 per claim). But if they take action on a dodgy claim, there are no consequences. Of course they're just going to delete anything they're asked to delete.
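The asymmetry reduces to a trivial expected-cost calculation. The $150,000 figure is the statutory-damages number cited above; the probability that any given claim is valid is an invented illustration:

```python
# Sketch of the platform's incentive under notice-and-takedown.
# The probability below is hypothetical; the damages ceiling is the
# statutory $150,000-per-work figure.

STATUTORY_DAMAGES = 150_000

def expected_cost(action, p_claim_valid):
    if action == "ignore":
        # If the claim turns out to be valid, the platform is
        # jointly liable for the eventual judgment.
        return p_claim_valid * STATUTORY_DAMAGES
    # Removing the material costs the platform nothing, even when
    # the claim is fraudulent.
    return 0.0

# Even a claim that is 99% certain to be bogus is worth complying with:
print(expected_cost("ignore", p_claim_valid=0.01))    # 1500.0
print(expected_cost("takedown", p_claim_valid=0.01))  # 0.0
```

Whatever probability you plug in, "takedown" never costs more than "ignore" — so a profit-maximizing platform always deletes.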
This is how platforms always handle liability, and that's a lesson that we really should have internalized by now. After all, the DMCA is the second-most famous intermediary liability system for the internet – the most (in)famous is Section 230 of the Communications Decency Act.
This is a 26-word law that says that platforms are not liable for civil damages arising from their users' speech. Now, this is a US law, and in the US, there aren't many civil damages from speech to begin with. The First Amendment makes it very hard to get a libel judgment, and even when these judgments are secured, damages are typically limited to "actual damages" – generally a low sum. Most of the worst online speech is actually not illegal: hate speech, misinformation and disinformation are all covered by the First Amendment.
Notwithstanding the First Amendment, there are categories of speech that US law criminalizes: actual threats of violence, criminal harassment, and committing certain kinds of legal, medical, election or financial fraud. These are all exempted from Section 230, which only provides immunity for civil suits, not criminal acts.
What Section 230 really protects platforms from is being named to unwinnable nuisance suits by unscrupulous parties who are betting that the platforms would rather remove legal speech that they object to than go to court. A generation of copyfraudsters have proved that this is a very safe bet:
https://www.techdirt.com/2020/06/23/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act/
In other words, if you made a #MeToo accusation, or if you were a gig worker using an online forum to organize a union, or if you were blowing the whistle on your employer's toxic waste leaks, or if you were any other under-resourced person being bullied by a wealthy, powerful person or organization, that organization could shut you up by threatening to sue the platform that hosted your speech. The platform would immediately cave. But those same rich and powerful people would have access to the lawyers and back-channels that would prevent you from doing the same to them – that's why Sony can get your Brahms recital taken down, but you can't turn around and do the same to them.
This is true of every intermediary liability system, and it's been true since the earliest days of the internet, and it keeps getting proven to be true. Six years ago, Trump signed SESTA/FOSTA, a law that allowed platforms to be held civilly liable by survivors of sex trafficking. At the time, advocates claimed that this would only affect "sexual slavery" and would not impact consensual sex-work.
But from the start, and ever since, SESTA/FOSTA has primarily targeted consensual sex-work, to the immediate, lasting, and profound detriment of sex workers:
https://hackinghustling.org/what-is-sesta-fosta/
SESTA/FOSTA killed the "bad date" forums where sex workers circulated the details of violent and unstable clients, killed the online booking sites that allowed sex workers to screen their clients, and killed the payment processors that let sex workers avoid holding unsafe amounts of cash:
https://www.eff.org/deeplinks/2022/09/fight-overturn-fosta-unconstitutional-internet-censorship-law-continues
SESTA/FOSTA made voluntary sex work more dangerous – and also made life harder for law enforcement efforts to target sex trafficking:
https://hackinghustling.org/erased-the-impact-of-fosta-sesta-2020/
Despite half a decade of SESTA/FOSTA, despite 15 years of filternets, despite a quarter century of notice-and-takedown, people continue to insist that getting rid of safe harbors will punish Big Tech and make life better for everyday internet users.
As of now, it seems likely that Section 230 will be dead by the end of 2025, even if there is nothing in place to replace it:
https://energycommerce.house.gov/posts/bipartisan-energy-and-commerce-leaders-announce-legislative-hearing-on-sunsetting-section-230
This isn't the win that some people think it is. By making platforms responsible for screening the content their users post, we create a system that only the largest tech monopolies can survive, and only then by removing or blocking anything that threatens or displeases the wealthy and powerful.
Filternets are not precision-guided takedown machines; they're indiscriminate cluster-bombs that destroy anything in the vicinity of illegal speech – including (and especially) the best-informed, most informative discussions of how these systems go wrong, and how that blocks the complaints of the powerless, the marginalized, and the abused.
Support me this summer on the Clarion Write-A-Thon and help raise money for the Clarion Science Fiction and Fantasy Writers' Workshop!
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/06/27/nuke-first/#ask-questions-never
Image: EFF https://www.eff.org/files/banner_library/yt-fu-1b.png
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
677 notes · View notes
probablyasocialecologist · 10 months ago
Text
This is it. Generative AI, as a commercial tech phenomenon, has reached its apex. The hype is evaporating. The tech is too unreliable, too often. The vibes are terrible. The air is escaping from the bubble. To me, the question is more about whether the air will rush out all at once, sending the tech sector careening downward like a balloon that someone blew up, failed to tie off properly, and let go—or more slowly, shrinking down to size in gradual sputters, while emitting embarrassing fart sounds, like a balloon being deliberately pinched around the opening by a smirking teenager.

But come on. The jig is up. The technology that was at this time last year being somberly touted as so powerful that it posed an existential threat to humanity is now worrying investors because it is apparently incapable of generating passable marketing emails reliably enough. We’ve had at least a year of companies shelling out for business-grade generative AI, and the results—painted as shinily as possible from a banking and investment sector that would love nothing more than a new technology that can automate office work and creative labor—are one big “meh.” As a Bloomberg story put it last week, “Big Tech Fails to Convince Wall Street That AI Is Paying Off.” From the piece:

Amazon.com Inc., Microsoft Corp. and Alphabet Inc. had one job heading into this earnings season: show that the billions of dollars they’ve each sunk into the infrastructure propelling the artificial intelligence boom is translating into real sales. In the eyes of Wall Street, they disappointed. Shares in Google owner Alphabet have fallen 7.4% since it reported last week. Microsoft’s stock price has declined in the three days since the company’s own results. Shares of Amazon — the latest to drop its earnings on Thursday — plunged by the most since October 2022 on Friday.

Silicon Valley hailed 2024 as the year that companies would begin to deploy generative AI, the type of technology that can create text, images and videos from simple prompts. This mass adoption is meant to finally bring about meaningful profits from the likes of Google’s Gemini and Microsoft’s Copilot. The fact that those returns have yet to meaningfully materialize is stoking broader concerns about how worthwhile AI will really prove to be.

Meanwhile, Nvidia, the AI chipmaker that soared to an absurd $3 trillion valuation, is losing that value with every passing day—26% over the last month or so, and some analysts believe that’s just the beginning. These declines are the result of less-than-stellar early results from corporations who’ve embraced enterprise-tier generative AI, the distinct lack of killer commercial products 18 months into the AI boom, and scathing financial analyses from Goldman Sachs, Sequoia Capital, and Elliot Management, each of whom concluded that there was “too much spend, too little benefit” from generative AI, in the words of Goldman, and that it was “overhyped” and a “bubble” per Elliot. As CNN put it in its report on growing fears of an AI bubble,

Some investors had even anticipated that this would be the quarter that tech giants would start to signal that they were backing off their AI infrastructure investments since “AI is not delivering the returns that they were expecting,” D.A. Davidson analyst Gil Luria told CNN. The opposite happened — Google, Microsoft and Meta all signaled that they plan to spend even more as they lay the groundwork for what they hope is an AI future.

This can, perhaps, explain some of the investor revolt. The tech giants have responded to mounting concerns by doubling, even tripling down, and planning on spending tens of billions of dollars on researching, developing, and deploying generative AI for the foreseeable future. All this as high profile clients are canceling their contracts.
As surveys show that overwhelming majorities of workers say generative AI makes them less productive. As MIT economist and automation scholar Daron Acemoglu warns, “Don’t believe the AI hype.”
6 August 2024
182 notes · View notes
reasonsforhope · 1 year ago
Text
"Major technology companies signed a pact on Friday to voluntarily adopt "reasonable precautions" to prevent artificial intelligence (AI) tools from being used to disrupt democratic elections around the world.
Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters. 
Twelve other companies - including Elon Musk's X - are also signing on to the accord...
The accord is largely symbolic, but targets increasingly realistic AI-generated images, audio, and video "that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote".
The companies aren't committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. 
It notes the companies will share best practices and provide "swift and proportionate responses" when that content starts to spread.
Lack of binding requirements
The vagueness of the commitments and lack of any binding requirements likely helped win over a diverse swath of companies, but disappointed advocates who were looking for stronger assurances.
"The language isn't quite as strong as one might have expected," said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. 
"I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we'll be keeping an eye on whether they follow through." ...
Several political leaders from Europe and the US also joined Friday’s announcement. European Commission Vice President Vera Jourova said while such an agreement can’t be comprehensive, "it contains very impactful and positive elements".  ...
[The Accord and Where We're At]
The accord calls on platforms to "pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression".
It said the companies will focus on transparency to users about their policies and work to educate the public about how they can avoid falling for AI fakes.
Most companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know if what they’re seeing is real. But most of those proposed solutions haven't yet rolled out and the companies have faced pressure to do more.
That pressure is heightened in the US, where Congress has yet to pass laws regulating AI in politics, leaving companies to largely govern themselves.
The Federal Communications Commission recently confirmed AI-generated audio clips in robocalls are against the law [in the US], but that doesn't cover audio deepfakes when they circulate on social media or in campaign advertisements.
Many social media companies already have policies in place to deter deceptive posts about electoral processes - AI-generated or not... 
[Signatories Include]
In addition to the companies that helped broker Friday's agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice-clone startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and TrendMicro; and Stability AI, known for making the image-generator Stable Diffusion.
Notably absent is another popular AI image-generator, Midjourney. The San Francisco-based startup didn't immediately respond to a request for comment on Friday.
The inclusion of X - not mentioned in an earlier announcement about the pending accord - was one of the surprises of Friday's agreement."
-via EuroNews, February 17, 2024
--
Note: No idea whether this will actually do much of anything (would love to hear from people with experience in this area on how significant this is), but I'll definitely take it. Some of these companies may even mean it! (X/Twitter almost definitely doesn't, though).
Still, like I said, I'll take it. Any significant move toward tech companies self-regulating AI is a good sign, as far as I'm concerned, especially a large-scale and international effort. Even if it's a "mostly symbolic" accord, the scale and prominence of this accord is encouraging, and it sets a precedent for further regulation to build on.
148 notes · View notes