#Creative Thinking Training Program
Text
Unleashing Potential through Corporate Training in Mumbai with Atlas Learning
The leadership programs are designed to focus on building key traits such as emotional intelligence, decision-making, conflict resolution, and team management. These traits are essential for any leader looking to drive their team to success. The Best Leadership Training provided by Atlas Learning is customized to fit the unique challenges faced by different industries, ensuring that leaders are not only prepared for today’s challenges but also equipped to lead in the future. Read more at: https://shorturl.at/qCYJP
#Corporate Training in Mumbai #Best Leadership Training #Creative Thinking Training Program #box training
Note
You’ve probably been asked this before, but do you have a specific view on AI-generated art? I’m doing a school project on artificial intelligence and, if it’s okay, I would like to cite you.
I mean, you're welcome to cite me if you like. I recently wrote a post under a reblog about AI, and I did a video about it a while back, before the full scale of AI hype had really started rolling over the Internet - I don't 100% agree with all my arguments from that video anymore, but you can cite it if you please.
In short, I think generative AI art is art, real art, and it's silly to argue otherwise; the question is what KIND of art it is and what that art DOES in the world. Generally, it is boring and bland art which makes the world a more stressful, unpleasant and miserable place to be.
AI generated art is structurally and inherently limited by its nature. It is by necessity averages generated from data-sets, and so it inherits EVERY bias of its training data and EVERY bias of its training data validators and creators. It naturally tends towards the lowest common denominator in all areas, and it is structurally biased towards reinforcing and reaffirming the status quo of everything it is turned to.
It tends to be all surface, no substance. As in, it carries the superficial aesthetic of very high-quality rendering, but only insofar as it reproduces whatever signifiers of "quality" are most prized in its weighted training data. It cannot understand the structures and principles of what it is creating. Ask it for a horse and it does not know what a "horse" is, all it knows is what parts of its training data are tagged as "horse" and which general data patterns are likely to lead an observer to identify its output also as "horse." People sometimes describe this limitation as "a lack of soul" but it's perhaps more useful to think of it as a lack of comprehension.
Due to this lack of comprehension, AI art cannot communicate anything - or rather, the output tends to attempt to communicate everything, at random, all at once, and it's the visual equivalent of a kind of white noise. It lacks focus.
Human operators of AI generative tools can imbue communicative meaning into the outputs, and whip the models towards some sort of focus, because humans can do that with literally anything they turn their directed attention towards. Human beings can make art with paint spatters and bits of gum stuck under tennis shoes, of course a dedicated human putting tons of time into a process of trial and error can produce something meaningful with genAI tools.
The nature of genAI as a tool of creation is uniquely limited and uniquely constrained: a genAI tool can only ever output some mixture of whatever is in its training data (and what's in its training data is biased by the data that its creators valued enough to include), and it can only ever output that mixture according to the weights and biases of its programming and data set, which is fully within the control of whoever created the tool in the first place. Consequently, genAI is a tool whose full creative capacity is always, always, always going to be owned by corporations, the only entities with the resources and capacity to produce the most powerful models. And those models, thus, will always only create according to corporate interest. An individual human can use a pencil to draw whatever the hell they want, but an individual human can never use Midjourney to create anything except that which Midjourney allows them to create. GenAI art is thus limited not only by its mathematical tendency to bias towards the lowest common denominator, but also by an ideological bias inherited from whoever holds the leash on its creation. The necessary decision of which data gets included in a training set vs which data gets left out will, always and forever, impose de facto censorship on what a model is capable of expressing, and the power to make that decision is never in the hands of the artist attempting to use the tool.
tl;dr genAI art has a tendency to produce ideologically limited and intrinsically censored outputs, while defaulting to lowest common denominators that reproduce and reinforce status quos.
... on top of which its promulgation is an explicit plot by oligarchic industry to drive millions of people deeper into poverty and collapse wages in order to further concentrate wealth in the hands of the 0.01%. But that's just a bonus reason to dislike it.
Text
Anon's explanation:
I’m curious because I see a lot of people claiming to be anti-AI, and in the same post advocating for the use of Glaze and Artshield, which use DiffusionBee and Stable Diffusion, respectively. Glaze creates a noise filter using DiffusionBee; Artshield runs your image through Stable Diffusion and edits it so that it reads as AI-generated. You don’t have to take my word for it. Search for DiffusionBee and Glaze yourself if you have doubts. I’m also curious about machine translation, since Google Translate is trained on the same kinds of data as ChatGPT (social media, etc) and translation work is also skilled creative labor, but people seem to have no qualms about using it. The same goes for text to speech—a lot of the voices people use for it were trained on professional audiobook narration, and voice acting/narration is also skilled creative labor. Basically, I’m curious because people seem to regard these types of gen AI differently than text gen and image gen. Is it because they don’t know? Is it because they don’t think the work it replaces is creative? Is it because of accessibility? (and, if so, why are other types of gen AI not also regarded as accessibility? And even then, it wouldn’t explain the use of Glaze/Artshield)
Additional comments from anon:
I did some digging by infiltrating (lurking in) pro-AI spaces to see how much damage Glaze and other such programs were doing. Unfortunately, it turns out none of those programs deter people from using the ‘protected’ art. In fact, because of how AI training works, they may actually result in better output? Something about adversarial training. It was super disappointing. Nobody in those spaces considers them even a mild deterrent anywhere I looked. Hopefully people can shed some light on the contradictions for me. Even just knowing how widespread their use is would be informative. (I’m not asking about environmental impact as a factor because I read the study everybody cited, and it wasn’t even anti-AI? It was about figuring out the best time of day to train a model to balance solar power vs water use and consumption. And the way they estimated the impact of AI was super weird? They just went with 2020’s data center growth rate as the ‘normal’ growth rate and then any ‘extra’ growth was considered AI. Maybe that’s why it didn’t pass peer review... But since people are still quoting it, that’s another reason for me to wonder why they would use Glaze and Artshield and everything. That’s why running them locally has such heavy GPU requirements and why it takes so long to process an image if you don’t meet the requirements. It’s the same electricity/water cost as generating any other AI image.)
–
We ask your questions anonymously so you don’t have to! Submissions are open on the 1st and 15th of the month.
#polls #incognito polls #anonymous #tumblr polls #tumblr users #questions #polls about ethics #submitted april 15 #polls about the internet #ai #gen ai #generative ai #ai tools #technology
Text
the scale of AI's ecological footprint
standalone version of my response to the following:
"you need soulless art? [...] why should you get to use all that computing power and electricity to produce some shitty AI art? i don’t actually think you’re entitled to consume those resources." "i think we all deserve nice things. [...] AI art is not a nice thing. it doesn’t meaningfully contribute to us thriving and the cost in terms of energy use [...] is too fucking much. none of us can afford to foot the bill." "go watch some tv show or consume some art that already exists. […] you know what’s more environmentally and economically sustainable […]? museums. galleries. being in nature."
you can run free and open source AI art programs on your personal computer, with no internet connection. this doesn't require much more electricity than running a resource-intensive video game on that same computer. i think it's important to consume less. but if you make these arguments about AI, do you apply them to video games too? do you tell Fortnite players to play board games and go to museums instead?
speaking of museums: if you drive 3 miles total to a museum and back home, you have consumed more energy and created more pollution than generating AI images for 24 hours straight (this comes out to roughly 1400 AI images). "being in nature" also involves at least this much driving, usually. i don't think these are more environmentally-conscious alternatives.
obviously, an AI image model costs energy to train in the first place, but take Stable Diffusion v2 as an example: it took 40,000 to 60,000 kWh to train. let's go with the upper bound. if you assume ~125g of CO2 per kWh, that's ~7.5 tons of CO2. to put this into perspective, a single person driving a single car for 12 months emits 4.6 tons of CO2. meanwhile, for example, the creation of a high-budget movie emits 2840 tons of CO2.
is the carbon cost of a single car being driven for 20 months, or 1/378th of a Marvel movie, worth letting anyone with a mid-end computer, anywhere, run free offline software that consumes a gaming session's worth of electricity to produce hundreds of images? i would say yes. in a heartbeat.
even if you see creating AI images as "less soulful" than consuming Marvel/Fortnite content, it's undeniably "more useful" to humanity as a tool. not to mention this usefulness includes reducing the footprint of creating media. AI is more environment-friendly than human labor on digital creative tasks, since it can get a task done with much less computer usage, doesn't commute to work, and doesn't eat.
and speaking of eating, another comparison: if you made an AI image program generate images non-stop for every second of every day for an entire year, you could offset your carbon footprint by… eating 30% less beef and lamb. not pork. not even meat in general. just beef and lamb.
the tech industry is guilty of plenty of horrendous stuff. but when it comes to the individual impact of AI, saying "i don’t actually think you’re entitled to consume those resources. do you need this? is this making you thrive?" to an individual running an AI program for 45 minutes a day over a month is equivalent to questioning whether that person is entitled to a single 3 mile car drive once per month or a single meatball's worth of beef once per month. because all of these have the same CO2 footprint.
so yeah. i agree, i think we should drive less, eat less beef, stream less video, consume less. but i don't think we should tell people "stop using AI programs, just watch a TV show, go to a museum, go hiking, etc", for the same reason i wouldn't tell someone "stop playing video games and play board games instead". i don't think this is a productive angle.
(sources and number-crunching under the cut.)
good general resource: GiovanH's article "Is AI eating all the energy?", which highlights the negligible costs of running an AI program, the moderate costs of creating an AI model, and the actual indefensible energy waste coming from specific companies deploying AI irresponsibly.
CO2 emissions from running AI art programs: a) one AI image takes 3 Wh of electricity. b) one AI image takes about 1 minute to generate in, for example, Midjourney. c) so if you create 1 AI image per minute for 24 hours straight, or for 45 minutes per day for a month, you've consumed 4.3 kWh. d) using the UK electric grid through 2024 as an example, the production of 1 kWh releases 124g of CO2. therefore the production of 4.3 kWh releases 533g (~0.5 kg) of CO2.
CO2 emissions from driving your car: cars in the EU emit 106.4g of CO2 per km. that's 171.19g for 1 mile, or 513g (~0.5 kg) for 3 miles.
costs of training the Stable Diffusion v2 model: quoting GiovanH's article linked in 1. "Generative models go through the same process of training. The Stable Diffusion v2 model was trained on A100 PCIe 40 GB cards running for a combined 200,000 hours, which is a specialized AI GPU that can pull a maximum of 300 W. 300 W for 200,000 hours gives a total energy consumption of 60,000 kWh. This is a high bound that assumes full usage of every chip for the entire period; SD2’s own carbon emission report indicates it likely used significantly less power than this, and other research has shown it can be done for less." at 124g of CO2 per kWh, this comes out to 7440 kg.
CO2 emissions from red meat: a) comparing the carbon footprints of diets with plenty of red meat, some red meat, only white meat, no meat, and no animal products: the difference between a beef/lamb diet and a no-beef-or-lamb diet comes down to 600 kg of CO2 per year. b) Americans consume 42g of beef per day. this doesn't really account for lamb (egads! my math is ruined!) but that's about 1.2 kg per month or 15 kg per year. that single daily 42g serving has a 1.65kg CO2 footprint. so our 3 mile drive/4.3 kWh of AI usage have the same carbon footprint as a 12g piece of beef. roughly the size of a meatball [citation needed].
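if you want to double-check the number-crunching, here's a small python sketch of the same arithmetic. the constants are the assumed figures from the sources above (3 Wh per image, 124g of CO2 per kWh, 106.4g of CO2 per km, 60,000 kWh of training, 600 kg of CO2 per year for beef/lamb, 42g of beef per day), not new measurements, so the rounding differs slightly from the numbers quoted above:

```python
# rough reproduction of the number-crunching above; all constants are the
# post's assumed figures, not independent measurements.
WH_PER_IMAGE = 3            # Wh of electricity per generated image
G_CO2_PER_KWH = 124         # UK grid carbon intensity, g of CO2 per kWh
G_CO2_PER_KM = 106.4        # average EU car emissions, g of CO2 per km
KM_PER_MILE = 1.609
TRAINING_KWH = 60_000       # upper-bound estimate for Stable Diffusion v2
BEEF_KG_CO2_PER_YEAR = 600  # beef/lamb diet vs. no-beef-or-lamb diet
BEEF_G_PER_DAY = 42         # average American beef consumption

# a) one image per minute for 24 hours (same total as 45 minutes/day for a month)
images = 24 * 60
kwh = images * WH_PER_IMAGE / 1000
print(f"{images} images -> {kwh:.2f} kWh -> {kwh * G_CO2_PER_KWH:.0f} g of CO2")

# b) a 3-mile round trip by car
print(f"3 miles by car -> {3 * KM_PER_MILE * G_CO2_PER_KM:.0f} g of CO2")

# c) training Stable Diffusion v2 (upper bound)
print(f"training SD v2 -> {TRAINING_KWH * G_CO2_PER_KWH / 1000:.0f} kg of CO2")

# d) grams of beef with roughly the same footprint as (a) or (b)
g_co2_per_g_beef = BEEF_KG_CO2_PER_YEAR * 1000 / (365 * BEEF_G_PER_DAY)
print(f"equivalent beef: ~{kwh * G_CO2_PER_KWH / g_co2_per_g_beef:.0f} g")
```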
Text
Ok so She-Ra pulled such a great hat trick with Hordak's characterization, and I LOVE it
One of my favorite things about 2018 She-Ra is Hordak's story and development (and Entrapdak cough but that's not the point of this particular post), and the cleverest thing is that so much of it is actually being set up and told to us in seasons 1 and 2 before we even realize that that's what's happening.
When we first see Hordak in the show, he's giving "generic evil overlord" vibes. Garden-variety baddie. Maybe a little more reasonable than some and clearly capable of long-term thinking, but that just serves to make him intimidating. Everything about him--the way he runs his empire, his armor, his color scheme, his minion, his Villainous Eye Makeup(TM), even his name--is projecting to the audience "yup, Acme Bad Guy here. Move right along."
But then, backstory. And everything snaps into focus. Not only is it one of the first big oh SHIT moments of the show, where we suddenly zoom out and realize that there is SO much more going on than we realized--it's also the start of the audience seeing Hordak as a character rather than an archetype. Suddenly we realize that he's not conquering Etheria because he wants power, or hates happiness and sparkles, or whatever--he's doing it out of a desperate attempt to prove his worth to his brother/creator/god. This moment where Hordak lets Entrapta in is also the moment the show lets us in on what makes our favorite spacebat tick.
On top of that, we've also seen him bonding with Entrapta and opening up to this person that he respects and trusts...probably the only person he's ever respected or trusted apart from Prime. And she's Etherian--someone of a lower species, someone he's supposed to subjugate, someone who he has been raised and trained and programmed and mind-controlled into believing is below him in every way.
But instead she's brilliant and creative and mesmerizing. She's not afraid of him, and she's fascinated with his work. For the first time since being abandoned by Prime, Hordak finally has someone that he can talk to, who is on his level and both understands and cares about the science! (because he is a giant nerd). She's kind to him, a mere defect. And it just sends his whole worldview into a spin, and that's all before--

Bam, mans is a goner. Entrapta's "Imperfections are beautiful" comment punches right through all the toxic bs that Hordak has been steeped in his entire life. You can see on his face here--I think it's the moment Hordak fell in love with Entrapta, but this is also the face of a spacebat reevaluating his entire worldview. If Entrapta, who is amazing, believes something different from Prime...what does that mean? If Entrapta, who is brilliant, believes that he is worth something, and that she herself is a failure...
Well. We know what happens after that, and how Hordak begins to doubt, and eventually fights back against Prime (and remembers his love for Entrapta after TWO mind wipes help my heart ack). But we also get to see what life in the Galactic Horde looks like: the only life Hordak ever knew before coming to Etheria.
It's not nice.
It's really not nice.
Prime operates in a very specific way, and we learn a lot about it in season 5. Prime expects complete obedience, devotion and worship from his clones. He allows no individuality from his subjects, not even a name. Failures or deviations are punished, mind-wiped, or destroyed. We even learn from Wrong Hordak that facial expressions are considered a privilege reserved for Prime (apart from, presumably, expressions of rapture caused by being around Prime).
And once we learn all of this, suddenly thinking about season 1 Hordak becomes very interesting indeed. The time we spend with the Galactic Horde and Prime throws absolutely everything that we know about Hordak into a whole new context. Now all those traits that made him a generic villain are actually hugely effective characterization! And what that characterization is telling us is that Hordak had already moved much farther away from Prime than we (or, probably, he) had realized, even long before he met Entrapta.
Horde Prime does not allow his underlings to have names, personalities, or any differences of appearance. Not only does Hordak allow this among his own troops, he chose a name for himself as well! Season 5 tells us that his very name is an act of blasphemy against his god. And yet Hordak took one for himself, and that name is part of the core identity he is able to hold on to when rebelling against Prime.
Horde Prime cast Hordak out when he showed signs of physical imperfections. Hordak not only keeps Imp (who is by all appearances a failed clone or similar experiment) around, he treats Imp more gently than we see him treat anybody or anything before Entrapta. Imp is not simply "generic evil guy's minion," he is proof of Hordak's capacity for compassion, and evidence that Hordak cannot bring himself to cast aside "defects" as easily as Prime. Considering where Hordak came from, Imp's existence is a huge, flashing neon sign telling the audience this guy here is better than the hell that molded him, and we don't even realize it until 4 seasons after it's been shown to us!

Very cool, ND.
There's more, though. Hordak's red and black color scheme? His dark eye makeup and lipstick? Very Evil Overlord chic. But nope! These are actually expressions of individuality on a level that Hordak knows would be abhorrent to Prime!
Reading between the lines, I see this as Hordak desperately trying to reconcile two diametrically opposed beliefs in his head: (1) devotion to Prime, whose approval he desperately craves, and (2) maintaining some degree of unique personhood, of Hordak, from which to draw strength. Because a failed, defective clone cannot survive on a hostile world, cut off from the hivemind and from Prime's light. A failed clone cannot create an empire to offer Prime as tribute, nor build a spacetime portal from scraps and memory to call Prime back. A failed clone cannot create cybernetic armor to keep his hurting, weakened body alive; to force himself to keep going no matter what, to fight through the pain and the doubt by sheer force of will.
But maybe Hordak can.
And so there it is. Hordak had plenty of time to gain and explore his individuality while separated from Prime, but I think the reason he did it so effectively (while still deluding himself that Prime would forgive him for these little sins, if only Hordak could prove his value) is because he had to.
Wrong Hordak gained his individuality surrounded by kind, quirky people who took care of him; Hordak was ripped from the hivemind by Prime himself and had to fight for his survival against all odds. And that produced a dangerous and damaging foe for Etheria. But it also produced the one clone with the strength of will to defy Prime himself.
This is long and rambling, but ultimately my point is that 1) I love Hordak, and 2) I love love love love that the show was so clever about his characterization. We learn so much about him and how much progress he's already made in breaking from his psycho abusive cult upbringing, and we don't even recognize it until the show wants us to. Hordak had come so far, all on his own, before he met Entrapta. She just helped push him over the edge and finally realize (at least consciously) that Prime's worldview might not be the correct one.
Idk, I just don't know if I've ever seen all the trappings of Basic 80's Villain(TM) so successfully subverted, where looking back 4 seasons later is actually a smack in the face with the "effective character building" stick. Amazing.

#spop #she ra #she ra and the princesses of power #hordak #entrapdak #entrapta #horde prime #Spacebat #Deep character analysis #Gotta love clever writing #Seriously I could go on about this show for ages #I just love the characterization for everyone but especially Hordak #Best spacebat #I mean I love Wrong Hordak too but you know
Text
Generative AI Is Bad For Your Creative Brain
In the wake of early announcing that their blog will no longer be posting fanfiction, I wanted to offer a different perspective than the ones I’ve been seeing in the argument against the use of AI in fandom spaces. Often, I’m seeing the arguments that the use of generative AI or Large Language Models (LLMs) make creative expression more accessible. Certainly, putting a prompt into a chat box and refining the output as desired is faster than writing a 5000 word fanfiction or learning to draw digitally or traditionally. But I would argue that the use of chat bots and generative AI actually limits - and ultimately reduces - one’s ability to enjoy creativity.
Creativity, defined by the Cambridge Advanced Learner’s Dictionary & Thesaurus, is the ability to produce or use original and unusual ideas. By definition, the use of generative AI discourages the brain from engaging with thoughts creatively. ChatGPT, character bots, and other generative AI products have to be trained on already existing text. In order to produce something “usable,” LLMs analyze patterns within text to organize information into what the computer has been trained to identify as “desirable” outputs. These outputs are not always accurate due to the fact that computers don’t “think” the way that human brains do. They don’t create. They take the most common and refined data points and combine them according to predetermined templates to assemble a product. In the case of chat bots that are fed writing samples from authors, the product is not original - it’s a mishmash of the writings that were fed into the system.
Dialectical Behavioral Therapy (DBT) is a therapy modality developed by Marsha M. Linehan based on the understanding that growth comes when we accept that we are doing our best and we can work to better ourselves further. Within this modality, a few core concepts are explored, but for this argument I want to focus on Mindfulness and Emotion Regulation. Mindfulness, put simply, is awareness of the information our senses are telling us about the present moment. Emotion regulation is our ability to identify, understand, validate, and control our reaction to the emotions that result from changes in our environment. One of the skills taught within emotion regulation is Building Mastery - putting forth effort into an activity or skill in order to experience the pleasure that comes with seeing the fruits of your labor. These are by no means the only mechanisms of growth or skill development; however, I believe that mindfulness, emotion regulation, and building mastery are a large part of the core of creativity. When someone uses generative AI to imitate fanfiction, roleplay, fanart, etc., the core experience of creative expression is undermined.
Creating engages the body. As a writer who uses pen and paper as well as word processors while drafting, I had to learn how my body best engages with my process. The ideal pen and paper, the fact that I need glasses to work on my computer, and the height of the table all factor into how I create. I don’t use audio recordings or transcriptions because that’s not a skill I’ve cultivated, but other authors use those tools as a way to assist their creative process. I can’t speak with any authority to the experience of visual artists, but my understanding is that the feedback and feel of their physical tools, the programs they use, and many other factors are not just part of how they learned their craft, they are essential to their art.
Generative AI invites users to bypass mindfully engaging with the physical act of creating. Part of becoming a person who creates from the vision in one’s head is the physical act of practicing. How did I learn to write? By sitting down and making myself write, over and over, word after word. I had to learn the rhythms of my body, and to listen when pain tells me to stop. I do not consider myself a visual artist - I have not put in the hours to learn to consistently combine line and color and form to show the world the idea in my head.
But I could.
Learning a new skill is possible. But one must be able to regulate one’s unpleasant emotions to be able to get there. The emotion that gets in the way of most people starting their creative journey is anxiety. Instead of a focus on “fear,” I like to define this emotion as “unpleasant anticipation.” In Atlas of the Heart, Brene Brown identifies anxiety as both a trait (a long term characteristic) and a state (a temporary condition). That is, we can be naturally predisposed to be impacted by anxiety, and experience unpleasant anticipation in response to an event. And the action urge associated with anxiety is to avoid the unpleasant stimulus.
Starting a new project, developing a new skill, and leaning into a creative endeavor can inspire anxiety and cause people to react to it. There is an unpleasant anticipation of things not turning out exactly correctly, of being judged negatively, of being unnoticed or even ignored. There is a lot less anxiety to be had in submitting a prompt to a machine than in looking at a blank page and possibly making what could be a mistake. Unfortunately, the more something is avoided, the more anxiety is generated when it comes up again. Using generative AI doesn’t encourage starting a new project and learning a new skill - in fact, it makes the prospect more distressing to the mind, and encourages further avoidance of developing a personal creative process.
One of the best ways to reduce anxiety about a task, according to DBT, is for a person to do that task. Opposite action is a method of reducing the intensity of an emotion by going against its action urge. The action urge of anxiety is to avoid, and so opposite action encourages someone to approach the thing they are anxious about. This doesn’t mean that everyone who has anxiety about creating should make themselves write a 50k word fanfiction as their first project. But in order to reduce anxiety about dealing with a blank page, one must face and engage with a blank page. Even a single sentence fragment, two lines intersecting, an unintentional drop of ink means the page is no longer blank. If those are still difficult to approach, a prompt, tutorial, or guided exercise can be used to reinforce the understanding that a blank page can be changed, slowly but surely, by your own hand.
(As an aside, I would discourage the use of AI prompt generators - these often use prompts that were already created by a real person without credit. Prompt blogs and posts exist right here on tumblr, as well as imagines and headcanons that people often label “free to a good home.” These prompts can also often be specific to fandom, style, mood, etc., if you’re looking for something specific.)
In the current social media and content consumption culture, it’s easy to feel like the first attempt should be a perfect final product. But creating isn’t just about the final product. It’s about the process. Bo Burnham’s Inside is phenomenal, but I think the outtakes are just as important. We didn’t get That Funny Feeling and How the World Works and All Eyes on Me because Bo Burnham woke up and decided to write songs in the same day. We got them because he’s been developing and honing his craft, as well as learning about himself as a person and artist, since he was a teenager. Building mastery in any skill takes time, and it’s often slow.
Slow is an important word when it comes to creating. The fact that skill takes time to develop and a final piece of art takes time regardless of skill is its own source of anxiety. Compared to @sentientcave, who writes about 2k words per day, I’m very slow. And for all the time it takes me, my writing isn’t perfect - I find typos after posting and sometimes my phrasing is awkward. But my writing is better than it was, and my confidence is much higher. I can sit and write for longer and longer periods, my projects are more diverse, and I’m sharing them with people, even before the final edits are done. And I only learned how to do this because I took the time to push through the discomfort of not being as fast or as skilled as I want to be in order to learn what works for me and what doesn’t.
Building mastery - getting better at a skill over time so that you can see your own progress - isn’t just about getting better. It’s about feeling better about your abilities. Confidence, excitement, and pride are important emotions to associate with our own actions. It teaches us that we are capable of making ourselves feel better by engaging with our creativity, a confidence that can be generalized to other activities.
Generative AI doesn’t encourage its users to try new things, to make mistakes, and to see what works. It doesn’t reward new accomplishments to encourage the building of new skills by connecting to old ones. The reward centers of the brain have nothing to respond to and associate with the user’s own actions. There is a short term input-reward pathway, but it’s only associated with using the AI prompter. It’s designed to encourage the user to come back over and over again, not to develop the skill to think and create for themselves.
I don’t know that anyone will change their minds after reading this. It’s imperfect, and I’ve summarized concepts that can take months or years to learn. But I can say that I learned something from the process of writing it. I see some of the flaws, and I can see how my essay writing has changed over the years. This might have been faster to plug into AI as a prompt, but I can see how much more confidence I have in my own voice and opinions. And that’s not something chatGPT can ever replicate.
Text
A Moment of Stillness
Pairing: Sylus/Reader
Summary:
In a rare moment of silence, Sylus can't help but wonder why his partner hasn't said a word
Masterlist
Word count: 993
A/N: This idea hasn't left my head in weeks and I love this fic sm. Reader is an artist!
The room held a quiet kind of tension, broken only by the faint rustle of paper and the occasional scratch of charcoal against parchment. Y/n couldn’t help but notice how the dim light from the window caught the silver strands of Sylus’ hair, casting soft highlights along its length. Something was mesmerizing about the way he held himself, completely absorbed in his work, the sharp focus in his red eyes making it feel as though he existed in a world separate from hers. She could study him for hours and still never fully grasp the layers behind that calm exterior.
She shifted slightly on the chaise, repositioning herself to get a better angle. Sylus didn’t budge, didn’t acknowledge her presence at all. His gaze remained fixed on the document, his fingers moving with the quiet precision of someone who had memorized the rhythm of every task they undertook. It was as though he were a machine, programmed only to work, to think, to achieve. At this moment, he wasn’t the man she knew—he was simply the version of him that existed when the world was too loud for anything else.
And yet, despite the stillness, there was an almost palpable energy in the atmosphere.
“You’ve been quiet,” he remarked, his tone a mix of observation and mild curiosity. “Longer than usual.”
His fingers turned another page, but there was no immediate shift in his posture, no sign that he expected her to respond. He seemed unfazed, as if whatever silence had settled between them was just another fleeting moment in their shared existence.
Y/n lifted a brow. “And that’s a problem?”
“You? Silent? Highly suspicious.”
She smirked but remained focused on her work, the soft drag of charcoal against paper continuing.
He glanced at her briefly before turning back to his work. “Should I be worried?”
“No.”
After a moment, Sylus shifted again. “If you’re plotting my demise, at least be creative about it.”
Y/n hummed thoughtfully. “Noted.”
Another beat of silence passed before Sylus finally turned his head slightly, catching the edge of her gaze. “Alright, what are you up to?”
Y/n clicked her tongue. “Now you’ve ruined it.”
His frown deepened. “Ruined what?”
She lifted the sketchbook slightly. “My sketch.”
Sylus blinked, then looked at the parchment in her hands. His side profile was captured in fine, careful strokes, the shadows accentuating the sharp angles of his features. His expression flickered—something unreadable, caught between intrigue and unease. Setting his paperwork aside, he moved to the chaise, his arm effortlessly pulling her against him as she shifted onto his lap. He took the sketchbook delicately in his hands, studying the piece.
Red eyes traced every detail with a quiet reverence, fingers ghosting along the edge of the parchment, careful to avoid smudging the lines. Y/n watched as he memorized it, committing it to the same careful vault in his mind where he kept every small thing about her.
She toyed with the edge of her sleeve, waiting for him to say something—anything. Sylus remained silent, expression composed, the same neutrality he offered the documents he had been reading moments before. Still, she knew him too well. The way his thumb lingered on the page, just a mere second too long. His grip on the parchment, both careful and firm, as if he weren’t quite ready to let go. A small, yet identifiable glimmer in his eyes, akin to the look he gave her when he thought she wasn’t watching. He liked it. He wouldn’t say it, not outright, but she knew he did.
“You’re doing that thing again,” Y/n mused, tilting her head against his shoulder, her fingers tracing idle patterns along his arm.
Sylus hummed, tilting his head slightly to rest on hers. His eyes remained trained on her drawing. “What thing?”
She nudged her chin toward the parchment in his hands. “Pretending you’re not touched by something when you absolutely are.”
His gaze flicked to her then, sharp but laced with unmistakable fondness. “I don’t pretend.”
She scoffed, amusement dancing in her eyes. “Oh, please. You’re holding that sketch like it’s the most precious thing you’ve ever seen.”
Sylus huffed a quiet laugh, shaking his head, but the barest hint of colour dusted across his cheeks. He finally placed the sketchbook down beside them, his arm wrapping fully around her waist, pulling her closer to him.
“Maybe I just like the way you see me,” he admitted, voice lower now, softer, like he was saying something he rarely allowed himself to.
Y/n’s breath hitched slightly, the warmth of his words curling around her. While his physical affection was something she was very familiar with, his words of affection were few and far between. When he did find his voice in these matters, it was with intention—every sentiment carefully chosen, irreplaceable.
She barely had a chance to respond before Sylus pressed his lips to her forehead. It was slow, lingering, a gesture of his love for her. His free hand came up, fingers threading gently through her hair as he kissed again, this time on her temple. A small smile played on her lips as she ran her fingers along the fabric of his shirt before resting them against his chest. She let her eyes close, savouring the warmth between them. “That’s more like it.”
“May I keep it?” Sylus cupped her cheek, thumb lightly brushing along her skin.
“After I colour it in.”
His lips twitched like he wanted to argue, but instead, he just sighed, shaking his head. “Fine. Just don’t take too long.”
Before he could pretend to be unaffected, she caught his lips with hers, silencing any further protest he might have had. He didn’t fight it—not even for a second. Instead, he kissed her like she was the only thing in the world that mattered as if the moment itself was something worth preserving.
Text
Young women, never believe people who say a bachelor's degree isn't valuable. Degrees in the liberal arts and humanities are VERY important. You can do almost anything with an art history degree depending on your willingness to be creative with work opportunities. That expensive piece of paper is your access to a network of professionals in your cohort and decades of alumnae, and leads for jobs/internships; it vouches for you as a successful and goal-oriented person, connects you directly to the guidance and advice of experts in your field, and trains you to both work hard towards high achievement and to think critically within elevated humanist principles. And you absolutely cannot get any of that on tiktok and tumblr and linkedin. Go to a junior college, night school, apply for transfer, special programs, whatever!
Note
What's your opinion on the Ao3 being scraped? Will you lock your fics?
Eugh as I'm sure you can imagine, I'm pretty annoyed about it.
I think mainly because fic authors don't get paid diddly-squat for their efforts. Every story on the archive is a labor of love that is posted there for nothing more than the joy of creation and sharing our works. And to just have it stolen to train some AI program is honestly so insulting and sucks the heart and soul out of authors' creations.
I cracked a few jokes about it in 'The Secrets of our Quills,' but being from the Bay Area, I genuinely loathe tech culture. Like actually hate it. This whole surge of AI from tech bros and its infringement on creative spaces is so obnoxious. Like just because you (these tech yahoos) don't have the capacity to create doesn't mean that it's okay to steal from those who DO have that ability and have your stupid program spit out some hollow, soulless, cheap replica. And like, to take free works and use them to generate profit and revenue is so stoopie and infuriating because like dawg, I also hate business bro grindset culture, something that is prevalent with a lot of boneheads in my class so idk I'm kind of being a hater rn so I'll stop HAHA
Anyways, in response to locking my fics, I won't. I think a significant portion of my reader base are people who don't have accounts, and I don't want to punish people for tech/business bro buffoonery. But yeah, thanks for caring enough about my opinion to ask!
Note
I remember a post you made about how Sam is good at flavoring mechanics story wise, like singing his bardic inspirations, and a different post you made about how Marisha has seemed to struggle a bit with character motivations, and also how Laura was being conflict averse this campaign. What do you think each player’s biggest strengths and weaknesses are?
I think I may have answered this elsewhere but it's certainly not tagged in a useful manner and I am currently on a train meandering through the dumb state of Connecticut so:
Sam is as mentioned very good at knowing what makes a good story and specifically good entertainment. However, I think he tends to be one of the cast members who most wants very clear GM guidance; his boldest move was in fact one that required GM involvement and pre-approval. Honestly these are two sides of the same coin; caring about the audience is good, but caring too much can be an issue.
Marisha, yeah, I think tends to lean towards very loose/find it along the way character concepts and I think she actually does better when there is a stronger structure for her character. I'm going to be honest - prior to Campaign 3 I'd say her strength was interpersonal relationships, and to be fair the interpersonal aspect in C3 for everyone was kind of a mess so it's not specifically her (and if future works are good I'll write it off as just One Bad Campaign/specific to one character and return to this as her strength), but as is, not so much as a player, but I do genuinely believe she is excellent at creative direction. I think her switch to pandemic programming was one of the strongest and smoothest I saw in the actual play industry (granted, limited) and I think most shows CR has done that haven't gone over well have been issues of scheduling or uh. fandom entitlement, more than any missteps on her part.
Laura's weakness is definitely nonconfrontation/worrying that she is doing the right thing. (She and Sam are not dissimilar in that, but Sam counterbalances it by embracing failure and she also struggles with letting a character fail). Marisha benefits from structure, as with Keyleth, and Laura benefits from having a character that either lets her turn those anxieties off, like Jester, or who leans into that being the character's fear, like Vex. Her strength is that she is one of the strongest actors in the cast if not the strongest (Laura, Travis, and Ashley tend to top my personal list in terms of sheer acting chops). Even when I've found her characters frustrating I've found her acting compelling, hence what I said about soap operas that one time.
Liam's weakness is, and this is extremely a personal preference - all are, but whereas I can make a semi-objective case for many of the others this is just me, being sappy as hell. I had difficulty with Vax for this precise reason while still generally enjoying the character's motivation and arc; Liam is in my opinion at his best when he deliberately goes for more restrained or antagonistic characters. Like, there's a time to be big and cheesy (eg, the final scene switch of friends around a table in the Chicago live show) but my taste is more sparing perhaps than most (for metaphorical cheese. not for real cheese). His strength I think is also kind of the flip of this coin; he is exceptionally collaborative. I think it's no coincidence that the twins and Caleb and Veth are two of the most enduring duos of "characters who came in together" or that he's managed to do successful romances with NPCs or with a guest actor; during C1 and C2 he was really good at drawing in Ashley when she returned from extended absences.
Taliesin's strength is that he has some of the most interesting and weird character concepts that lend humanity to people who would often be denied it by a narrative - the creator of a horrible weapon; someone literally without a soul; a gutter punk - and he commits to them whole-heartedly, even the uglier parts. I think his weakness is honestly kind of similar to Matt's DM weakness, which is that he straight up has maybe a completely random chance of properly clocking someone else's character's motivations. Like, either he absolutely gets it (eg, Vex) or he says things on Talks or 4SD about other people's characters that make me go "????" and then the actor for the other character goes "????" and I'm like oh ok I'm not wrong. (This perhaps most easily demonstrated with Shardgate, which, great moment, absolutely tops, but the fact that Taliesin the Player thought Matt was doing anything BUT signaling "DON'T FUCKING DO THIS" is ????? to me and always will be; I cannot see how he could have made that more clear.)
Travis, frankly, just Gets It, like, I think the Age of Umbra session zero is demonstrative of him just being able to immediately get to the core of a work. He's strong mechanically, he's strong as an actor, he is able to generate plot hooks from pretty much anything (RIP sidequests from Novos, in a different campaign you would have been great), and he is unafraid to take big swings. He's definitely made choices I am personally less into, but honestly my only real criticism is that he sometimes plays a more jokey character in between the Fjord, Cerrit, Nathaniel types that I prefer (and even then, Grog and Chetney go at least five times harder than their concepts would imply, and it is an error to dismiss them as jokes).
Ashley is also as mentioned a very strong actor, and I also think she is unheralded as a worldbuilder for her characters; Pike, Yasha, and Fearne all have characters or locations associated with them who, even when she's had limited screentime or the story has followed other paths, feel incredibly real. I also think that she's grown a lot mechanically over the course of C3 and shows a lot of promise and I'm interested in seeing what she does with Daggerheart. I think she can be indecisive; as mentioned, I don't really blame her in C3 for a number of reasons not to mention she does a great job of integrating that as a character concept, but I really do want to see her make bolder moves.
Text
The Surucuá community in the state of Pará is the first to receive an Amazonian Creative Laboratory, a compact mobile biofactory designed to help kick-start the Amazon’s bioeconomy.
Instead of simply harvesting forest-grown crops, traditional communities in the Amazon Rainforest can use the biofactories to process, package and sell bean-to-bar chocolate and similar products at premium prices.
Having a livelihood coming directly from the forest encourages communities to stay there and protect it rather than engaging in harmful economic activities in the Amazon.
The project is in its early stages, but it demonstrates what the Amazon’s bioeconomy could look like: an economic engine that experts estimate could generate at least $8 billion per year.
In a tent in the Surucuá community in the Brazilian Amazonian state of Pará, Jhanne Franco teaches 15 local adults how to make chocolate from scratch using small-scale machines instead of grinding the cacao beans by hand. As a chocolatier from another Amazonian state, Rondônia, Franco isn’t just an expert in cocoa production, but proof that the bean-to-bar concept can work in the Amazon Rainforest.
“[Here] is where we develop students’ ideas,” she says, gesturing to the classroom set up in a clearing in the world’s greatest rainforest. “I’m not here to give them a prescription. I want to teach them why things happen in chocolate making, so they can create their own recipes,” Franco tells Mongabay.
The training program is part of a concept developed by the nonprofit Amazônia 4.0 Institute, designed to protect the Amazon Rainforest. It was conceived in 2017 when two Brazilian scientists, brothers Carlos and Ismael Nobre, started thinking of ways to prevent the Amazon from reaching its impending “tipping point,” when deforestation turns the rainforest into a dry savanna.
Their solution is to build a decentralized bioeconomy rather than seeing the Amazon as a commodity provider for industries elsewhere. Investments would be made in sustainable, forest-grown crops such as cacao, cupuaçu and açaí, rather than cattle and soy, for which vast swaths of the forest have already been cleared. The profits would stay within local communities.
A study by the World Resources Institute (WRI) and the New Climate Economy, published in June 2023, analyzed 13 primary products from the Amazon, including cacao and cupuaçu, and concluded that even this small sample of products could grow the bioeconomy’s GDP by at least $8 billion per year.
To add value to these forest-grown raw materials requires some industrialization, leading to the creation of the Amazonian Creative Laboratories (LCA). These are compact, mobile and sustainable biofactories that incorporate industrial automation and artificial intelligence into the chocolate production process, allowing traditional communities to not only harvest crops, but also process, package and sell the finished products at premium prices.
The logic is simple: without an attractive income, people may be forced to sell or use their land for cattle ranching, soy plantations, or mining. On the other hand, if they can make a living from the forest, they have an incentive to stay there and protect it, becoming the Amazon’s guardians.
“The idea is to translate this biological and cultural wealth into economic activity that’s not exploitative or harmful,” Ismael Nobre tells Mongabay.
-via Mongabay News, January 2, 2024
#amazon #amazon rainforest #rainforest #chocolate #sustainability #ethical food #brazil #natural resources #good news #hope
Note
Somehow (Gaia knows how) Sephiroth managed to get a cult following. Like a genuine bonafide cult in the SOLDIER program that's quickly spreading to the infantry and other Shinra personnel. Sephiroth isn't sure where it started, but he's not complaining; the cult is less weird than the Silver Elite
He's touched. Unlike the Silver Elite, these people actually respect him, not some glorified hero Shinra carefully sculpted for public consumption. He even attends their meetings—where they mostly just talk about life and training—and offers advice to the younger ones. There's a sense of community, of belonging. It's nice. He doesn't understand why Lazard is standing in front of his desk at 9 AM on a Monday, looking like he's aged ten years overnight.
Lazard: Sephiroth, I need you to explain something to me.
Sephiroth: Of course.
Lazard: Why do you have a cult?
Sephiroth: I don't have a cult.
Lazard: Really? Because I have reports of people gathering in secret at 3 AM to chant your name in a candlelit conference room.
Sephiroth: That's just team bonding.
Lazard: A grunt tried to walk into the training room blindfolded because, quote, "I have faith in Sephiroth, and Sephiroth will guide my blade."
Sephiroth: That's just confidence. I encourage confidence in my men.
Lazard: They built a statue of you at the entrance.
Sephiroth: They're creative. I support artistic expression.
Lazard: They called it "The Great Ascension."
Sephiroth: Dramatic, but not technically illegal. We're doing nothing wrong, director. You're acting as if I'm forcing these people to do my evil bidding.
*Zack enters, wearing ceremonial robes*
Zack: Hey boss! The boys wanna know when we're kidnapping Hojo to sacrifice him to the Cosmic Mother for your peace of mind.
Sephiroth: I was thinking maybe after lunch.
Zack: Cool I'll let 'em know.
*Zack leaves*
Lazard: ………
#ff7 #ffvii #final fantasy 7 #sephiroth #final fantasy vii #ff7 crisis core #crisis core #crisis core headcanons
Text
AI “art” and uncanniness

TOMORROW (May 14), I'm on a livecast about AI AND ENSHITTIFICATION with TIM O'REILLY; on WEDNESDAY (May 15), I'm in NORTH HOLLYWOOD for a screening of STEPHANIE KELTON'S FINDING THE MONEY; FRIDAY (May 17), I'm at the INTERNET ARCHIVE in SAN FRANCISCO to keynote the 10th anniversary of the AUTHORS ALLIANCE.
When it comes to AI art (or "art"), it's hard to find a nuanced position that respects creative workers' labor rights, free expression, copyright law's vital exceptions and limitations, and aesthetics.
I am, on balance, opposed to AI art, but there are some important caveats to that position. For starters, I think it's unequivocally wrong – as a matter of law – to say that scraping works and training a model with them infringes copyright. This isn't a moral position (I'll get to that in a second), but rather a technical one.
Break down the steps of training a model and it quickly becomes apparent why it's technically wrong to call this a copyright infringement. First, the act of making transient copies of works – even billions of works – is unequivocally fair use. Unless you think search engines and the Internet Archive shouldn't exist, then you should support scraping at scale:
https://pluralistic.net/2023/09/17/how-to-think-about-scraping/
And unless you think that Facebook should be allowed to use the law to block projects like Ad Observer, which gathers samples of paid political disinformation, then you should support scraping at scale, even when the site being scraped objects (at least sometimes):
https://pluralistic.net/2021/08/06/get-you-coming-and-going/#potemkin-research-program
After making transient copies of lots of works, the next step in AI training is to subject them to mathematical analysis. Again, this isn't a copyright violation.
Making quantitative observations about works is a longstanding, respected and important tool for criticism, analysis, archiving and new acts of creation. Measuring the steady contraction of the vocabulary in successive Agatha Christie novels turns out to offer a fascinating window into her dementia:
https://www.theguardian.com/books/2009/apr/03/agatha-christie-alzheimers-research
Programmatic analysis of scraped online speech is also critical to the burgeoning formal analyses of the language spoken by minorities, producing a vibrant account of the rigorous grammar of dialects that have long been dismissed as "slang":
https://www.researchgate.net/publication/373950278_Lexicogrammatical_Analysis_on_African-American_Vernacular_English_Spoken_by_African-Amecian_You-Tubers
Since 1988, UCL Survey of English Language has maintained its "International Corpus of English," and scholars have plumbed its depth to draw important conclusions about the wide variety of Englishes spoken around the world, especially in postcolonial English-speaking countries:
https://www.ucl.ac.uk/english-usage/projects/ice.htm
The final step in training a model is publishing the conclusions of the quantitative analysis of the temporarily copied documents as software code. Code itself is a form of expressive speech – and that expressivity is key to the fight for privacy, because the fact that code is speech limits how governments can censor software:
https://www.eff.org/deeplinks/2015/04/remembering-case-established-code-speech/
Are models infringing? Well, they certainly can be. In some cases, it's clear that models "memorized" some of the data in their training set, making the fair use, transient copy into an infringing, permanent one. That's generally considered to be the result of a programming error, and it could certainly be prevented (say, by comparing the model to the training data and removing any memorizations that appear).
Not every seeming act of memorization is a memorization, though. While specific models vary widely, the amount of data from each training item retained by the model is very small. For example, Midjourney retains about one byte of information from each image in its training data. If we're talking about a typical low-resolution web image of say, 300kb, that would be one three-hundred-thousandth (0.00033%) of the original image.
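(As a quick back-of-the-envelope check of that figure, here is a minimal sketch assuming the one-retained-byte and 300,000-byte numbers above; these are illustrative assumptions, not measured values.)

```python
# Share of a single training image retained by the model, assuming
# ~1 byte retained out of a ~300 kB (300,000-byte) source image.
retained_bytes = 1
image_bytes = 300_000

fraction = retained_bytes / image_bytes
print(f"fraction of the image retained: {fraction:.7f}")  # 0.0000033
print(f"as a percentage: {fraction * 100:.5f}%")          # 0.00033%
```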
Typically in copyright discussions, when one work contains 0.0000033% of another work, we don't even raise the question of fair use. Rather, we dismiss the use as de minimis (short for de minimis non curat lex or "The law does not concern itself with trifles"):
https://en.wikipedia.org/wiki/De_minimis
Busting someone who takes 0.00033% of your work for copyright infringement is like swearing out a trespassing complaint against someone because the edge of their shoe touched one blade of grass on your lawn.
But some works or elements of work appear many times online. For example, the Getty Images watermark appears on millions of similar images of people standing on red carpets and runways, so a model that takes even an infinitesimal sample of each one of those works might still end up being able to produce a whole, recognizable Getty Images watermark.
The same is true for wire-service articles or other widely syndicated texts: there might be dozens or even hundreds of copies of these works in training data, resulting in the memorization of long passages from them.
This might be infringing (we're getting into some gnarly, unprecedented territory here), but again, even if it is, it wouldn't be a big hardship for model makers to post-process their models by comparing them to the training set, deleting any inadvertent memorizations. Even if the resulting model had zero memorizations, this would do nothing to alleviate the (legitimate) concerns of creative workers about the creation and use of these models.
So here's the first nuance in the AI art debate: as a technical matter, training a model isn't a copyright infringement. Creative workers who hope that they can use copyright law to prevent AI from changing the creative labor market are likely to be very disappointed in court:
https://www.hollywoodreporter.com/business/business-news/sarah-silverman-lawsuit-ai-meta-1235669403/
But copyright law isn't a fixed, eternal entity. We write new copyright laws all the time. If current copyright law doesn't prevent the creation of models, what about a future copyright law?
Well, sure, that's a possibility. The first thing to consider is the possible collateral damage of such a law. The legal space for scraping enables a wide range of scholarly, archival, organizational and critical purposes. We'd have to be very careful not to inadvertently ban, say, the scraping of a politician's campaign website, lest we enable liars to run for office and renege on their promises, while they insist that they never made those promises in the first place. We wouldn't want to abolish search engines, or stop creators from scraping their own work off sites that are going away or changing their terms of service.
Now, onto quantitative analysis: counting words and measuring pixels are not activities that you should need permission to perform, with or without a computer, even if the person whose words or pixels you're counting doesn't want you to. You should be able to look as hard as you want at the pixels in Kate Middleton's family photos, or track the rise and fall of the Oxford comma, and you shouldn't need anyone's permission to do so.
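To make the point concrete, here's a rough sketch of what tracking the Oxford comma might look like; the regexes and the corpus are assumptions of mine, not any published methodology:

```python
import re

SERIAL = re.compile(r"\b\w+, \w+, (?:and|or) \w+")     # "x, y, and z"
NON_SERIAL = re.compile(r"\b\w+, \w+ (?:and|or) \w+")  # "x, y and z"

def oxford_comma_share(text: str) -> float:
    """Rough share of three-item lists written with the serial comma."""
    with_comma = len(SERIAL.findall(text))
    without = len(NON_SERIAL.findall(text))
    total = with_comma + without
    return with_comma / total if total else 0.0

# Hypothetical usage over a corpus keyed by year:
# for year, text in corpus_by_year.items():
#     print(year, round(oxford_comma_share(text), 2))
```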
Finally, there's publishing the model. There are plenty of published mathematical analyses of large corpuses that are useful and unobjectionable. I love me a good Google n-gram:
https://books.google.com/ngrams/graph?content=fantods%2C+heebie-jeebies&year_start=1800&year_end=2019&corpus=en-2019&smoothing=3
And large language models fill all kinds of important niches, like the Human Rights Data Analysis Group's LLM-based work helping Innocence Project New Orleans extract data from wrongful conviction case files:
https://hrdag.org/tech-notes/large-language-models-IPNO.html
So that's nuance number two: if we decide to make a new copyright law, we'll need to be very sure that we don't accidentally crush these beneficial activities that don't undermine artistic labor markets.
This brings me to the most important point: passing a new copyright law that requires permission to train an AI won't help creative workers get paid or protect our jobs.
Getty Images pays photographers the least it can get away with. Publishers' contracts have transformed, by inches, into miles-long, ghastly rights grabs that take everything from writers but still shift legal risks onto them:
https://pluralistic.net/2022/06/19/reasonable-agreement/
Publishers like the New York Times bitterly oppose their writers' unions:
https://actionnetwork.org/letters/new-york-times-stop-union-busting
These large corporations already control the copyrights to gigantic amounts of training data, and they have means, motive and opportunity to license these works for training a model in order to pay us less, and they are engaged in this activity right now:
https://www.nytimes.com/2023/12/22/technology/apple-ai-news-publishers.html
Big games studios are already acting as though there were a copyright in training data, requiring their voice actors to begin every recording session with words to the effect of "I hereby grant permission to train an AI with my voice." If you don't like it, you can hit the bricks:
https://www.vice.com/en/article/5d37za/voice-actors-sign-away-rights-to-artificial-intelligence
If you're a creative worker hoping to pay your bills, it doesn't matter whether your wages are eroded by a model produced without paying your employer for the right to do so, or whether your employer got to double dip by selling your work to an AI company to train a model, and then used that model to fire you or erode your wages:
https://pluralistic.net/2023/02/09/ai-monkeys-paw/#bullied-schoolkids
Individual creative workers rarely have any bargaining leverage over the corporations that license our copyrights. That's why copyright's 40-year expansion (in duration, scope, statutory damages) has resulted in larger, more profitable entertainment companies, and lower payments – in real terms and as a share of the income generated by their work – for creative workers.
As Rebecca Giblin and I write in our book Chokepoint Capitalism, giving creative workers more rights to bargain with, when the other side of the table is a giant corporation that controls access to our audiences, is like giving your bullied schoolkid extra lunch money – it's just a roundabout way of transferring that money to the bullies:
https://pluralistic.net/2022/08/21/what-is-chokepoint-capitalism/
There's an historical precedent for this struggle – the fight over music sampling. 40 years ago, it wasn't clear whether sampling required a copyright license, and early hip-hop artists took samples without permission, the way a horn player might drop a couple bars of a well-known song into a solo.
Many artists were rightfully furious over this. The "heritage acts" (the music industry's euphemism for "Black people") who were most sampled had been given very bad deals and had seen very little of the fortunes generated by their creative labor. Many of them were desperately poor, despite having made millions for their labels. When other musicians started making money off that work, they got mad.
In the decades that followed, the system for sampling changed, partly through court cases and partly through the commercial terms set by the Big Three labels: Sony, Warner and Universal, who control 70% of all music recordings. Today, you generally can't sample without signing up to one of the Big Three (they are reluctant to deal with indies), and that means taking their standard deal, which is very bad, and also signs away your right to control your samples.
So a musician who wants to sample has to sign the bad terms offered by a Big Three label, and then hand $500 out of their advance to one of those Big Three labels for the sample license. That $500 typically doesn't go to another artist – it goes to the label, which shares it out among its executives and investors. This is a system that makes every artist poorer.
But it gets worse. Putting a price on samples changes the kind of music that can be economically viable. If you wanted to clear all the samples on an album like Public Enemy's "It Takes a Nation of Millions To Hold Us Back," or the Beastie Boys' "Paul's Boutique," you'd have to sell every CD for $150, just to break even:
https://memex.craphound.com/2011/07/08/creative-license-how-the-hell-did-sampling-get-so-screwed-up-and-what-the-hell-do-we-do-about-it/
Sampling licenses don't just make every artist financially worse off, they also prevent the creation of music of the sort that millions of people enjoy. But it gets even worse. Some older, sample-heavy music can't be cleared. Most of De La Soul's catalog wasn't available for 15 years, and even though some of their seminal music came back in March 2023, the band's frontman Trugoy the Dove didn't live to see it – he died in February 2023:
https://www.vulture.com/2023/02/de-la-soul-trugoy-the-dove-dead-at-54.html
This is the third nuance: even if we can craft a model-banning copyright system that doesn't catch a lot of dolphins in its tuna net, it could still leave artists worse off.
Back when sampling started, it wasn't clear whether it would ever be considered artistically important. Early sampling was crude and experimental. Musicians who trained for years to master an instrument were dismissive of the idea that clicking a mouse was "making music." Today, most of us don't question the idea that sampling can produce meaningful art – even musicians who believe in licensing samples.
Having lived through that era, I'm prepared to believe that maybe I'll look back on AI "art" and say, "damn, I can't believe I never thought that could be real art."
But I wouldn't give odds on it.
I don't like AI art. I find it anodyne, boring. As Henry Farrell writes, it's uncanny, and not in a good way:
https://www.programmablemutter.com/p/large-language-models-are-uncanny
Farrell likens the work produced by AIs to the movement of a Ouija board's planchette, something that "seems to have a life of its own, even though its motion is a collective side-effect of the motions of the people whose fingers lightly rest on top of it." This is "spooky-action-at-a-close-up," transforming "collective inputs … into apparently quite specific outputs that are not the intended creation of any conscious mind."
Look, art is irrational in the sense that it speaks to us at some non-rational, or sub-rational level. Caring about the tribulations of imaginary people or being fascinated by pictures of things that don't exist (or that aren't even recognizable) doesn't make any sense. There's a way in which all art is like an optical illusion for our cognition, an imaginary thing that captures us the way a real thing might.
But art is amazing. Making art and experiencing art makes us feel big, numinous, irreducible emotions. Making art keeps me sane. Experiencing art is a precondition for all the joy in my life. Having spent most of my life as a working artist, I've come to the conclusion that the reason for this is that art transmits an approximation of some big, numinous irreducible emotion from an artist's mind to our own. That's it: that's why art is amazing.
AI doesn't have a mind. It doesn't have an intention. The aesthetic choices made by AI aren't choices, they're averages. As Farrell writes, "LLM art sometimes seems to communicate a message, as art does, but it is unclear where that message comes from, or what it means. If it has any meaning at all, it is a meaning that does not stem from organizing intention" (emphasis mine).
Farrell cites Mark Fisher's The Weird and the Eerie, which defines "weird" in easy-to-understand terms ("that which does not belong") but really grapples with "eerie."
For Fisher, eeriness is "when there is something present where there should be nothing, or there is nothing present when there should be something." AI art produces the seeming of intention without intending anything. It appears to be an agent, but it has no agency. It's eerie.
Fisher talks about capitalism as eerie. Capital is "conjured out of nothing" but "exerts more influence than any allegedly substantial entity." The "invisible hand" shapes our lives more than any person. The invisible hand is fucking eerie. Capitalism is a system in which insubstantial non-things – corporations – appear to act with intention, often at odds with the intentions of the human beings carrying out those actions.
So will AI art ever be art? I don't know. There's a long tradition of using random or irrational or impersonal inputs as the starting point for human acts of artistic creativity. Think of divination:
https://pluralistic.net/2022/07/31/divination/
Or Brian Eno's Oblique Strategies:
http://stoney.sb.org/eno/oblique.html
I love making my little collages for this blog, though I wouldn't call them important art. Nevertheless, piecing together bits of other peoples' work can make fantastic, important work of historical note:
https://www.johnheartfield.com/John-Heartfield-Exhibition/john-heartfield-art/famous-anti-fascist-art/heartfield-posters-aiz
Even though painstakingly cutting out tiny elements from others' images can be a meditative and educational experience, I don't think that using tiny scissors or the lasso tool is what defines the "art" in collage. If you can automate some of this process, it could still be art.
Here's what I do know. Creating an individual bargainable copyright over training will not improve the material conditions of artists' lives – all it will do is change the relative shares of the value we create, shifting some of that value from tech companies that hate us and want us to starve to entertainment companies that hate us and want us to starve.
As an artist, I'm foursquare against anything that stands in the way of making art. As an artistic worker, I'm entirely committed to things that help workers get a fair share of the money their work creates, feed their families and pay their rent.
I think today's AI art is bad, and I think tomorrow's AI art will probably be bad, but even if you disagree (with either proposition), I hope you'll agree that we should be focused on making sure art is legal to make and that artists get paid for it.
Just because copyright won't fix the creative labor market, it doesn't follow that nothing will. If we're worried about labor issues, we can look to labor law to improve our conditions. That's what the Hollywood writers did, in their groundbreaking 2023 strike:
https://pluralistic.net/2023/10/01/how-the-writers-guild-sunk-ais-ship/
Now, the writers had an advantage: they are able to engage in "sectoral bargaining," where a union bargains with all the major employers at once. That's illegal in nearly every other kind of labor market. But if we're willing to entertain the possibility of getting a new copyright law passed (one that won't make artists better off), why not the possibility of passing a new labor law (one that will)? Sure, our bosses won't lobby alongside us for more labor protection, the way they would for more copyright (think for a moment about what that says about who benefits from copyright versus labor law expansion).
But all workers benefit from expanded labor protection. Rather than going to Congress alongside our bosses from the studios and labels and publishers to demand more copyright, we could go to Congress alongside every kind of worker, from fast-food cashiers to publishing assistants to truck drivers to demand the right to sectoral bargaining. That's a hell of a coalition.
And if we do want to tinker with copyright to change the way training works, let's look at collective licensing, which can't be bargained away, rather than individual rights that can be confiscated at the entrance to our publisher, label or studio's offices. These collective licenses have been a huge success in protecting creative workers:
https://pluralistic.net/2023/02/26/united-we-stand/
Then there's copyright's wildest wild card: The US Copyright Office has repeatedly stated that works made by AIs aren't eligible for copyright, which is the exclusive purview of works of human authorship. This has been affirmed by courts:
https://pluralistic.net/2023/08/20/everything-made-by-an-ai-is-in-the-public-domain/
Neither AI companies nor entertainment companies will pay creative workers if they don't have to. But for any company contemplating selling an AI-generated work, the fact that it is born in the public domain presents a substantial hurdle, because anyone else is free to take that work and sell it or give it away.
Whether or not AI "art" will ever be good art isn't what our bosses are thinking about when they pay for AI licenses: rather, they are calculating that they have so much market power that they can sell whatever slop the AI makes, and pay less for the AI license than they would make for a human artist's work. As is the case in every industry, AI can't do an artist's job, but an AI salesman can convince an artist's boss to fire the creative worker and replace them with AI:
https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain
They don't care if it's slop – they just care about their bottom line. A studio executive who cancels a widely anticipated film prior to its release to get a tax credit isn't thinking about artistic integrity. They care about one thing: money. The fact that AI works can be freely copied, sold or given away may not mean much to a creative worker who actually makes their own art, but I assure you, it's the only thing that matters to our bosses.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/05/13/spooky-action-at-a-close-up/#invisible-hand
#pluralistic#ai art#eerie#ai#weird#henry farrell#copyright#copyfight#creative labor markets#what is art#ideomotor response#mark fisher#invisible hand#uncanniness#prompting
272 notes
Text
IP law as it is currently framed seems to me to be about outputs: is the thing you made obviously a duplicate of a competing product? Is the thing you made taking important and original features that make it obviously a ripoff? Regardless of what the inputs are, if the thing you make or sell bears no resemblance to the original, or isn't even the same kind of thing, it's not really a violation of IP rights. If I sell a table that consists of nothing but hashes of the individual sentences of Dune according to a proprietary algorithm, that's not (as far as I know) a copyright violation. It's not a competing product, it's not even vaguely recognizably the same. It's certainly not masquerading as a novel by Frank Herbert.
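As a toy version of that hashing example, using SHA-256 as a stand-in for the hypothetical proprietary algorithm and a placeholder string rather than any actual text from Dune:

```python
import hashlib
import re

def sentence_hash_table(text: str) -> list[str]:
    """Hash each sentence to a hex digest; the table itself contains no readable text."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    return [hashlib.sha256(s.encode("utf-8")).hexdigest() for s in sentences]

# Stand-in text rather than the novel itself:
print(sentence_hash_table("This is one sentence. Here is another."))
```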
And this makes sense insofar as IP law is a fairly recent creation in historical terms, designed to promote the arts and sciences. It's a set of commercial incentives, basically. Reframing all of IP law around the idea of inputs would be a drastic renegotiation of what it is and what it is for, and it would radically transform the jurisprudential philosophy behind it as a body of law: does this mean pastiche is now illegal? Is it illegal if your work is obviously inspired by Dune? Should it be? I could certainly see how big rightsholders might want to lobby to make even such inspiration illegal, since if you're buying indie superhero comic books, maybe you're not buying Marvel or DC ones.
But IP law doesn’t exist to prevent alteration or remix or modification of art. It doesn’t exist to prevent training computer programs on the public internet. I think in order to use IP law as an instrument to do that, you would have to break it, to make it do something it was not fundamentally designed to do, and you would open it up to become a vehicle of a legal regime far worse than anything the CEO of Disney could imagine in his darkest dreams right now.
Also, you know, as someone who writes and draws, I think the idea is morally objectionable! If you want your writing and art to be immune from giving inspiration or being altered in any way or being part of the great conversation of human culture, do it in private and burn it afterward. But putting it online and then complaining that people look at it, download it, and do stuff with it beyond passively looking at it seems to me to miss the point of creativity.
169 notes
Text
@starrrgazingbunny
Right??! Imagine how bright that red looks in a world that's all white and grey. White walls white furniture white floors. RED uniform, red blood.
I saw this addition on a post and I can't stop thinking about it in relation to Kamino and the clones? A lot of white just isn't good for humans generally (like thinking about things from an 'enriching enclosure' type thing. No colour is so far away from what we evolved with. Like even without the socialisation to know that it isn't right. Biologically, being human, how does that affect a person mentally?). Thinking about the fact that the first thing they do when they ship out is paint their armour in all these colours. On Kamino, inside is white, outside is grey (largely. Their education models likely had colour in. As did the tools they were given (and weapons). The food likely had some colour. But imagine 90% of your life being white or grey. Every wall, every door.)
Imagine how busy everything would seem, when you step on to another planet and see all that colour all at once.
Thinking about the parallel there between Kamino being so white, and Anakin on Tatooine. Everything some shade of beige, cream or yellow. His clothes, the sand, the buildings, the suns burning a yellow so bright you can't look at them. And then Anakin going to Naboo. Their extravagant fashion. All the different colours. It's funny that two of the most influential people in Anakin's life are from Naboo. Long after Obi-Wan is gone, Palpatine is still pulling the puppet strings, Padmé is still haunting him. I wonder if it was Padmé he fell in love with first, or her planet. Her life.
#Always think of how strange that must have been#to only ever see black hair and then to see a completely different colour on your body#do you think they saw their trainers' hair? i reckon a lot of them wore helmets#and yeah. the creativity in their armour paint and murals/wing art kills me every time#cause that wouldn't have been part of their training program? it helps with identification in the field obviously#but if the Kaminoans thought it important their armour would have been produced already marked up#art is such an innate human thing. breaks my heart to see that expression. they are so human!!!
25 notes
Text
The Boys Preference: Being An Assassin Who Joins The team
A/N: I'M OBSESSED WITH THIS IDEA :D I have so many ideas attached to it, so many posts brewing, so I really hope you like it!!! I kinda think of it similar to Red Room from MCU and also the Aunts from The Handmaid's Tale, if that makes any sense lol. Feedback is always appreciated 💜💜💜
Butcher likes you. He sees the emotion you evoke from the rest of the team and he thinks you're a perfect fit. You're not sure what to think of him. If he was one of your siblings, he wouldn't have lasted long. Selfish, arrogant, self-righteous. That's the kind of thing that got you punished, that got you killed. Beneath it though, to a degree, you can tell he really cares for everyone. It might be twisted and warped and at this point unrecognizable, but it's there. He enjoys hearing about your kills, especially when they were Supes. You weren't just good at what you did, you were the best. You were creative, too. Imaginative. He brags to you about killing Translucent, how they did it. You're not terribly impressed, but for his benefit you put on a show. You're a little weird, but he likes that. You're exceptional in some areas (like going undercover) and mediocre in others (like figuring out how to befriend Hughie). He doesn't judge what you've done. It's just how you were raised. He tries to do a background check on you, but there's nothing. The name your mentors gave you wasn't the one your parents, if there even were parents, gave you. You were a blank slate. It was both riveting and terribly dangerous.
Hughie has the most questions. He can see just from your appearance, all the scars on your face and neck, all the ones he can't see, that you've been to hell and back. You hold yourself rigid, tight. Even when you seem relaxed you aren't. You're constantly looking for the nearest exit or weapon, scanning every room you walk into. It spooks him a little. He lets his imagination get the better of him, something he knows he shouldn't do, but he just can't help it. You like Hughie instantly. And not just because he's too awkward and frail to get in a proper punch, too soft to ever truly hurt you. He seems sweet, naive, like he needs protecting. He reminds you of the kids in the program who didn't make it. You protected them, too. Or, at least tried to. You're as friendly as you let yourself be, taking an interest in whatever he's doing, becoming his shadow. Everyone takes notice, but he doesn't seem to mind. He likes your company. The rest of the team hopes you'll open up to him, tell him what you won't tell everyone else, but he refuses to pry. If you talk, that's great. If not, oh well. If you want to hang out by his side, that works too.
Annie has nothing against you, but you definitely keep your distance, especially at first. You've killed more than enough Supes to prove your competency, more than you can name. You're not sure what they tell each other, but you imagine it similar to the system you grew up with: word spread quickly, you all felt it when one of your own were killed. There was an alliance that went unsaid. If you could avenge your fallen siblings, you would. If she found out who you killed, how many, would she come after you? Eventually you learn they're not all connected like that, that Annie's on your side. Still, you kind of see her as the embodiment of everything you're not. She's sweet, caring, and honest. You've been lying all your life, you can't tell what's real and what isn't. Hughie likes her, loves her, so that definitely helps in developing your relationship. Annie knows about your past, what little you share of it, but she doesn't judge. Maybe, at a time, she would have, but after being part of The Boys so long, that kind of thing kind of loses its shock power. You did what you had to, what you were trained to. Weren't you all guilty of a version of that?
M.M., similar to his initial feelings about Kimiko, isn't too fond of you. He doesn't mean to judge as harshly as he does, but just by the looks of you, you mean trouble. Hughie tries to talk to him, but he just can't get past your quirks. You're so naive about certain things (what music you like to listen to, shows you've never seen, how to form normal friendships, what jokes are funny) and so knowledgeable about other things (the fastest way to bleed out a man, how to make a murder look like a suicide, the number of languages you were taught to better go after your targets). It just doesn't sit right with him. Knowing this, sensing this, you keep your distance, careful not to upset him further, the same way you learned to be with your mentors. Be invisible to him, to them. It isn't until you give him sound advice for protecting Monique and Janine, something he never would have thought of, that he reconsiders his feelings. He's still not a big fan, but he can see why you belong on the team, why your skills are beneficial, even if some of the stuff you say so lightly gives him the heebie-jeebies, like the time you reminisced about killing someone with just a wooden spoon.
Frenchie doesn't really see you as an assassin. They've all killed people; it doesn't seem like such a big deal. He doesn't love the idea of you being around Kimiko. She's made a life for herself beyond what she's gone through. It feels like you're still learning how to be without it. Without your mentors, your siblings. He knows there's no one better to give you a chance than him, so he's very open, inviting. You talk to him exclusively in French. You tell him small parts of your past, and he's grateful for that. In return, he tells you about his own childhood. When he shares the scars from his father, you tell him about the ones on your face and neck, how you deserved them for disobedience. He doesn't tell anyone else, knowing it was only meant for him to hear. You even speak affectionately about your mentors, the ones who were kind and only hurt you when you needed it. He isn't shocked, at least not outwardly, not wanting you to feel strange or odd. Because you don't speak French with an accent, it's hard for him to decipher where you're from. All over, you say, and though you know it's a non-answer, it's the truth. You've been all over the world. You just happened to end up in New York.
Kimiko becomes your friend immediately. Though you gravitate towards Hughie because he's sweet, you like Kimiko because you can tell you're very similar. She doesn't have to say anything, you just know. You recognize the signs. The rest of the team, namely Frenchie, doesn't think it's a great idea for you to be alone with her. When you aren't cold and standoffish, you're far too casual about what you've done, pointing to famous Supes in old movies or old politicians, reminiscing about how you killed them, how you made it look like a suicide. Or you talk about growing up, how you were punished for crying even when your friends were killed, pointing out the scars they left. She's not upset by it, she's glad you're talking about it. It makes her own upbringing feel normal. You learn sign language quickly, another language you can add to your list, telling her more than anyone else. In return you listen to her, whatever she wants to share, grateful for someone who doesn't look at you like a monster or a freak. You like listening to her go on about Frenchie, her feelings for him. It's a piece of childhood you never got to take part in. It's nice.
#preference#headcanon#billy butcher#billy butcher x reader#hughie campbell#hughie campbell x reader#annie january#annie january x reader#mm#mm x reader#marvin milk#marvin milk x reader#frenchie#frenchie x reader#kimiko miyashiro#kimiko miyashiro x reader#the boys#the boys x reader#series#assassin!reader
244 notes