#input optimization
Explore tagged Tumblr posts
Text
Agribusiness Talk: Understanding Seed Costs and Their Impact on Farming
Explore how seed costs impact farming profitability. Understand the breakdown of seed costs for crops like tomatoes, onions, maize, and cabbages, and learn strategies to optimize yields for better returns. Wondering if seed costs are limiting your farming success? Dive into an in-depth analysis of seed costs and discover how maximizing yields can reduce their impact on your overall profits. Learn…
#Agribusiness#agricultural economics#agricultural inputs#agronomic practices#Cabbage farming#chemical costs#cost of seeds#crop investment#crop spacing#crop value#crop yields#farm gate prices#farm management#farming expenses#farming profitability#farming strategies#fertilizer costs#high-yield farming#hybrid seeds#input optimization#maize production#onion farming#seed costs#seed price analysis#seedling cost percentage#seedling germination#seedling transplanting#sustainable farming#tomato seedlings#Yield Optimization
0 notes
Text
i started replaying hollow knight on pc last night and it's a fuckin trip my guys it's somewhat easier on pc and in some ways also extremely difficult
#hollow knight#specifically it's the learning to fight on a keyboard#rather than with my drifty nintendo joysticks#i had configure the inputs as well bc they were not optimal for my hand eye coordination
10 notes
Note
How does your thing work??? I thought at first you were reading the post in as a string and doing like a key-dictionary thing to match a pattern with a Pokémon name but it didn’t do it with the Ask containing the pokérap. I am v curious and would love to hear about it if you have time
Pokemon detected :
How does your thing work??? I thought at first you were reading the post in as a string and doing like a key-dictionary thing to match a pattern with a Pokémon name but it didn’t do it with the Ask containing the pokérap. I am v curious and would love to hear about it if you have time
Ralts !
#pokemon#pokemon detector#ask the detector#okay so basically#this code is bugggy as hell for a lot of pokemon with double letters lol#but#basically i have two functions#the first parses the text depending on a pokemons name#i have a name list#and it grabs the first name and goes letter by letter to see if it's in the right order in the text i input#then the code runs for every pokemon name in the list that's the second function#it's far from optimized but so long as no one sends me the entire bee movie script it takes maybe .5 secs to run#and it returns a random pokemon from the list of the pokemon names in the text and then i have a thing that tells me if a specific pokemon#is in the list#bc sometimes i wanna be funny but a 10 lines text will give me like 900 pokemon#you can dm me for the code if you want lol it's not something i hold close or wtv it took me maybe a half hour
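The matching described in these tags, checking whether a Pokémon name's letters appear in the text in the right order, is a classic subsequence test. A minimal sketch in Python (function and variable names are illustrative, not the poster's actual code):

```python
import random

def appears_in_order(name, text):
    # Subsequence test: do the letters of `name` occur in `text`
    # in order (not necessarily adjacent)? The `in` check consumes
    # the iterator, so each letter must be found after the previous match.
    letters = iter(text.lower())
    return all(ch in letters for ch in name.lower())

def detect(text, names):
    # Run the check for every name in the list (the second function).
    return [n for n in names if appears_in_order(n, text)]

found = detect("rails to south", ["ralts", "mew"])
pick = random.choice(found) if found else None  # reply with a random hit
```

As the tags note, a loose check like this fires on far more names than a reader would expect, which is why a ten-line ask can "contain" hundreds of Pokémon.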
13 notes
Text
played all the trials and safe to say natlan dps just feel. Clunky as a whole huh
#mine.txt#like i can tell varesa wants specific combos so thats def a big reason why playing her feels clunky rn#but like. alhaitham has optimal combos too but i dont feel like im moving through sand trying to do his attacks#i think with varesa its the lag between input and hit thats messing with me#bc xiao and gaming feel pretty speedy during gameplay
5 notes
Text
Ok gonna rant for a bit in here. I've spent about 10-12 hours at this point working on a little game for a school project. It is going pretty well, all things considered. HOWEVER.
The code doesn't seem to run some of the rooms. I have five rooms, I have made it very clear in-code that the order is start room -> prep room one -> part one -> prep room 2 -> part two -> back to the start room.
Start room runs, but only sometimes does it change to prep room one when I tell it to. SOMETIMES it works just fine. Sometimes it doesn't go at all. There aren't any apparent problems with the code. Part one does not run at all. Again, I've detected no problems with the code and the game doesn't abort itself when it tries to run. It just says "entering main game loop" and never opens up the window. Everything AFTER part one runs fine (when I put it at the top of the hierarchy for bug testing). But I can't run part one, and that is a major problem because that's the room that's going to require a LOT more bug testing.
Part two? Gonna need a little tweaking to make it ideal, but it's very straightforward and doesn't take too much code. Both prep rooms and the start menu only have a few visuals and ONE way for the player to interact with them (clicking to the next room). But part one is much more complicated and I have no way to bug test it or optimize the gameplay. I don't know where the issue is. I'm going to commit homicide.
#ranting#gamemaker#I'm optimizing part two right now. BUT MY PART ONNNNNE#also I just realized I'm gonna have to make a list for the timers. grrrrrr I HATE lists#I'm going to make backgrounds and get music and sound effects between now and next class#so I can like. input them#yadda yadda you know#I'm trying to be efficient
2 notes
Text
Professor Layton speedruns are so funny because there's not really anything impressive skill-wise like a Hollow Knight speedrun or glitch-wise like a Mario 64 speedrun. Answer my puzzles and mash through dialogue fast boy
#like theres a level of planning like which puzzles are the fastest and easiest to input and#which puzzles are the fastest to reach for puzzle solve count#as well as memorization of which answers and doing things like setting the language to something that's faster like japanese#but it doesnt have a huge speedrunning community so i have no idea how optimized runs are#i should get into speedrunning professor layton#unfortunately speedrunning it kinda ruins replayability#bc the games are pretty replayable after some time because you dont remember all the puzzle answers#but speedrunning it forces you to know#book of kells
12 notes
Text
been using caffeine as a sensory intensifier instead of as stimulation on its own, and thats made it easier to apply it more efficiently. ive also figured out hunger is another intensifier. been so effective in moderating my attn span and energy. feels like i just found the second half of the owners manual
#when i'm right up on the edge of Optimal Sensory Input its harder to focus on small things because i need to get into a monotropic tunnel#i don't think out loud. i'm more coordinated and i move faster. tumblr is boring.#also found that propranolol is good at bringing me a couple steps back from a meltdown if i get too overwhelmed - even if ive had coffee#REALLY BIG. getting too close to a meltdown usually ruins my day.#audhd
1 note
Text
.
#oh man I still hate the Malenia fight which is a shame#I love her character#but the fact that she will just stand there for 30 seconds#not doing anything bc the ai is waiting for any input from you#is the most annoying shit#and then the hit on heal fuck off#rellana is just as quick but her fight is optimized so well
1 note
Text
Tech Breakdown: What Is a SuperNIC? Get the Inside Scoop!

The most recent development in the rapidly evolving digital realm is generative AI. The SuperNIC, a relatively new term, refers to one of the inventions that make it feasible.
What Is a SuperNIC?
SuperNICs are a new class of network accelerators created to accelerate hyperscale AI workloads on Ethernet-based clouds. Using remote direct memory access (RDMA) over Converged Ethernet (RoCE), they provide extremely fast network connectivity for GPU-to-GPU communication, with throughput of up to 400 Gb/s.
SuperNICs offer the following distinctive features:
High-speed packet reordering, which ensures that data packets are received and processed in the same order they were originally sent, preserving the sequential integrity of the data flow.
Advanced congestion management, which uses real-time telemetry data and network-aware algorithms to control and prevent congestion in AI networks.
Programmable compute on the input/output (I/O) path, which enables adaptation and extension of the network architecture in AI cloud data centers.
A low-profile, power-efficient design that handles AI workloads within constrained power budgets.
Full-stack AI optimization, spanning system software, communication libraries, application frameworks, networking, compute, and storage.
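As a toy illustration of the packet-reordering feature above: a real SuperNIC does this in hardware at line rate, but the buffering logic can be sketched in a few lines (all names here are invented for illustration):

```python
def release_in_order(packets):
    # Buffer out-of-order arrivals and release payloads strictly in
    # sequence order, preserving the data flow's sequential integrity.
    buffered, next_seq, released = {}, 0, []
    for seq, payload in packets:
        buffered[seq] = payload
        while next_seq in buffered:  # release every packet that is now ready
            released.append(buffered.pop(next_seq))
            next_seq += 1
    return released

release_in_order([(1, "b"), (0, "a"), (3, "d"), (2, "c")])  # ["a", "b", "c", "d"]
```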
NVIDIA recently unveiled the world's first SuperNIC purpose-built for AI computing, based on the BlueField-3 networking platform. It is a component of the NVIDIA Spectrum-X platform, where it integrates seamlessly with the Spectrum-4 Ethernet switch system.
Together, the BlueField-3 SuperNIC and the Spectrum-4 switch system form an accelerated computing fabric optimized for AI applications. Spectrum-X consistently delivers higher network efficiency than conventional Ethernet environments.
Yael Shenhav, vice president of DPU and NIC products at NVIDIA, stated, “In a world where AI is driving the next wave of technological innovation, the BlueField-3 SuperNIC is a vital cog in the machinery.” “SuperNICs are essential components for enabling the future of AI computing because they guarantee that your AI workloads are executed with efficiency and speed.”
The Changing Environment of Networking and AI
Large language models and generative AI are driving a seismic shift in artificial intelligence, opening new avenues and enabling computers to perform tasks that were previously out of reach.
GPU-accelerated computing plays a critical role in the development of AI by processing massive amounts of data, training huge AI models, and enabling real-time inference. While this increased computing capacity has created opportunities, Ethernet cloud networks have also been put to the test.
Traditional Ethernet, the internet’s foundational technology, was designed to link loosely coupled applications and provide broad compatibility. It was never intended for the demanding computational requirements of contemporary AI workloads, which involve rapidly transferring large amounts of data, tightly coupled parallel processing, and unusual communication patterns, all of which call for optimal network connectivity.
Basic network interface cards (NICs) were created with interoperability, universal data transfer, and general-purpose computing in mind. They were never intended to handle the special difficulties brought on by the high processing demands of AI applications.
Standard NICs lack the characteristics and capabilities needed for efficient data transmission, low latency, and the predictable performance that AI workloads require. SuperNICs, in contrast, are designed specifically for contemporary AI workloads.
Benefits of SuperNICs in AI Computing Environments
Data processing units (DPUs) deliver high throughput, low-latency network connectivity, and many other sophisticated capabilities. Since their introduction in 2020, DPUs have become increasingly common in cloud computing, largely because of their ability to isolate, accelerate, and offload computation from data center hardware.
SuperNICs and DPUs share many characteristics and functions; however, SuperNICs are specifically designed to accelerate networking for AI.
The performance of distributed AI training and inference depends heavily on the available network bandwidth. SuperNICs scale better than DPUs and can provide up to 400 Gb/s of network bandwidth per GPU.
When GPUs and SuperNICs are matched 1:1 in a system, AI workload efficiency may be greatly increased, resulting in higher productivity and better business outcomes.
Because SuperNICs are intended solely to accelerate networking for AI cloud computing, they use less processing power than a DPU, which requires substantial compute to offload applications from a host CPU. The reduced compute requirement also lowers power consumption, which is especially important in systems containing up to eight SuperNICs.
Another of the SuperNIC’s unique selling points is its specialized AI networking capability. Tightly coupled with an AI-optimized NVIDIA Spectrum-4 switch, it provides optimal congestion control, adaptive routing, and out-of-order packet handling, cutting-edge technologies that accelerate Ethernet-based AI cloud environments.
Transforming cloud computing with AI
The NVIDIA BlueField-3 SuperNIC is essential for AI-ready infrastructure because of its many advantages.
Maximum efficiency for AI workloads: Purpose-built for network-intensive, massively parallel computing, the BlueField-3 SuperNIC ensures that AI workloads run efficiently and free of bottlenecks.
Performance that is consistent and predictable: The BlueField-3 SuperNIC makes sure that each job and tenant in multi-tenant data centers, where many jobs are executed concurrently, is isolated, predictable, and unaffected by other network operations.
Secure multi-tenant cloud infrastructure: Data centers that handle sensitive data place a high premium on security. High security levels are maintained by the BlueField-3 SuperNIC, allowing different tenants to cohabit with separate data and processing.
Broad network infrastructure: The BlueField-3 SuperNIC is very versatile and can be easily adjusted to meet a wide range of different network infrastructure requirements.
Wide compatibility with server manufacturers: The BlueField-3 SuperNIC integrates easily with the majority of enterprise-class servers without using an excessive amount of power in data centers.
1 note
Text
AUFGH.
#no you have to understand...#CODING IS SO COOL.#i fucking made a thingy that GENERATES SENTENCES BASED ON USER INPUT. ME. i did that!!!#is it optimal or even functional??? probably not:))#BUT. WHO CARES.#this is so awesome oml.#YIPPEEE#sillyposting
0 notes
Text
the "problem" AI researchers have historically sought to solve is to simulate human intelligence. that is simultaneously the bedrock and the umbrella of that particular field. if we're using AI as a synonym for ML, then the goal can be sculpted further to simulating that human intelligence through relatively autonomous means. it illustrates a fundamental misunderstanding to throw this ~70 year old field of research out as something "invented" by silicon valley shareholders. why is it that moralistically blinded tumblr users treat ML as something magical (evil) in the same way that the valley reptilians they position themselves against treat it as something magical (good)?
You can't argue against a technology. No one has ever, ever, in the history of humanity, argued a technology out of existence. The closest we've come are nukes and human genetic engineering. Nukes exist and multiple countries have massive arsenals of them, but we've agreed not to use them because it would mean humanity's utter destruction. Human genetic engineering cuts right to the heart of a bunch of ethical questions about health, equality, identity, and so on, and also up until very recently genetic engineering has been a long and extremely expensive process. We'll see how long human genetic engineering remains taboo now that it's getting cheaper and easier. But these are absolute outliers. In the vast, vast majority of cases, I mean literally in virtually every single case, when people fight a new technology—for any reason—they lose.
There is no tenable "anti-AI art" position, just like there was never a tenable anti-loom position, or anti-railroad position, or anti-horseless carriage position. These things were doomed to fail absolutely from day one, as soon as the technology existed, and anti-AI art is doomed to fail just as utterly and completely. There is just no path here, if this is what you've hitched your wagon to I really do not know what to tell you.
#it's all built on top of an optimization problem#that is inherently amoral#can't begin with those thinking an (assuming gen) AI model needs to be constantly 'fed' input to continue existing#feed me seymour i forgot the results of my training#ai
4K notes
Text
My concepts for the development progress of an Iterators Puppet
-my ideas below
-Feasibility Study
[1]: The first autonomous control module. Any instructions had to be given manually through physical means (the keys), and outputs were shown on the screen. A very primitive system, but it did its job, proving the greater machine concept was achievable. While it may look like a lens above the monitor, that was a simple status gauge for benchmarking.
-Prototyping and Development
[2]: Now with the capability to communicate wirelessly and audibly to receive instructions and inputs. The system was no longer directly integrated into the facility, and resided on the first instance of an iterator's arm. This was considered a feat due to the complications of isolating the control module from the rest of the iterator's components while retaining processing power. A permanent connection/umbilical was still needed to sustain life and function, though.
To “talk” back, they were crafted with multidimensional projectors, the mobile arm allowing the angles and variance needed for projection. Only later in development were advanced speakers installed for clearer understanding; however, the extra computing power required to synthesize proper speech was found to strain the contained module, so this function saw rare use in the end.
[3]: At this point there was a change of perspective in the project. What were once machines meant simply to compute and simulate were now planned to be homes, caregivers, and providers. The further the project came to fruition, the more religious importance was placed upon these “random gods”. From this stance, the puppets not only had to manage and control their facilities, they had to communicate with the people and priests, and to represent benevolent beings who would bring their end and salvation. In this process iterators began to take a more humanoid shape, to better reflect their parents, and development focused on compacting the puppet closer to the size of an ancient for this purpose. This stage was the first to incorporate a cloak/clothing into the design considerations, furthering the resemblance. The cloak would hide the iterators' engineered bodies and give a body to their silhouette.
[4]: As bioengineering and mechanics were rapidly progressing due to the void fluid revolution, this allowed plenty of margin for developing the outer design of the iterator puppets. This prototype was the first to incorporate limbs for the purpose of body language. This was another step in the drive to give a body to their random gods.
-Final Iterations
[5]: First generation iterators carried the final redesign of the puppet bodies. Far different from the first designs, they are fully humanoid, shaped to be organic and as full of life as was possible at the time. Their center of sapience has fully settled within the body, as can be seen in their unconscious use of limbs without any direct communicative intent. It also shows in how they manage their work: many functions (which could be done with just an internal request) are operated through physical gestures of their limbs. Their puppet chambers also allow for fully comprehensive projection, where many of their working monitors are displayed, and iterators can be seen preferring to use their traversal arm to move between the current working projection windows.
These designs were hardy and nearly self-sufficient, requiring only minimal power from their umbilical to charge. (Internal power production was still limited, however; for this first generation, extensive batteries sufficed.)
[6]: Later generations incorporated advanced bioengineering not only internally but externally. While still a hardened shell, the body plates have been incorporated into the organics of the puppet, maintaining the protective requirements while barely leaving a trace of hinges or plates. This “soft” skin had drawbacks, such as reduced durability compared to the first generation, but this was offset by the greatly enhanced repair speed and capability this type of skin allowed.
Internal power generation was implemented in these late-generation models. If the need arose, the puppet could be disconnected from its umbilical and remain conscious for an undefined period of time. (This would, however, limit the operating capacity of the puppet when running self-sufficiently.) This greatly eased maintenance work, as the puppet could still run the greater facility wirelessly while work was done on the chamber, arm, or whatever else as needed.
2K notes
Text
Making a game where you need to go on a quest to obtain a randomly generated passcode, then deliberately making it so that brute-forcing the passcode by spending several minutes trying every combination with optimal input speed is, on average, just slightly faster than the fastest possible completion of the quest to obtain it legitimately.
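The balancing act described here is simple expected-value arithmetic: a uniformly random code is hit, on average, halfway through the search space. A hypothetical back-of-the-envelope sketch (the numbers below are made up for illustration):

```python
def avg_bruteforce_seconds(digits, symbols, seconds_per_try):
    # On average, a uniform random code is found after (n + 1) / 2 tries.
    n = symbols ** digits
    return (n + 1) / 2 * seconds_per_try

def max_code_space(quest_seconds, seconds_per_try):
    # Largest combination count whose average brute-force time does not
    # exceed the fastest legitimate completion of the quest.
    return int(2 * quest_seconds / seconds_per_try - 1)

avg_bruteforce_seconds(3, 8, 2.0)  # 513.0 seconds, about 8.5 minutes
max_code_space(600, 1.5)           # 799 combinations
```

A designer wanting brute force to land "just slightly faster" than a ten-minute quest would pick a combination count just below that bound.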
1K notes
Text
Rambling about Astarion bc im bored at work. I like Astarion because I think he is a genius take on The Evil RPG Companion, and is an especially great take on The Fixable Bad Guy. I don't think hes evil, but I do think Astarion is a genuinely bad person at the beginning, and I think Astarion is only drawn away from being a bad person - and experiences a great redemption arc - via active intervention from others. Astarion would not redeem himself without guidance; he is absolutely bent toward self destruction and evil at the beginning of the story.
I think comparing him with Shadowheart is what drew me to that conclusion. If you are nice to Shadowheart, as in you talk to her and respect her boundaries and do stuff she generally agrees with, she will choose to free Nightsong all on her own. You don't need to roll to convince her at all, or romance her or even push back on her Shar worship that much. You just leave it up to her, and she chooses that path. (Side note, what brilliant writing.)
Astarion is not like that at all. Even if you were tight as fuck he would not choose the good option, with no input, in Act 2. Astarion, like all the companions, needs help and connection to reach healthy actualization, but I think its great, resonant writing that Astarion needs the most active intervention of all. Because he's had his autonomy so completely taken away from him, he simply doesn't know how to use it anymore. He doesn't know how to connect with other people anymore. He's someone that's learned to enjoy cruelty, to resent the pleasure of others, and to be entirely selfish for survival. It makes sense that he must be dragged back into being capable of trust. He needs to be forced to be part of a community again; caring about things; allowing for vulnerability and optimism.
And like. How fucking smart is it to have THIS guy in THIS game. Because of the tadpole and the existential threat they're up against, he is actually forced to work with you. This kind of character is so hard to do in most RPGs because its like... why wouldn't he just betray you all and leave? Why would he stick with you? The tadpole clears all of that up. Astarion must stick with you or hes lost and dead. Astarion knows that you and the other companions are collectively stronger than him, so he can't betray you. He is forced to rely on you by default.
This is also what makes him SUCH a good version of the "you can fix him" romance; you are almost never the direct target of Astarion's bastardry because he can't fuck with you. The problem with Fix Him's is that usually they are a threat to the romantic lead, and fixing him requires enduring, soothing and forgiving the worst of his badness as some kind of test of loyalty, hopefully proving to him that being bad isn't necessary (toxic shit). But Astarion... can't do that. He is afraid to actually fuck you over because you are directly tied to his survival, and because you quickly show yourself to be more capable than him. He cannot have real power over you. (Until he's ascended, then he becomes the absolute worst version of the fix-it.)
I do think the trade off is that Astarion not directing his bastardry at you makes it easier to Ignore that Astarion is A Bad Guy, but I think that'd happen even if he was more of an asshole to you, so who cares. I think he's got the best written Redeemable Evil RPG Companion arc I've seen honestly. I love that he's so fun while being so tragic, whether redeemed or not.
1K notes
Text
☁︎。⋆。 ゚☾ ゚。⋆ how to resume ⋆。゚☾。⋆。 ゚☁︎ ゚
after 10 years & 6 jobs in corporate america, i would like to share how to game the system. we all want the biggest payoff for the least amount of work, right?
know thine enemy: beating the robots
i see a lot of misinformation about how AI is used to scrape resumes. i can't speak for every company but most corporations use what is called applicant tracking software (ATS).
no respectable company is using chatgpt to sort applications. i don't know how you'd even write the prompt to get a consumer-facing product to do this. i guarantee that target, walmart, bank of america, whatever, they are all using B2B SaaS enterprise solutions. there is not one hiring manager plinking away at a large language model.
ATS scans your resume in comparison to the job posting, parses which resumes contain key words, and presents the recruiter and/or hiring manager with resumes with a high "score." the goal of writing your resume is to get your "score" as high as possible.
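in spirit (real ATS products are more sophisticated, parsing sections, titles, and synonyms), the scoring described above boils down to keyword coverage. a toy sketch with made-up inputs:

```python
def ats_score(resume_text, posting_keywords):
    # Fraction of the posting's key phrases that appear in the resume,
    # plus the list of matched phrases.
    text = resume_text.lower()
    hits = [kw for kw in posting_keywords if kw.lower() in text]
    return len(hits) / len(posting_keywords), hits

score, hits = ats_score(
    "Led cross-functional teams to ship features on schedule.",
    ["cross-functional teams", "stakeholder management"],
)
# score == 0.5; rewording a bullet to include the missing phrase raises it
```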
but tumblr user lightyaoigami, how do i beat the robots?
great question, y/n. you will want to seek out an ATS resume checker. i have personally found success with jobscan, which is not free, but works extremely well. there is a free trial period, and other ATS scanners are in fact free. some of these tools are so sophisticated that they can actually help build your resume from scratch with your input. i wrote my own resume and used jobscan to compare it to the applications i was finishing.
do not use chatgpt to write your resume or cover letter. it is painfully obvious. here is a tutorial on how to use jobscan. for the zillionth time i do not work for jobscan nor am i a #jobscanpartner i am just a person who used this tool to land a job at a challenging time.
the resume checkers will tell you what words and/or phrases you need to shoehorn into your bullet points - i.e., if you are applying for a job that requires you to be a strong collaborator, the resume checker might suggest you include the phrase "cross-functional teams." you can easily re-word your bullets to include this with a little noodling.
don't i need a cover letter?
it depends on the job. after you have about 5 years of experience, i would say that they are largely unnecessary. while i was laid off, i applied to about 100 jobs in a three-month period (#blessed to have been hired quickly). i did not submit a cover letter for any of them, and i had a solid rate of phone screens/interviews after submission despite not having a cover letter. if you are absolutely required to write one, do not have chatgpt do it for you. use a guide from a human being who knows what they are talking about, like ask a manager or betterup.
but i don't even know where to start!
i know it's hard, but you have to have a bit of entrepreneurial spirit here. google duckduckgo is your friend. don't pull any bean soup what-about-me-isms. if you truly don't know where to start, look for an ATS-optimized resume template.
a word about neurodivergence and job applications
i, like many of you, am autistic. i am intimately familiar with how painful it is to expend limited energy on this demoralizing task only to have your "reward" be an equally, if not more so, demoralizing work experience. i don't have a lot of advice for this beyond craft your worksona like you're making a d&d character (or a fursona or a sim or an OC or whatever made up blorbo generator you personally enjoy).
and, remember, while a lot of office work is really uncomfortable and involves stuff like "talking in meetings" and "answering the phone," these things are not an inherent risk. discomfort is not tantamount to danger, and we all have to do uncomfortable things in order to thrive. there are a lot of ways to do this and there is no one-size-fits-all answer. not everyone can mask for extended periods, so be your own judge of what you can or can't do.
i like to think of work as a drag show where i perform this other personality in exchange for money. it is much easier to do this than to fight tooth and nail to be unmasked at work, which can be a risk to your livelihood and peace of mind. i don't think it's a good thing that we have to mask at work, but it's an important survival skill.
⋆。゚☁︎。⋆。 ゚☾ ゚。⋆ good luck ⋆。゚☾。⋆。 ゚☁︎ ゚。⋆
641 notes
Text
Often when I post an AI-neutral or AI-positive take on an anti-AI post I get blocked, so I wanted to make my own post to share my thoughts on "Nightshade", the new adversarial data poisoning attack that the Glaze people have come out with.
I've read the paper and here are my takeaways:
Firstly, this is not necessarily or primarily a tool for artists to "coat" their images like Glaze; in fact, Nightshade works best when applied to sort of carefully selected "archetypal" images, ideally ones that were already generated using generative AI using a prompt for the generic concept to be attacked (which is what the authors did in their paper). Also, the image has to be explicitly paired with a specific text caption optimized to have the most impact, which would make it pretty annoying for individual artists to deploy.
While the intent of Nightshade is to have maximum impact with minimal data poisoning, in order to attack a large model there would have to be many thousands of samples in the training data. Obviously if you have a webpage that you created specifically to host a massive gallery of poisoned images, that can be fairly easily blacklisted, so you'd have to have a lot of patience and resources in order to hide these enough so they proliferate into the training datasets of major models.
The main use case for this as suggested by the authors is to protect specific copyrights. The example they use is that of Disney specifically releasing a lot of poisoned images of Mickey Mouse to prevent people generating art of him. As a large company like Disney would be more likely to have the resources to seed Nightshade images at scale, this sounds like the most plausible large scale use case for me, even if web artists could crowdsource some sort of similar generic campaign.
Either way, the optimal use case of "large organization repeatedly using generative AI models to create images, then running through another resource heavy AI model to corrupt them, then hiding them on the open web, to protect specific concepts and copyrights" doesn't sound like the big win for freedom of expression that people are going to pretend it is. This is the case for a lot of discussion around AI and I wish people would stop flagwaving for corporate copyright protections, but whatever.
The panic about AI resource use in terms of power/water is mostly bunk (AI training is done once per large model, and in terms of industrial production processes, using a single airliner flight's worth of carbon output for an industrial model that can then be used indefinitely to do useful work seems like small fry compared to all the other nonsense that humanity wastes power on). However, given that deploying this at scale would be a huge compute sink, it's ironic to see anti-AI activists, for whom resource use is a talking point, hyping this up so much.
In terms of actual attack effectiveness; like Glaze, this once again relies on analysis of the feature space of current public models such as Stable Diffusion. This means that effectiveness is reduced on other models with differing architectures and training sets. However, also like Glaze, it looks like the overall "world feature space" that generative models fit to is generalisable enough that this attack will work across models.
That means that if this does get deployed at scale, it could definitely fuck with a lot of current systems. That said, once again, it'd likely have a bigger effect on indie and open source generation projects than the massive corporate monoliths who are probably working to secure proprietary data sets, like I believe Adobe Firefly did. I don't like how these attacks concentrate power upwards.
The generalisation of the attack doesn't mean that this can't be defended against, but it does mean that you'd likely need to invest in bespoke measures; e.g. specifically training a detector on a large dataset of Nightshade poison in order to filter them out, spending more time and labour curating your input dataset, or designing radically different architectures that don't produce a comparably similar virtual feature space. I.e. the effect of this being used at scale wouldn't eliminate "AI art", but it could potentially cause a headache for people all around and limit accessibility for hobbyists (although presumably curated datasets would trickle down eventually).
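Of those defences, the dataset-curation one is the simplest to sketch (my own illustration, not a described implementation): drop training pairs whose image and caption embeddings disagree, since a Nightshade-style pair deliberately mismatches them. The embeddings below are synthetic and the threshold is an arbitrary assumption; a real pipeline would score pairs with a contrastive image/text model such as CLIP.

```python
import numpy as np

# Illustrative curation filter: score each (image, caption) pair by
# embedding agreement and keep only pairs above a threshold. The
# embeddings here are synthetic stand-ins for a contrastive model's.

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def keep_mask(img_embs, txt_embs, threshold=0.2):
    """True for pairs whose embeddings roughly agree."""
    return [cosine(i, t) >= threshold for i, t in zip(img_embs, txt_embs)]

rng = np.random.default_rng(1)
dim = 128

# Clean pairs: the caption embedding is a noisy copy of the image's.
clean_img = [rng.normal(size=dim) for _ in range(5)]
clean_txt = [v + 0.1 * rng.normal(size=dim) for v in clean_img]

# Poisoned pairs: the caption points at an unrelated concept.
bad_img = [rng.normal(size=dim) for _ in range(5)]
bad_txt = [rng.normal(size=dim) for _ in range(5)]

mask = keep_mask(clean_img + bad_img, clean_txt + bad_txt)
print(mask[:5])   # clean pairs survive the filter
print(mask[5:])   # mismatched pairs are mostly rejected
```

Note this only catches mismatch-style poison; perturbations crafted to also fool the scoring model would need the more expensive bespoke detector route described above.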
All in all a bit of a dick move that will make things harder for people in general, but I suppose that's the point, and what people who want to deploy this at scale are aiming for. With public data scraping, that sort of thing is arguably fair game.
Additionally, since making my first reply I've had a look at their website:
Used responsibly, Nightshade can help deter model trainers who disregard copyrights, opt-out lists, and do-not-scrape/robots.txt directives. It does not rely on the kindness of model trainers, but instead associates a small incremental price on each piece of data scraped and trained without authorization. Nightshade's goal is not to break models, but to increase the cost of training on unlicensed data, such that licensing images from their creators becomes a viable alternative.
Once again we see that the intended impact of Nightshade is not to eliminate generative AI but to make it infeasible for models to be created and trained without a corporate money-bag to pay licensing fees for guaranteed clean data. I generally feel that this focuses power upwards and is overall a bad move. If anything, this sort of model, where only large corporations can create and control AI tools, will do nothing to counter the economic displacement (absent worker protections) that is the real issue with AI systems deployment, and will exacerbate the problem of the benefits of those systems being constrained to said large corporations.
Kinda sucks how that gets pushed through by lying to small artists about the importance of copyright law for their own small-scale works (ignoring the fact that processing derived metadata from web images is pretty damn clearly a fair use application).