#slushbot
literaticat · 5 months
Note
Regarding the AI-to-sort-through-submissions question, a major publisher recently opened (briefly) to submissions and was going to use an AI called Storywise to sort their subs. There was a massive backlash on social media and they rescinded the idea, which got them a lot of bad press. Not sure if you saw that controversy. So I wonder if the person who asked you that question also saw that, or if some agencies/other publishers are now doing this too.
I hadn't heard of this controversy, so I googled it, and came up with this article recapping a thing where Angry Robot was going to use Storywise to sort through submissions and got a lot of pushback. (Are they a major publisher? Is this what you were referring to? Or was there another thing?)
The interesting / wild thing about this to me is that... this doesn't sound like a big deal, actually? They put up an FAQ about it (which they have left up for transparency) -- and reading the article above and the FAQ, if they are to be believed, it seems that AR was going to use a non-generative AI program that would NOT be trained on, or retain, any of the authors' work.
This "slushbot" was going to filter submissions to the appropriate editor and flag any anomalies (like, "doesn't fit requested word count" etc).
All submissions, whether or not they were "flagged" in this way, were being looked at by an actual human. (Because slushbot can make mistakes!).
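(For the curious: here's roughly what that kind of non-generative, rules-based filter could look like under the hood. This is my own made-up sketch -- not Storywise's actual code, and every editor name, genre, and word count below is invented -- but it shows how a program can route and flag a submission without generating anything or "learning" from the manuscript.)

```python
# Made-up sketch of a rules-based "slushbot": route each submission to the right
# editor and flag anomalies. No generative model, no training step, and nothing
# here retains or learns from the manuscript text itself.
from dataclasses import dataclass, field

@dataclass
class Submission:
    title: str
    genre: str
    word_count: int
    flags: list[str] = field(default_factory=list)

EDITOR_BY_GENRE = {"science fiction": "Editor A", "fantasy": "Editor B"}  # invented routing table
WORD_COUNT_RANGE = (70_000, 120_000)                                      # invented guideline

def triage(sub: Submission) -> str:
    """Flag anomalies and pick whose pile it lands in -- a human still reads it."""
    low, high = WORD_COUNT_RANGE
    if not (low <= sub.word_count <= high):
        sub.flags.append("doesn't fit requested word count")
    return EDITOR_BY_GENRE.get(sub.genre, "general inbox")

editor = triage(Submission("SPACE BOOK", "science fiction", 40_000))
print(editor)  # Editor A -- it arrives flagged for word count, but it still gets read
```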
This... doesn't sound that egregious to me? In fact, it sounds super reasonable? AM I BONKS? Am I missing something?? Maybe!
Things I -- and possibly you! -- use every day that have similar features:
GMAIL. Uses AI to filter out spam, highlight important messages, sort promotions and ads into different folders, remind me to follow up on things, etc. Does it make mistakes? Sure, sometimes. I do have to check the spam filter now and again to make sure it isn't wrongly flagging important things. Does it make my life infinitely easier? YES. I get hundreds of emails a day -- if I had to look at all the trash ones, I would NEVER find the important ones!
SPELLCHECK / AUTOCORRECT / AUTOCOMPLETE / GRAMMARLY, et al. Use AI to tell me when I have misspelled something or to suggest wording. Is it always right? No! I never mean DUCK, spellcheck! Does it make my life easier? For sure! I have fat fingers, this saves me from many a gaffe!
"IF YOU LIKE ____, TRY _____" -- you know how online retailers often suggest things based on your buying preferences, or give you a bargain on things they know you enjoy to get you to re-buy, etc? You know how TikTok and Netflix push content that they think you'll be into based on your viewing habits? Or you'll just be chatting about beekeepers and suddenly you have ads for bee merch on your Facebook? That's the magic of the algorithms, babes. And they are everywhere.
QUERYMANAGER. I don't know if this technically counts as "AI" -- but QM does do things like flag submissions that meet certain criteria I've designated (like if it's somebody who has been previously published, or if it's a referral, etc.) -- it also tells me the history of submissions, so I know if they've queried the agency before, and when, and to whom, and with what material. (This is how I KNOW FOR A FACT that so many people don't follow the DUCKING directions.) And, while currently I have to manually forward submissions to other agents if they are more appropriate for them, I could see a world where that was automated -- like if all queries came to a central repository and QM2.0 forwarded them to the specific agents at the agency who were open to [whatever] kind of book (something like the little sketch below). Would that be a bad thing? Or a HELPFUL thing? IDK.
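(And since I brought it up: here's what I imagine a "QM2.0" router might look like -- emphasis on imagine, this is not a real QueryManager feature, and every agent name, category, and field below is invented for illustration.)

```python
# Speculative "QM2.0": flag a query against criteria an agent has set, and forward it
# to whichever agents at the agency are open to that category of book. Invented data.

AGENTS_OPEN_TO = {
    "Agent One": {"picture book", "middle grade"},
    "Agent Two": {"young adult", "graphic novel"},
}

def flag_and_route(query: dict) -> tuple[list[str], list[str]]:
    """Return (flags, recipients) -- the humans still make every actual decision."""
    flags = []
    if query.get("referral"):
        flags.append("referral")
    if query.get("previously_published"):
        flags.append("previously published")
    recipients = [agent for agent, cats in AGENTS_OPEN_TO.items() if query["category"] in cats]
    return flags, recipients

flags, recipients = flag_and_route(
    {"category": "middle grade", "referral": True, "previously_published": False}
)
print(flags)       # ['referral']
print(recipients)  # ['Agent One']
```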
These are just tools, at the end of the day. I don't think there's anything morally wrong with using them or trying them out as long as you aren't letting them make actual decisions for you. Like, I would never want a slushbot to reject things (or accept them!) on my behalf -- I have to be the one to look -- just as I would never want an algorithm to purchase things on my behalf just because I MIGHT like them -- no thank you! By all means suggest, but I have to make the final decision!
(TL;DR: Using spellcheck on your work to make sure you haven't called me KENNIFER? Yes please! "Creating" work that you intend to submit with your name on it using generative AI? Please, no. Using an AI filter to flag submissions and sort them to the correct person? Sounds OK! Stealing authors' work to train an AI? Not OK!)
literaticat · 5 months
Note
I saw a comment that agents are using AI to sift through queries. Is this true and is this allowed? Wouldn’t that put unpublished work owned by that author into the public domain? I know lawyers are currently being sanctioned for putting private client info into AI to help with cases. This isn’t as high stakes, but it does put the author’s work at risk of being stolen. Thoughts?
There's a lot to unpack here!
Saw a comment... where? From whom? I have never heard of this, personally, so while I'm not saying it's untrue... well, I don't think it's NORMAL PRACTICE or anything, at all. Just on its face, that sounds sus to me. (For one thing, I don't think most agents would even know HOW to "use AI" to write a birthday card, let alone how to program some kind of slush-robot and trust it to accurately gauge our mercurial preferences!)
I can't speak to the lawyers-being-sanctioned-or-whatnot piece, idk about that, so let's focus on the issue at hand. Let's say I somehow got hold of an AI tool that would sort through my slush pile and only show me the things it deemed most likely to sell. A SLUSH ROBOT!
First, an AI sorting through slush wouldn't "put your work into the public domain" (words mean things!). Your work that you personally created is, in the US anyway, under copyright protection the moment it leaves your brain and is fixed in a tangible medium -- you don't even need to register the copyright (though you or your publisher will do so when it is published, just for an additional level of protection!). Copyrighted work is NOT in the public domain; that's why authors are suing various tech companies that used their work to train their AI. So, if I were to use this tool to TRAIN an AI on your work, I'd be infringing on your copyright (unethically, in my opinion, but I guess the legality is for the courts to decide!) -- but even that would NOT render your copyright null and void or put your work into the public domain.
But because this hypothetical slush robot is probably a program that I bought or got from someplace, not something I've invented and am training myself, it isn't using any data it gleans from me for training -- it's already been trained. For example, the GPT in ChatGPT stands for "generative pre-trained transformer" -- it uses the billion-whatever things it was already trained on to generate new things / make predictions / whatever.
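(For the nerds: here's a tiny, totally hypothetical illustration of that difference. Using a pre-trained model is just asking it a question -- there's no training step in sight, so the new text isn't being learned from or kept by the model. The file name and label are made up.)

```python
# Hypothetical sketch: a PRE-trained classifier only does inference on new text.
# There is no .fit() / training call anywhere below, so the query isn't "learned."
import pickle

# A model somebody else trained long ago, on other data (made-up file name)
with open("pretrained_query_classifier.pkl", "rb") as f:
    model = pickle.load(f)  # e.g. a scikit-learn text-classification Pipeline

new_query = "Dear Agent, THE MOON THIEF is an 85,000-word YA fantasy..."
label = model.predict([new_query])[0]  # inference only -- no weight updates, nothing stored
print(label)  # e.g. "YA fantasy"
```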
I talked to my friend and client Martha Brockenbrough about this (her excellent book on AI, FUTURE TENSE, is out now and is required reading if you are interested in this topic but don't know much about it!). I asked her for an example of how a tool like this might work and she replied:
"For example, companies that get a lot of applicants may use AI to narrow the field. The tool isn't training the model -- it is using an algorithm trained on resumes to look at the new one. AI looks at patterns and makes decisions. Just like people do, but in some cases faster / better / worse. So, when you turn in a resume to a company, are you worried that the HR person is going to steal your resume or leak info?"
Basically, this worry is akin to one that has been around forever: the concern that if you query an agent, they will "steal your ideas." Which -- a) we don't want your ideas, actually; they are worthless on their own; b) stealing them would actually be quite a bit harder and more complicated than just representing you and making money off them in the normal fashion; and c) don't query an agent you don't trust, I guess!