#i've been working ten hours straight on anthology stuff and i'm exhausted
duckprintspress · 3 years
Anonymous asked:
I'm confused about the scoring system. It was said on the website that people were scored (numerically), and yet someone with much higher scores than I got wasn't accepted, nor were they offered the mentoring/beta/get-to-know Discord.
I can't help but feel there is an error here. Is there a way to explain this?
Hi anon!
So there are a few things going on here, and I’ll try to explain them all.
Thing the First:
The scores on the rubrics are raw scores. Once we really got down-and-dirty with rating things, and I did some data analysis, I discovered that some of our raters were being a bit more generous than others. By the time we were done with all the scoring, there was nearly a 2 point split between the most generous and least generous scorer, on average. This was clearly a fairness issue - if one writer got our three most generous raters, and a second writer got our three least generous raters, even if they submitted identical stories, the second writer would get a lower rating. Rather than leave things like that, I did some research on statistical models that would standardize our ratings.
Basically, with the help of an Excel formula (the "STANDARDIZE" function), we calculated each rater's average score and the standard deviation of their scores, and used those to convert their ratings from the 0 to 20 scale to a different, standard scale - one that ran roughly from -2.5 to +2.5. What the standardization really did was this: for each rater, whatever score they personally gave on average became 0 (so, if rater one's average score was a 15...that was standardized to zero. And if rater two's average score was 10...that was also standardized to zero. So now instead of comparing apples to oranges, we were comparing apples to apples). Then, if they rated a fic 1 standard deviation above or below their average, that became a 1 or a -1. If they rated it 2 standard deviations above or below their average, that became a 2 or a -2. And so on. This ensured that all of the stories were rated on the same scale and everyone was treated equally - standardizing the scores erased the differences caused by some people rating more strictly than others.
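For anyone who likes to see the arithmetic, here's a minimal sketch of that conversion in Python rather than Excel - all of the rater names and scores below are invented for illustration, not taken from the actual anthology data:

```python
from statistics import mean, stdev

# Made-up raw scores (out of 20) from two hypothetical raters - not real anthology data.
rater_scores = {
    "generous_rater": [15, 17, 13, 16, 18, 11],  # averages 15
    "strict_rater":   [10, 12, 8, 11, 13, 6],    # averages 10
}

def standardize(scores):
    """Mimic Excel's STANDARDIZE: (score - rater's average) / rater's standard deviation."""
    avg, spread = mean(scores), stdev(scores)
    return [(x - avg) / spread for x in scores]

for rater, scores in rater_scores.items():
    print(rater, [round(z, 2) for z in standardize(scores)])

# Each rater's own average maps to 0, a score one standard deviation above their
# average maps to +1, and so on - so the generous rater's 15 and the strict
# rater's 10 both land at 0 on the shared scale, exactly the apples-to-apples
# comparison described above.
```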
So, if you and your friends are comparing rubrics (which you’re welcome to do!) and you noticed that some of you did better, numerically, but had different outcomes, that’s likely a factor - one of you may have had more generous or more strict readers.
(Here's some more information on how standardization works - this is just the very basics...there are other forms of standardizing, such as fitting folks to a bell curve, or curving the entire score, but those were not suited to our needs.)
Thing the Second:
The top 20 authors, statistically adjusted for fairness, were invited to contribute to Add Magic to Taste. As it turned out, these top 20 DID roughly correlate to our top 20 by raw scores; comparing the two rankings, there are only two people out of the top 20 who wouldn’t have made it based on raw scores but did when their scores were adjusted to be standardized against the same scale.
The next 20 authors (ranked 21 to 40 when statistically adjusted for fairness) were also invited to the Discord, and we've invited them to potentially be involved in a second anthology. Depending on how many of them say yes, we may be able to open that opportunity up a bit more, but we're not sure yet. Two of these people, by raw score, would have made the top 20, and three of these people, by raw score, would not have made the top 40. However, again, when we standardized the data to reflect the differences in rater strictness, these were the results.
That leaves another 62 people, who had various ratings all below those top 40. Some of them had a raw score fairly similar to their standardized score...and some of them didn't. For example - my wife was an applicant to this, and she gave me permission to use her numbers for this example. By raw average? My wife came in 49th. However, one of her raters was someone who USUALLY rated very high, and gave hers a (relatively, compared to that person's usual high average) low score - when that was adjusted statistically, it caused my wife's fic to plummet to 64th, because even though the raw number itself wasn't bad, it was statistically well outside the norm for that rater. So, believe me when I say - these standardizations can make a big difference. If you, or anyone reading this, would like, I'm willing to send you what your standardized scores were (while still maintaining reviewer anonymity). I was originally thinking of adding them to the rubrics, but doing so would have been a lot of work, and so I passed - next time we do this, they'll probably be on there.
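To make that concrete, here's another small sketch with entirely invented numbers (not my wife's scores, or anyone else's) showing how a raw score that "isn't bad" can still sink a story once it's standardized, because it falls far below that particular rater's norm:

```python
from statistics import mean, stdev

# Entirely invented numbers, for illustration only - these are not anyone's real scores.
# Each rater's full list of raw scores (out of 20) across every story they read:
all_scores = {
    "generous":  [18, 17, 19, 16, 13, 15],  # usually rates very high
    "typical_1": [12, 14, 11, 13, 12, 10],
    "typical_2": [13, 12, 14, 11, 12, 10],
    "strict":    [8, 10, 9, 7, 10, 11],     # usually rates low
}
stats = {name: (mean(s), stdev(s)) for name, s in all_scores.items()}

def standardized(rater, score):
    avg, spread = stats[rater]
    return (score - avg) / spread

# Story A was read by the generous rater plus the two typical raters;
# story B was read by the strict rater plus the same two typical raters.
story_a = {"generous": 13, "typical_1": 11, "typical_2": 12}
story_b = {"strict": 10, "typical_1": 11, "typical_2": 12}

for label, ratings in (("A", story_a), ("B", story_b)):
    raw = mean(ratings.values())
    std = mean(standardized(r, s) for r, s in ratings.items())
    print(f"story {label}: raw average {raw:.2f}, standardized average {std:+.2f}")

# Story A has the higher raw average (12.00 vs 11.00), but its 13 came from a
# rater whose personal norm is about 16, so it standardizes to a large negative
# number, while story B's 10 is actually above the strict rater's norm. Once
# standardized, B comes out ahead of A - the same kind of swing that dropped
# one story from 49th by raw average to 64th when adjusted.
```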
Thing the Third:
Often, what separated a fic that succeeded from a fic that didn't was the range of the raters' scores. For example, the fic that ended up with the highest rating (by both calculations) wasn't anyone's favorite fic - but all three readers thought it was solid, and that was enough. My personal favorite fic? Didn't even hit the Top 20. What often happened was -
Top fics: either all three people liked it pretty well, or one to two people adored it and the other person or people liked it well enough.
Second tier fics: either all three people thought it was okay, or one person loved it and two people were fairly meh about it.
Middle-range fics: either all three people thought it was average, or one person loved it and two people didn’t like it, or two people thought it was pretty good but one person hated it.
Lower quartile fics: either all three people didn’t think the fic was “up to snuff,” or one or two readers really hated it while a third thought it was average.
This isn't universal, of course - but a fic that had one really high rating could easily do worse than a fic that had three so-so ratings, because...that's how averages work. And that's also why we had three readers for each - to try to even out some of the differences that would arise if someone had an extreme reaction to a fic that others didn't. Obviously, it's not a flawless system - no system is flawless - but with the resources and manpower we had, we thought this was a fair way to handle things, and we truly did our best. As soon as we broke 20 applications, we were never going to be able to accept everyone, and so we strove to create a transparent system that treated all of our applicants equally.
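A quick illustration of that averaging point, using made-up scores out of 20:

```python
from statistics import mean

# Invented raw scores out of 20 - one story a single reader adored, versus one
# that all three readers simply thought was solid.
loved_by_one  = [19, 9, 10]   # one rave, two lukewarm reactions
solid_for_all = [14, 13, 14]  # nobody's favorite, but everyone liked it

print(round(mean(loved_by_one), 2))   # 12.67
print(round(mean(solid_for_all), 2))  # 13.67
```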
Thing the Fourth:
Now, in addition to the "why might scores be higher/lower" aspect of your question, there's the aspect of "getting the Discord invite." The top 40 folks got Discord invites automatically, and those offers were based solely on the rating they received.
The other Discord invites that we sent out were not based on ratings alone! Just as we had a "reader subjective feelings" category on the rubric, subjectivity came into play here too: when we'd finished rating all the stories, we were left with a conundrum - all of us had fics we liked that didn't make the top 40. Maybe it was the "one person loved it and two people didn't like it" permutation. Maybe it was that all of us thought it was "good" but not "great." Maybe some aspect of the story caught our eye. Based on our reactions, and the fics we saw that we wished had made it, we selected people to get invites. Those Discord invites were sent out based solely on subjective criteria.
Yes, we worried about doing this. Yes, we went back and forth about doing it at all. But in the end, what we decided was - we didn't want to give Discord invites to everyone, because there were plenty of people whose writing we didn't think was quite ready yet - mentoring is an intensive prospect, and one for which we won't get paid up front and might possibly never get paid, and while this all looks wonderful from the outside, please do remember that we're running a business - one that I've been working my ass off on for more than four months without yet earning enough to draw a single paycheck. So inviting everyone was never in the cards. And on the other hand, if we chose to give invites to no one, that would mean potentially having some people who caught our eye "slip through the cracks." What if they got too discouraged to reapply? What if we missed the chance to work with them, after they'd impressed us?
To use an analogy - we saw something in everyone who applied, but in some it was "this is an uncut diamond, and we aren't in the position to take it from raw to finished," and in others it was, "this is a diamond with a crack, or a flaw, or a rough spot...and we think if we put in the work, we can get it to perfection." And our verdict on the uncut diamonds isn't, "this is uncut and it will never be cut," it's, "all of these diamonds have spent years honing themselves and working hard to strive for flawlessness, but some are clearly farther along that journey than others. Once these uncut diamonds have shaped more of their rough edges themselves, we hope they'll come back when they are ALSO only one flaw from perfection, and work with us then!"
There was no way for us to win, and there was also no fair way to distribute invites based solely on the raw scores, or even based solely on standardized scores, because the scores sometimes weren't reflective of our actual opinions of the writing. For example - if someone wrote a grammatically perfect story, with a compelling use of language, but the plot and characters were inaccessible to us because it required fandom knowledge we didn't have, that might have scored very poorly, but we have every reason to think that if they'd chosen a different work that was more accessible, they'd have done much better. Or, as another example - if someone's writing was really sloppy, because of a lack of editing or possibly because English isn't their native language - but they have a skill for creating characters, or setting a scene, or had excellent pacing - then again, they could have ended up with a score that didn't reflect the actual potential that we, using our judgement and expertise, saw in their work.
So, flat out - yes, there are inequalities in how the Discord invites were distributed to the 62 people who didn't make the cut for either anthology. And yes, we agonized over whether to give them out at all. And no, I won't swear that we always made the right choices - we were going by the one story submitted to us, and we had to use our best judgement based on what we were presented - what each applicant chose to submit. In the end, we invited the people who - regardless of their score - we personally thought were the closest to being sellable - in the sense of, "probably only missing one piece that would get them from 'didn't make it' to 'now we're talking.'" And I truly, truly wish that we could have taken everyone. But if we spend all our time mentoring people, then we won't have time for any of the other aspects of this business. We are not a writing school. We are a book publisher. This ISN'T just fandom, and I DO have to think about what is sellable and what isn't, because in the end...I'm trying to make money, and pay my staff, and give our authors the highest royalties possible, and, and, and.
As a further note on this topic? We are still issuing new Discord invites, based on e-mail conversations we’re having with people. Several people who didn’t initially get those invites? Have now gotten them. It just depends on how people are responding to us, and the conversations we’re having, and lots of other factors.
And, as I tried to say in the post I put up earlier today about notifications: even the people who didn't get invites have potential. Every single writer who applied has potential. All of you, even if you struggled with multiple areas, had some aspect in which you shined. In a perfect world, we would help you all. But this isn't a perfect world, and I don't have the hours in the day to bring up the people who aren't already close, and I'm sorry about that. So, please, please - if you didn't make it, don't be discouraged, and don't give up. You're the only one who can tell your stories - if you don't do it, no one else will. Find fandom friends who will give you honest critique. Learn to read your own writing with a critical eye. Track down stories that really speak to you, and read them like a writer - see how the original writer put them together, and deliberately emulate what you thought worked. There are many, many ways to improve your writing craft, and if y'all want to be published, either with us or with anyone, we strongly encourage you to examine whichever ones appeal to you and work for you.
There isn't a single person who applied who couldn't, one day, be published by Duck Prints Press.
We were never going to make everyone happy, however much we hate making people sad.
We did our best to make as many people as possible happy, while also doing what we feel to be best for Duck Prints Press.
If we hurt you - we're sorry. We said in the application process that we'd be giving honest feedback, and we've never made a secret of the fact that this is a business and our goal is to publish books that sell - nor did we pretend that we'd be able to take more than 20. But we were so impressed by the quality of what we received that we did everything we could think of to open the doors to more folks, while still maintaining the core integrity of our business model. That means we have to narrow the pool; we can't just take everyone, especially now, when we're so small and new. Our desire to take as many people as possible is why anyone outside the top 20 got an invite at all, and why we planned an entire second anthology on the fly, rather than everyone below the top 20 getting nothing but a rejection letter.
Now, as a final thing - it IS possible we made a mistake. We’ve spotted one big one already, and we’re working with that author to rectify the situation. If you truly believe we made a mistake, please e-mail us, and we can look into it. Our email is info @ duckprintspress dot com.
Sorry this got long - but I figured that if one person wondered this, others did too, and, as has been true since day one, our goal is to be transparent - so I thought it better to answer more thoroughly rather than less.