timbrrwolfe · 1 year
Text
Alright I'm /pretty/ sure I already talked about part of this on here (unless it's sitting in my drafts which is also very possible) but I figured I may as well expound on what I was alluding to in the tags of this post.
So in 6th grade I had a habit of forgetting my pencil. Because I had a habit of forgetting a lot of things. Because I had undiagnosed adhd. So, when I was in need of a pencil, I'd sometimes have to borrow one from a teacher. And my homeroom teacher had a policy of requiring collateral when borrowing stuff (which like. Fair enough if he's paying out of pocket for stuff). And in one of those instances I used the book I had out from the school library (Goosebumps: Why I'm Afraid of Bees iirc) as collateral. And then promptly forgot to return the pencil. And somewhere in my kid brain, which was terrified of getting in trouble for things, I thought something along the lines of "Oh no, now that book is gonna be out forever and I'll have to pay *gasp* a fine or even for the entire book, I can never use the library again." Which was followed up (at some point) by my bad habit of losing my keys leading to someone finding them and using the library card on them to borrow a bunch of stuff they never took back (including, ironically, a goosebumps dvd we already owned and, for some reason, some book on Hitler). So I stopped using the library nearly as much for a while.
Anyway, in 8th grade I got into a situation that was a confused mess. At my lunch table we were doing some banter, and I was just starting to get comfortable in jumping into these situations (because, as it turns out, I also had undiagnosed autism. Which also explains a lot). Unfortunately, in this particular instance, I made a(n admittedly convoluted) jab at someone that essentially boiled down to calling them ugly. Something along the lines of "your face is like a car crash. Horrifying but I can't look away". Except that instead of my intended target, one of the girls at my table thought I was talking about her and started SCREAMING obscenities at me. At which point I just kinda put my head down instead of like. Trying to explain the situation or any other kind of response. So, because there had been enough of a scene made that the rest of the cafeteria went silent, the teachers on duty naturally came over to find out what happened before fists started potentially flying. And so after some discussion in which I did nothing to defend myself, I ended up getting punished. By....being forced to sit on the outer edge of the cafeteria instead of with friends at the table. But the confusion doesn't end there, no no. See, the teacher (who I'd had as a teacher before) had me pick a number between 1 and 10. And I chose 7. So I had to spend 7 days on...The Perimeter. Except that I wasn't...entirely sure whether that 7 days counted the rest of that lunch period or not. And it was a Friday so that completely threw off my understanding of how long my punishment was going to last. As a bonus, it turns out the punishment wasn't much of one. I didn't mind being on my own, and I even got to get up and get my lunch whenever I wanted, instead of waiting until it was my row's turn to get up and get food. So I just...stayed there for the rest of the year for lunch (at least, when I didn't have "lunch detention" for being late. Which was mostly only a punishment because it limited what I was allowed to eat because for some reason they only let lunch detention kids get the daily hoagie option? I dunno, it was a very strange system. Also, I digress).
My point in this whole story is that if I hadn't gotten spooked out of going to the library (both in-school and public) I probably would've read WAY more books sitting on The Perimeter for lunch for however long the rest of the year was. But instead I was mostly reading gaming magazines and game guides. Which, as an aside, I almost got more out of than playing the games themselves, depending on the game and guide in question. Like, Golden Sun was a pair of RPGs and (thanks to the aforementioned undiagnosed adhd) I did not have the attention span to play through them completely. Still haven't. Maybe someday. But I sure did read through the game guide a bunch. And spent a lot of time daydreaming about having different psynergy and using it in my day to day life. Which is why for a long time I considered Golden Sun one of my favorite games only to put 2 + 2 together when I was older and realize that a lot of my enjoyment was just in the daydreaming about it.
3 notes · View notes
livefromphilly · 5 years
Text
Fuji X100V Thoughts After 15 days
Tumblr media
MAY 2021 UPDATE: My camera suffered internal water damage so it is perhaps not as weather sealed as I originally thought. I shot it in the rain and snow a few times, certainly nothing worse than how I treat my Sony A7III, and apparently some water got inside and caused the camera to shut off frequently during operation. I sent it to Fuji for service and was told “Upon attempted repair of your X100V, internal liquid damage was found.  Attached are the images of the findings.  The camera is beyond economical repair.  The new estimate below is to swap the camera for a reconditioned X100V camera.” with the cost for the reconditioned camera coming to $1220. 
Since that’s pretty close to the cost of a new camera, I would probably just buy a new one and make sure I get an extended warranty that covers any sorts of damage. For now, I no longer own the camera. 
I would still probably recommend the camera but I think one has to look more closely at the X100F since used prices for that camera are so good. 
PROS:
The great design I praised in the last camera has been improved on a bit. I never noticed it until I got the X100V, but the X100F's front faux leather is different heights on either side of the lens. As nice as I thought that camera looked, once I noticed it, it was hard to not think of it as very ugly compared to the V. They also redid the ISO dial, making it much easier to use to change ISO, and they moved the drive button, which is great because I hit that accidentally all the time. 
Performance is even better than what it was with the X100F and is another step up in the APS-C realm. High ISO on the 26-megapixel sensors seems ever so slightly better than on the 24-megapixel sensors, and highlights are easier to bring down. It's not as good as my A7III for really high ISO stuff but it's just fine in most instances. 
Remember what I said about the lens being not good wide open or up close? Well, they redesigned the lens and it's very good both wide open and up close. It seems sharper overall, which helps show more resolution with the 26-megapixel sensor. Because of that, the jump feels bigger than just two megapixels. 
The autofocus speed doesn't seem much different, but it's far more reliable. This is especially noticeable when using eye detect or focusing in low light. 
The battery life improved a little bit. This feels more in line with the old Sony cameras in terms of drain, which I can more than live with. 
It's finally weather sealed (when you add the filter adapter and a filter). [Ehhh...maybe not!]
Tumblr media
CONS:
They really should include the filter adapter and filter. Not doing that feels kinda cheap when you advertise this as weather sealed. 
They also no longer include a charger. I hate when companies do this (Sony also did this with their A7III, which feels cheap). 
The camera still uses a non-standard way of measuring ISO that isn't in line with any other camera manufacturer. This means Fuji images look great at ISO 6400, but you'll need a lower shutter speed than you would with a Sony camera at ISO 6400 to get the same exposure. 
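(To put rough numbers on that — the size of the gap here is my own assumption for illustration, not a measured figure: if Fuji's ISO 6400 gathers light more like a standard ISO 3200, then a scene that meters at 1/125 sec at f/2 and ISO 6400 on the Sony would need about 1/60 sec at f/2 and ISO 6400 on the Fuji. Halving the effective sensitivity means doubling the shutter time to keep the same exposure.)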
Tumblr media
All in all this camera feels like a huge leap over the X100F, a camera which I had sort of a love-hate relationship with. Ok, love-hate is maybe harsh; it was more love-don't-really-love. Whereas with the F there were ways that my other cameras were much better, with the V the gap has closed significantly, to the point where I could theoretically just use this and be fine in most situations. 
This review got deleted but I saved the text. I had sample photos up which I will maybe add back at a later date. Pro-Tip: Don’t edit posts on the Tumblr app. Even just changing the tags erased the whole damn thing.
Tumblr media
SAMPLE PHOTOS:
Tumblr media
ISO 8000 | f/2 | 1/30 sec. | JPEG - Velvia Film Simulation
Tumblr media
ISO 160 | f/5.6 | 1/30 sec. | JPEG - Classic Chrome Film Simulation
Tumblr media
ISO 3200 | f/2.8 | 1/125 sec. | JPEG - Classic Chrome Film Simulation
Tumblr media
ISO 160 | f/4 | 1/400 sec. | JPEG - Velvia Film Simulation
Tumblr media
ISO 8000 | f/2 | 1/60 sec. | JPEG - Provia Film Simulation
Tumblr media
ISO 8000 | f/8 | 1/160 sec. | JPEG - Classic Chrome Film Simulation
28 notes · View notes
robininthelabyrinth · 6 years
Text
Fic: Teacher Teacher (ao3 link)
Fandom: Flash, Legends of Tomorrow
Pairing: Barry Allen/Mick Rory/Leonard Snart
Series: Flashwave Week 2018 (Destiny Series)
Summary: "I'm starting a school," Len says. "For magical creatures."
"So, like - Hogwarts?"
"No, not like Hogwarts, what do you think I am?"
"A nerd?"
A/N: @flashwaveweek - Flashwave Week: Supernatural AU
——————————————————————————————
"So I've decided to start something of a charity project," Len says.
"O-kay," Mick says slowly. "And?"
"Well, I'm going to need some help -"
"And I'm going to stop you right there. No."
"You don't even know what help I need."
"Boss," Mick says dryly. "I know you. I knew you when you were a kid. I knew you when you were a thief. I knew you when you were a supervillain. Do you really think that you've suddenly become a mystery just because you got magic powers and a book?"
"I didn't get magic powers," Len grumbles. "I got the powers of Destiny of the Endless. And, yes, it came with a book, I'll grant you that - the Book, even. But it's a sight more impressive than magic powers."
"Whatever. You're scheming, Len. Just because you went blind doesn't mean I suddenly have."
Len flips him off, which Mick supposes is fair.
"Can't you just trust that it's scheming that you'd like?" Len tries, like Mick's newly become an idiot or something, and Mick gives Len a look signifying what he thinks of that suggestion.
Len might be blind, but he knows Mick well enough to know what Mick's doing.
"Fine," Len says, rolling his eyes. Mick's still not used to them glowing inhuman blue like that. "Won't you at least hear me out?"
"What, and let you have a chance to use that silvertongue of yours to convince me?"
"Mick."
"Oh, fine. Hit me."
"A school," Len says. "For magical creatures."
"So, like - Hogwarts?"
"No, not like Hogwarts, what do you think I am?"
"A nerd?"
Len rolls his eyes. "I wanna teach 'em how to handle the modern world."
Despite himself, Mick's interest is piqued. "Don't they already?"
"No, most of 'em retreated instead. Various places: to Faerieland, to Dreamland, to Hell, to other realms -"
"Hell? You serious?"
"Mick," Len says, very steadily. "By chance do you remember hooking up with some guy with wings and a piano fetish?"
"Huh? Oh, yeah, sure. That was back when I was Kronos. What about it?"
"That was Lucifer."
"Yeah, he said -"
"No, Mick. The real Lucifer. That's why the Time Masters looked so surprised when you came back out alive and sane."
"...Oh. Huh. Say -"
"No, he's taken, or as much as practicable."
"Damn."
"Literally, in this instance."
Mick sniggers.
Len smirks.
"Okay," Mick says. "So this school. What were you thinking?"
"How to handle things in the modern day," Len says, brightening. "Basic things: trains, cars, electricity, music, basic conversations, cultural expectations -"
"Why, though?"
"Because if they get a basic primer in modern day human life, they can come back. All of them: fairies and vampires and brownies and werewolves and spirits of anything you like."
"And we...want that?"
"Of course we want that! Think of how much more interesting everything will be!"
"Yeah, and dangerous. Some of those things eat humans, don't they?"
"Mick. The guy who helped the Legends unleash literal demons and dragons and shit does not get to bitch about a couple of household spirits and a few bumps in the night."
"...we're gonna put 'em back eventually."
"All of 'em?"
"Most of 'em!"
"Even the dragons?"
"...I like the dragons."
"But Mick," Len says, opening his blind eyes wide. "Don't they sometimes eat humans?"
"Oh, all right, no need to get fucking shirty about it," Mick says. "I'll give you that this school of yours ain't a half-bad idea. But why should I help out?"
He doesn't ask why Len asked him. It doesn't matter how well-fit or not he is for a given task, Len always asks him; he's as necessary to Len as Len's right hand.
He learned that the hard way, in a shatter of bloody ice and a shout in a hoarse, pained voice. He's not going to forget it anytime soon.
He's a little concerned about what's going to happen at the end of his life, which is - as far as he knows - still a mortal one, while Len has taken on the mantle of the Endless, which implies something a little less limited, but he feels pretty sure that Len has something in mind to take care of that issue.
Len usually does. Scheming bastard.
He probably even has a plan to hook Mick up with someone similarly immortal just to make sure Mick agrees to immortality himself.
"- even putting aside how awesome it's going to be," Len is saying, "you should help because it'll help lots of people -"
Mick snorts.
"- and because I'm calling in my favor from Tulsa."
"...fuck." Len's always calling in some imagined favor or another to get Mick to do something that Mick would probably do anyway just because he's a sucker for Len asking him for things and always has been, but Tulsa is an actual favor that Mick owes, so clearly Len's really invested in this little project of his.
And, well, what the hell. Mick's not about to let Len out of his sight again anytime soon; he might as well do something worthwhile with his time, and this school of Len's seems as worthwhile as anything else.
Len is looking all hopeful, though, like he thinks there's a chance Mick might say no, sorry, I'm not doing this, I'd rather go off and keep up with the Legends - or the fire - or whatever.
The man literally became Destiny itself and he's still fucking insecure.
Probably just to fuck with Mick. Mick wouldn’t put it past him.
"Fine," Mick grumbles. "But you've got to make sure I get laid."
He doesn’t actually mean it. Len’s ideas for people Mick should hook up with are universally godawful.
Well, Mick usually sleeps with ‘em anyway, and it’s usually the best sex of his life, but it doesn’t last or anything; no one who sees how co-dependent he and Len are ever agrees to make it last no matter how many times Mick explains that he’s not ever going to fuck Len, both because Len is ace and doesn't want to and also because Mick has been the other half of Len's brain so damn long that he can't see the man as attractive. No one ever believes him, even though it's true.
"I promise," Len says earnestly, which means he’s already planning something. For someone as disinterested in romance and sex as Len, he sure as fuck was interested in meddling in other peoples' love/sex lives.
Mick officially gives up, gives in, and - just for kicks - gives Len a nudge on the shoulder to indicate as much.
"Great!" Len exclaims. "I'll tell your co-teacher and you can get started right away."
"Hold up," Mick says. "Co-teacher?"
"Didn't I mention..?"
"No. You didn't. And you know it."
"Oh, well," Len says, utterly unapologetic. "Too bad you've already agreed."
Mick'd say he is gonna kill the little fucker, but that threat rather lost its taste after the Oculus.
Although now that he thinks about it...
"Say," he says as fake-casually as he can manage. "This whole 'Endless' thing means you're immortal, right? Does that mean -"
"You theoretically could shoot me and I'd survive," Len agrees, because as much as Mick knows Len, Len also knows Mick and figured out exactly where he was going with that. "But then I wouldn't tell you anything."
"You wouldn't tell me anything anyway."
"Yeah, you're right, I wouldn't."
"Boss, your usual assholery aside, you can't just introduce me to some random person as a co-teacher; it'll be awkward as fuck!"
"Good point," Len says thoughtfully. "Well, at least it's someone you know."
Mick's about to ask for more information, but Len promptly disappears.
Fucking asshole.
Mick goes to find the school - it's not hard to find anything in Len's gardens, because almost by definition every pathway leads to where you want to go, it being the Garden of Destiny and all that - and he's expecting just about anything in his co-teacher, from one of the Legends to one of their old criminal co-workers to the homeroom teach he had a crush on as a kid, but somehow Len still manages to surprise him.
"What are you doing here?" he exclaims.
Barry Allen, the Flash, blinks up at him from the table. "Um," he says. "Apparently I'm - co-teaching in a school? According to Snart?"
"Why are you here instead of superheroing?" Mick clarifies.
Barry rubs his eyes. "I, uh - there was a disaster. To save the world, I ran into it and disappeared, leaving Wally to be the Flash for - a while. A fair long while. It was this or sit around in the Speed Force the entire time, and, well, this seemed – less awful. Speed Force is kinda creepy."
“…fair enough,” Mick says. He’d say he’s surprised, but actually that sort of disaster sounds right up the Flash’s alley. He’s traveled with Wally on the Waverider, though; the kid’ll do a good job.
At the very least, he thinks to himself, this means that he doesn’t have to worry about this being one of Len’s ill-thought-out hookup attempts – after all, he was just at Allen’s wedding, and the man was besotted.
It means he goes into this whole school thing unsuspicious.
Mick really ought to have known better than that.
The teaching itself goes great – he and Barry stay up late a few days with a pack of beer that seriously affects neither of them and they hammer out a curriculum of stuff that people pretending to be normal people should know, like basic social skill rules (when they ask “what’s up” or “how are you”, the answer is “good” even if it really isn’t), rules of the road (stop at stop signs when driving, you let the people in the train get out before you get in, and don’t hog the whole damn sidewalk when you’re with a group), and miscellaneous stuff (don’t put dish detergent in your washing machine, always tip hotel people and waiters if you’re in America and check otherwise, ask before petting the dog and never if they have a sign indicating they’re working, etc.).
Actually teaching the class itself...that’s fun, too. Mick’s never been up-to-date on his mythology and folklore, but he gets a crash course in a whole bunch of different types of magical beasties and their myriad likes and dislikes, and also how to deal with deflecting attention about them in the modern day.
Burned by silver? Say you’re allergic, people will be sympathetic.
Carnivore species? Say you’re anemic and need to stock up your iron, and anyway you’ve always hated [insert vegetable here] ever since you were a kid.
Otherwise limited ability to eat various food? You’re on the new [make up name here] diet and you can’t eat any of this, sorry.
Unable to stand daylight? You’re a computer programmer who keeps weird hours.
Can’t conduct electricity sufficiently to use touchscreens? They make touchscreen-friendly gloves now; get a pair of those and bitch about your “unusually dry skin” the rest of the time.
You’re a persnickety fucking fairy that can’t sign off on anything without reading the fine print? Congrats, you’re a lawyer.
Unbearable desire to count things? You have OCD.
Can’t pass running water without being shipped in a box with earth? Take a potted plant with you and travel via a subway car; that’s box-like enough.
In other words, Mick likes it. He likes teaching, he likes the school, he likes the students - damnit, he likes it.
He even likes the idea of introducing all these magical creatures back into the world.
Sure, the students sometimes try to kill him and Barry, their nature being what it is, but really, that's just a good reminder to keep them on their toes.
And working with Barry, that’s fun, too. He’s more sharp-tongued and cynical than Mick remembered, and he’s clever and funny and he’s got a bone-deep optimism that’s been tempered but is still unshakeable. Honestly, all around, he’s just more mature than Mick recalls him being when they fought him or at the wedding or at the alien invasion – less prone to drama, more contemplative, and patient with problems.
Mick likes him.
He really likes him.
And he goes along thinking that it’s all well and good to have a crush on someone unavailable to keep him busy (what with Len’s proposed hook-up having yet to appear) right up until the moment when they’re working on grading late at night, laughing at some of the weirder answers (kitsune, man, they’re wild) and then suddenly Barry is reaching over and pulling Mick in and they’re kissing.
It’s very, very nice for the approximately fifteen seconds before Mick’s brain reboots.
Okay, yes, he still waits thirty seconds before breaking the kiss.
Mick’s never claimed to be a good man.
“Red,” he says gently.
“Did I misread this?” Barry asks. “I apologize if I did. I thought you were interested.”
“I am, you didn’t misread that. But for all the things I’ve done, I’m still not a home-wrecker.”
Barry frowns. “Beg pardon?”
“I don’t do infidelity,” Mick explains.
Barry just looks more confused. “Do you mean – I thought you and Len weren’t together?”
“We’re not!” Mick exclaims automatically.
“Then – who…?”
Mick frowns back at Barry. “Why do you think I’m talking about me? You’re the one who’s married. Iris West-Allen, remember? You only talk about her every ten minutes.”
“Only about as often as you talk about Len,” Barry points out, which is true but irrelevant.
“Well, yeah,” Mick says, “but unlike me and Len, I saw you marry her.”
“Well, yeah, and then divorce her.”
“What, seriously?!”
“Yeah,” Barry says, looking bemused. “Two years ago, now.”
“Two – you weren’t even married two years ago! You got married two months ago!”
More like seven months, but who was counting?
Unless...
“What year are you from?” they both demand at once.
Turns out Barry’s nearly nine years in Mick’s future.
No wonder he’s more mature.
He and Iris are still best friends, apparently; they’ve just fallen into more of a Mick-and-Len co-dependent dysfunctional assholes routine than a proper marriage, and anyway there’d been some complications with people coming back from the dead and Barry spending time in space and whatnot so they’d realized they’d be better friends when they weren’t married. After some heartbreak and routine-adjustment, Barry set out fully intent on dating again, but he's been running into the same problem as Mick: no one believes that he's not hung up on Iris because he still talks to her all the time, even though he really isn't.
For Barry’s part, he hasn’t seen Mick since Mick went off into the timeline.
And that means they’re potentially from the same timeline.
And, apparently, both single.
“Oh,” Mick says.
“Yeah,” Barry says.
“Huh,” Mick says.
“So...” Barry says.
“I’m going to kill Len,” Mick says conversationally.
“Why?”
“He set me up. He always sets me up. Except it never works!”
Barry frowns.
“Not you,” Mick assures him. “You work just fine.”
“Maybe he’s gotten better at it now that he’s, well, uh, Destiny of the Endless?”
“...maybe.”
“Definitely,” Len says, popping out from literally nowhere behind them. “You two could be great for each other. Even I can see it, and I’m blind!”
“Literally no one is ever going to buy that line from you ever again,” Mick says. “You have a giant glowing book containing everything ever.”
“Is this destined?” Barry asks. His eyes narrow. “Did you make sure it was?”
“No, of course not,” Len says briskly. “I believe in free will, I don’t read ahead for my friends – or enemies – because it’s no fun, and anyway, I’m the Reader of Destiny, not the maker of it. Your destiny is in your own hands. Lower case destiny, Mick, stop grabbing at me, I don’t care how good a pun it is.”
Mick sits back down.
Not his fault that some of Len’s awful sense of humor has stuck over the years.
“Besides, everyone in the school is betting on when you’ll hook up,” Len says unhelpfully.
“Including you, huh? Setting us up for a big payday?” Mick asks, mostly nostalgically. Len liked to do that sometimes when they were going somewhere new.
“No,” Len says, surprising him. Though all is explained when he adds, with a scowl, “None of ‘em will bet with me.”
“To be fair,” Barry says, barely hiding a smile. “Book, everything ever, kinda a gimme there.”
“Spoilsports, all of ‘em.”
“There, there,” Mick says unsympathetically. “You can always con the regular suckers.”
“Conning the regular suckers is boring.”
“Con the supervillains,” Barry suggests.
Len looks intrigued by that idea.
“Aren’t you not supposed to interfere or something?” Mick asks.
Len shrugs. “Destiny sometimes requires activity. Now, getting back to the more important part, kiss already.”
They both glance at each other, then glance at Len meaningfully.
“...what?”
“Go away, maybe?” Barry suggests.
“But you haven’t kissed yet.”
“Maybe we’re waiting for you to leave. Ever thought of that, genius?”
Len frowns. “But I put in all that work to get you two together! I deserve to see the payoff!”
“Boss. Go away.”
“But –”
“Boss. You promised me you’d get me laid. Stop getting in the way.”
Len departs, grumbling.
“You know he’s just going to read along, right?” Barry asks, his suppressed laughter bubbling through as he speaks.
“Yeah,” Mick says, “I know. But at least he’s not actually here while I do this.”
He pulls Barry in for another kiss, Barry smiling the whole while as he does.
Maybe this school thing wasn’t as bad an idea as all that.
“Professors, I have a question –” one of their ghost students asks, floating through the wall and freezing when they see what’s going on. “Never mind! I’ll just go now!”
And then they turn tail and dash out, shouting, “They’ve done it! It’s happened!”
“That,” Barry says, very steadily, “was Snart’s fault, wasn’t it?”
“Yep.”
“Not via his new Destiny powers.”
“Nope, no need. Probably just tipped off a student on his way out of here.”
“Iris would’ve done the same thing,” Barry observes.
Mick thinks back to his interactions with her. “Yeah. Probably.”
They share a look of perfect understanding. Platonic soulmates, what can you do - can't live with 'em, can't live without 'em.
“Wanna move this somewhere a bit more private than our offices?” Mick asks.
The world spins, lit up by sudden lightning, and they’re in Barry’s bedroom.
Mick grins. “I take that as a yes...”
26 notes · View notes
Text
talking out loud about my british novel class
OKAY SO. the class itself is a survey of 20th century British literature that fulfills one of their distribution requirements for the major. my angle is that we’re going to be reading British literary culture through the lens of “the bestseller” from the 1930s to present, focusing mostly on popular novels that have become cult classics or beloved favorites for various reasons. the novels span a range of popular genres (romance, thrillers, speculative fiction, some very light horror/suspense, and possibly crime fiction). secondary readings are going to be a nice blend of historical, theoretical, and (I think) reception or fan-related stuff.
I am going to take a first stab at articulating some of the course goals, questions, and outcomes below the cut!
I want them to get some experience working with popular reviews and essays by early and mid-20th century readers. We have access to a few different online archives (Times Literary Supplement, the Vogue archives, etc.). I also want them to read some recent scholarship about women’s magazines or “lifestyle magazines” and the role they played in promoting work by women writers, remediating high modernist work for a mainstream female readership, providing forums for women readers to discuss books or engage with authors, etc. We are also going to look at some scathing critiques of these cultures.
we are REALLY going to dig into the gender and class politics of reading in the British context, especially anxieties around popular culture, genre fiction, the bestseller, and the “feminine middlebrow.” We’re going to look at some of Nicola Humble’s scholarship on “middlebrow reading postures,” Christopher Hilliard’s work on the democratization of British culture in the 1930s-50s (the rise of amateur writing circles, adult education programs, book clubs, etc.), and essays by women writers about reading. I want them to understand that literary reputation and judgments about literary quality often tell us less about the work itself than about who made it and who read it (and also where they read it, why they picked it up, how they engaged with what they read, etc.). this is probably the main underlying theme of the class. another implicit theme is subject matter -- esp since a lot of the women writers we’re reading are using popular genre forms to explore issues connected to gender, sexuality, family responsibilities, power dynamics in relationships, etc.
I also want them to be able to connect some of this earlier stuff (on 1930s-50s reception, literary institutions, cultures of reading) to more contemporary cultures of reading. For instance, I’m trying to think about how I might incorporate stuff from GoodReads, Instagram, websites like The Toast and The Hairpin (RIP), and even fandom stuff into the readings or assignment options. i want to get them thinking about what middlebrow/popular cultures of reading look like in the present and how those platforms are being used to create new communities of popular readers (or sustain/renew interest in older works). I think one of the course’s underlying thematic threads is the idea that popular reading = social reading, ie reading in community or as part of a vibrant, geographically dispersed network of readers. which, if we go a step further, might also get us thinking about how these conceptions of reading as a social, networked activity rather than a solitary pursuit might complicate some of our traditional disciplinary conceptions about Literature, authorship and creative authority, the Isolated (Male) Genius, the Classic and the Canon, the role of the critic in interpreting texts, etc.
I feel like if I present it in the right way, we could also do some really cool reading/thinking around the history of English as a discipline (possibly connected to the politics of reading stuff). idk gotta be careful because they won’t all be English majors (and not everybody finds the meta-thinking-about-the-discipline stuff as EXHILARATING as I do), but I feel like there’s soooooooooo much rich subject matter here, and it could be a good way to connect it to their own histories as readers. in the past students have been really into discussing their own literary educations and reflecting on when/where they fell out of love with reading (especially if they were avid childhood readers who kinda fell off the wagon).
MY TRUE SECRET SUPERVILLAIN PLAN IS TO GET THEM TO GENUINELY ENJOY AT LEAST ONE NOVEL THEY READ!!!!!! IDEALLY MOST OF THE NOVELS THEY READ!!! I want to create lots of positive associations with reading and discussing books. I want the class to be super joyful, fun, exhilarating, and social. I want them to have a really good time reading novels, because 1) fostering genuine pleasure and delight in reading is the only way to create lifelong readers and 2) a strong base of positive associations encourages readers to take on things that are harder or out of their comfort zone. and I want to show them how some of the tools literary studies offers or the skills it cultivates can actually enhance and enrich their reading experiences, deepening the pleasure and delight they are able to take in engaging with books!!! WILL WE HAVE TIME FOR ALL THIS??? IS IT POSSIBLE??? WHO KNOWS!!! BUT LET’S TRY!!!!!!
3 notes · View notes
s-c-i-guy · 7 years
Photo
Tumblr media
Physicists Find a Way to See the ‘Grin’ of Quantum Gravity
A recently proposed experiment would confirm that gravity is a quantum force.
In 1935, when both quantum mechanics and Albert Einstein’s general theory of relativity were young, a little-known Soviet physicist named Matvei Bronstein, just 28 himself, made the first detailed study of the problem of reconciling the two in a quantum theory of gravity. This “possible theory of the world as a whole,” as Bronstein called it, would supplant Einstein’s classical description of gravity, which casts it as curves in the space-time continuum, and rewrite it in the same quantum language as the rest of physics.
Bronstein figured out how to describe gravity in terms of quantized particles, now called gravitons, but only when the force of gravity is weak — that is (in general relativity), when the space-time fabric is so weakly curved that it can be approximated as flat. When gravity is strong, “the situation is quite different,” he wrote. “Without a deep revision of classical notions, it seems hardly possible to extend the quantum theory of gravity also to this domain.”
His words were prophetic. Eighty-three years later, physicists are still trying to understand how space-time curvature emerges on macroscopic scales from a more fundamental, presumably quantum picture of gravity; it’s arguably the deepest question in physics. Perhaps, given the chance, the whip-smart Bronstein might have helped to speed things along. Aside from quantum gravity, he contributed to astrophysics and cosmology, semiconductor theory, and quantum electrodynamics, and he also wrote several science books for children, before being caught up in Stalin’s Great Purge and executed in 1938, at the age of 31.
The search for the full theory of quantum gravity has been stymied by the fact that gravity’s quantum properties never seem to manifest in actual experience. Physicists never get to see how Einstein’s description of the smooth space-time continuum, or Bronstein’s quantum approximation of it when it’s weakly curved, goes wrong.
The problem is gravity’s extreme weakness. Whereas the quantized particles that convey the strong, weak and electromagnetic forces are so powerful that they tightly bind matter into atoms, and can be studied in tabletop experiments, gravitons are individually so weak that laboratories have no hope of detecting them. To detect a graviton with high probability, a particle detector would have to be so huge and massive that it would collapse into a black hole. This weakness is why it takes an astronomical accumulation of mass to gravitationally influence other massive bodies, and why we only see gravity writ large.
Not only that, but the universe appears to be governed by a kind of cosmic censorship: Regions of extreme gravity — where space-time curves so sharply that Einstein’s equations malfunction and the true, quantum nature of gravity and space-time must be revealed — always hide behind the horizons of black holes.
“Even a few years ago it was a generic consensus that, most likely, it’s not even conceivably possible to measure quantization of the gravitational field in any way,” said Igor Pikovski, a theoretical physicist at Harvard University.
Now, a pair of papers recently published in Physical Review Letters has changed the calculus. The papers contend that it’s possible to access quantum gravity after all — while learning nothing about it. The papers, written by Sougato Bose at University College London and nine collaborators and by Chiara Marletto and Vlatko Vedral at the University of Oxford, propose a technically challenging, but feasible, tabletop experiment that could confirm that gravity is a quantum force like all the rest, without ever detecting a graviton. Miles Blencowe, a quantum physicist at Dartmouth College who was not involved in the work, said the experiment would detect a sure sign of otherwise invisible quantum gravity — the “grin of the Cheshire cat.”
Tumblr media
A levitating microdiamond (green dot) in Gavin Morley’s lab at the University of Warwick, in front of the lens used to trap the diamond with light.
The proposed experiment will determine whether two objects — Bose’s group plans to use a pair of microdiamonds — can become quantum-mechanically entangled with each other through their mutual gravitational attraction. Entanglement is a quantum phenomenon in which particles become inseparably entwined, sharing a single physical description that specifies their possible combined states. (The coexistence of different possible states, called a “superposition,” is the hallmark of quantum systems.) For example, an entangled pair of particles might exist in a superposition in which there’s a 50 percent chance that the “spin” of particle A points upward and B’s points downward, and a 50 percent chance of the reverse. There’s no telling in advance which outcome you’ll get when you measure the particles’ spin directions, but you can be sure they’ll point opposite ways.
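To make the spin example concrete (standard textbook notation, not something taken from the papers themselves), that shared two-particle state can be written as:

|ψ⟩ = (1/√2) ( |↑⟩_A |↓⟩_B + |↓⟩_A |↑⟩_B )

Each measurement of particle A comes up 50/50, but once A’s spin is known, B’s is guaranteed to point the opposite way.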
The authors argue that the two objects in their proposed experiment can become entangled with each other in this way only if the force that acts between them — in this case, gravity — is a quantum interaction, mediated by gravitons that can maintain quantum superpositions. “If you can do the experiment and you get entanglement, then according to those papers, you have to conclude that gravity is quantized,” Blencowe explained.
To Entangle a Diamond
Quantum gravity is so imperceptible that some researchers have questioned whether it even exists. The venerable mathematical physicist Freeman Dyson, 94, has argued since 2001 that the universe might sustain a kind of “dualistic” description, where “the gravitational field described by Einstein’s theory of general relativity is a purely classical field without any quantum behavior,” as he wrote that year in The New York Review of Books, even though all the matter within this smooth space-time continuum is quantized into particles that obey probabilistic rules.
Dyson, who helped develop quantum electrodynamics (the theory of interactions between matter and light) and is professor emeritus at the Institute for Advanced Study in Princeton, New Jersey, where he overlapped with Einstein, disagrees with the argument that quantum gravity is needed to describe the unreachable interiors of black holes. And he wonders whether detecting the hypothetical graviton might be impossible, even in principle. In that case, he argues, quantum gravity is metaphysical, rather than physics.
He is not the only skeptic. The renowned British physicist Sir Roger Penrose and, independently, the Hungarian researcher Lajos Diósi have hypothesized that space-time cannot maintain superpositions. They argue that its smooth, solid, fundamentally classical nature prevents it from curving in two different possible ways at once — and that its rigidity is exactly what causes superpositions of quantum systems like electrons and photons to collapse. This “gravitational decoherence,” in their view, gives rise to the single, rock-solid, classical reality experienced at macroscopic scales.
The ability to detect the “grin” of quantum gravity would seem to refute Dyson’s argument. It would also kill the gravitational decoherence theory, by showing that gravity and space-time do maintain quantum superpositions.
Bose’s and Marletto’s proposals appeared simultaneously mostly by chance, though experts said they reflect the zeitgeist. Experimental quantum physics labs around the world are putting ever-larger microscopic objects into quantum superpositions and streamlining protocols for testing whether two quantum systems are entangled. The proposed experiment will have to combine these procedures while requiring further improvements in scale and sensitivity; it could take a decade or more to pull it off. “But there are no physical roadblocks,” said Pikovski, who also studies how laboratory experiments might probe gravitational phenomena. “I think it’s challenging, but I don’t think it’s impossible.”
The plan is laid out in greater detail in the paper by Bose and co-authors — an Ocean’s Eleven cast of experts for different steps of the proposal. In his lab at the University of Warwick, for instance, co-author Gavin Morley is working on step one, attempting to put a microdiamond in a quantum superposition of two locations. To do this, he’ll embed a nitrogen atom in the microdiamond, next to a vacancy in the diamond’s structure, and zap it with a microwave pulse. An electron orbiting the nitrogen-vacancy system both absorbs the light and doesn’t, and the system enters a quantum superposition of two spin directions — up and down — like a spinning top that has some probability of spinning clockwise and some chance of spinning counterclockwise. The microdiamond, laden with this superposed spin, is subjected to a magnetic field, which makes up-spins move left while down-spins go right. The diamond itself therefore splits into a superposition of two trajectories.
In the full experiment, the researchers must do all this to two diamonds — a blue one and a red one, say — suspended next to each other inside an ultracold vacuum. When the trap holding them is switched off, the two microdiamonds, each in a superposition of two locations, fall vertically through the vacuum. As they fall, the diamonds feel each other’s gravity. But how strong is their gravitational attraction?
If gravity is a quantum interaction, then the answer is: It depends. Each component of the blue diamond’s superposition will experience a stronger or weaker gravitational attraction to the red diamond, depending on whether the latter is in the branch of its superposition that’s closer or farther away. And the gravity felt by each component of the red diamond’s superposition similarly depends on where the blue diamond is.
In each case, the different degrees of gravitational attraction affect the evolving components of the diamonds’ superpositions. The two diamonds become interdependent, meaning that their states can only be specified in combination — if this, then that — so that, in the end, the spin directions of their two nitrogen-vacancy systems will be correlated.
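Schematically — and this is my sketch of the general argument, not notation taken from the papers — if each diamond starts in an equal superposition of a left branch |L⟩ and a right branch |R⟩, the joint state after falling together for a time t looks like:

|Ψ⟩ = (1/2) [ e^(iφ_LL) |L⟩|L⟩ + e^(iφ_LR) |L⟩|R⟩ + e^(iφ_RL) |R⟩|L⟩ + e^(iφ_RR) |R⟩|R⟩ ], with φ_ij ≈ G m² t / (ħ d_ij)

where d_ij is the separation between branch i of one diamond and branch j of the other. Unless those four phases happen to cancel, the state no longer factors into (state of diamond A) × (state of diamond B) — which is exactly what it means for the diamonds to be entangled.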
Tumblr media
After the microdiamonds have fallen side by side for about three seconds — enough time to become entangled by each other’s gravity — they then pass through another magnetic field that brings the branches of each superposition back together. The last step of the experiment is an “entanglement witness” protocol developed by the Dutch physicist Barbara Terhal and others: The blue and red diamonds enter separate devices that measure the spin directions of their nitrogen-vacancy systems. (Measurement causes superpositions to collapse into definite states.) The two outcomes are then compared. By running the whole experiment over and over and comparing many pairs of spin measurements, the researchers can determine whether the spins of the two quantum systems are correlated with each other more often than a known upper bound for objects that aren’t quantum-mechanically entangled. In that case, it would follow that gravity does entangle the diamonds and can sustain superpositions.
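For a sense of what an “entanglement witness” means mathematically (a generic textbook construction, not necessarily the specific operator used in Terhal’s protocol): if |Φ⟩ is the maximally entangled state the experiment aims to produce, the operator

W = (1/2) I − |Φ⟩⟨Φ|

satisfies ⟨W⟩ ≥ 0 for every non-entangled state of the two spins, while the target state itself gives ⟨W⟩ = −1/2. So if the spin statistics accumulated over many runs yield ⟨W⟩ < 0, the diamonds must have been entangled.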
“What’s beautiful about the arguments is that you don’t really need to know what the quantum theory is, specifically,” Blencowe said. “All you have to say is there has to be some quantum aspect to this field that mediates the force between the two particles.”
Technical challenges abound. The largest object that’s been put in a superposition of two locations before is an 800-atom molecule. Each microdiamond contains more than 100 billion carbon atoms — enough to muster a sufficient gravitational force. Unearthing its quantum-mechanical character will require colder temperatures, a higher vacuum and finer control. “So much of the work is getting this initial superposition up and running,” said Peter Barker, a member of the experimental team based at UCL who is improving methods for laser-cooling and trapping the microdiamonds. If it can be done with one diamond, Bose added, “then two doesn’t make much of a difference.”
Why Gravity Is Unique
Quantum gravity researchers do not doubt that gravity is a quantum interaction, capable of inducing entanglement. Certainly, gravity is special in some ways, and there’s much to figure out about the origin of space and time, but quantum mechanics must be involved, they say. “It doesn’t really make much sense to try to have a theory in which the rest of physics is quantum and gravity is classical,” said Daniel Harlow, a quantum gravity researcher at the Massachusetts Institute of Technology. The theoretical arguments against mixed quantum-classical models are strong (though not conclusive).
On the other hand, theorists have been wrong before, Harlow noted: “So if you can check, why not? If that will shut up these people” — meaning people who question gravity’s quantumness — “that’s great.”
Dyson wrote in an email, after reading the PRL papers, “The proposed experiment is certainly of great interest and worth performing with real quantum systems.” However, he said the authors’ way of thinking about quantum fields differs from his. “It is not clear to me whether [the experiment] would settle the question whether quantum gravity exists,” he wrote. “The question that I have been asking, whether a single graviton is observable, is a different question and may turn out to have a different answer.”
In fact, the way Bose, Marletto and their co-authors think about quantized gravity derives from how Bronstein first conceived of it in 1935. (Dyson called Bronstein’s paper “a beautiful piece of work” that he had not seen before.) In particular, Bronstein showed that the weak gravity produced by a small mass can be approximated by Newton’s law of gravity. (This is the force that acts between the microdiamond superpositions.) According to Blencowe, weak quantized-gravity calculations haven’t been developed much, despite being arguably more physically relevant than the physics of black holes or the Big Bang. He hopes the new experimental proposal will spur theorists to find out whether there are any subtle corrections to the Newtonian approximation that future tabletop experiments might be able to probe.
Leonard Susskind, a prominent quantum gravity and string theorist at Stanford University, saw value in carrying out the proposed experiment because “it provides an observation of gravity in a new range of masses and distances.” But he and other researchers emphasized that microdiamonds cannot reveal anything about the full theory of quantum gravity or space-time. He and his colleagues want to understand what happens at the center of a black hole, and at the moment of the Big Bang.
Perhaps one clue as to why it is so much harder to quantize gravity than everything else is that other force fields in nature exhibit a feature called “locality”: The quantum particles in one region of the field (photons in the electromagnetic field, for instance) are “independent of the physical entities in some other region of space,” said Mark Van Raamsdonk, a quantum gravity theorist at the University of British Columbia. But “there’s at least a bunch of theoretical evidence that that’s not how gravity works.”
In the best toy models of quantum gravity (which have space-time geometries that are simpler than those of the real universe), it isn’t possible to assume that the bendy space-time fabric subdivides into independent 3-D pieces, Van Raamsdonk said. Instead, modern theory suggests that the underlying, fundamental constituents of space “are organized more in a 2-D way.” The space-time fabric might be like a hologram, or a video game: “Even though the picture is three-dimensional, the information is stored in some two-dimensional computer chip,” he said. In that case, the 3-D world is illusory in the sense that different parts of it aren’t all that independent. In the video-game analogy, a handful of bits stored in the 2-D chip might encode global features of the game’s universe.
The distinction matters when you try to construct a quantum theory of gravity. The usual approach to quantizing something is to identify its independent parts — particles, say — and then apply quantum mechanics to them. But if you don’t identify the correct constituents, you get the wrong equations. Directly quantizing 3-D space, as Bronstein did, works to some extent for weak gravity, but the method fails when space-time is highly curved.
Witnessing the “grin” of quantum gravity would help motivate these abstract lines of reasoning, some experts said. After all, even the most sensible theoretical arguments for the existence of quantum gravity lack the gravitas of experimental facts. When Van Raamsdonk explains his research in a colloquium or conversation, he said, he usually has to start by saying that gravity needs to be reconciled with quantum mechanics because the classical space-time description fails for black holes and the Big Bang, and in thought experiments about particles colliding at unreachably high energies. “But if you could just do this simple experiment and get the result that shows you that the gravitational field was actually in a superposition,” he said, then the reason the classical description falls short would be self-evident: “Because there’s this experiment that suggests gravity is quantum.”
49 notes · View notes
tomfooleryprime · 7 years
Photo
Tumblr media Tumblr media Tumblr media Tumblr media
Starfleet’s moral relativism problem: is it ever okay to condemn another culture?
Central to all of Star Trek has always been the Prime Directive – that set of rules that governs our intrepid space explorers from Captain Kirk to Captain Janeway and everyone in between. Poor Captain Archer existed in a time before, and I’ve often pitied him for having to shoulder the burden of having to make some really questionable ethical decisions without having a Prime Directive to shift the blame to when it turned out his decisions really sucked.
At its core, the Prime Directive dictates that Starfleet cannot interfere with the internal affairs or development of alien civilizations. Some of the best Star Trek episodes involved our heroes clashing with the ethics of a rigid application of this doctrine, but there was always one implication of the Prime Directive that bothered me – the idea that we shouldn’t judge other cultures through the lens of our own because who’s to say what’s right and what’s wrong?
This philosophy of moral relativism argues that there are no universal moral standards – sentient beings are completely at the mercy of their own societies to impart a code of moral behavior, and whatever those societies come up with is “good enough.” There may be common themes among many societies in terms of morals – most seem to agree it is wrong to commit murder, for instance – but ultimately, what is “right” according to one society is not guaranteed to be “right” for another. And let’s be honest with ourselves – even with the topic of murder, we still fiercely debate exceptions to the “no murder” rule such as war, capital punishment, or self-defense, not to mention we have heated arguments over what even constitutes murder when we discuss issues of abortion or animal agriculture.
Our own society provides an incredible patchwork of thorny moral and ethical issues that we still have yet to decide upon. We debate things like abortion, torture, slavery, free speech, and more. We probe these issues by asking ourselves questions like, “At what point does life truly begin?” and “Is torture ever justified?” We explore them by posing philosophical experiments like the Trolley Problem and asking ourselves whether it is morally acceptable to kill one person to save the lives of two or more others.
But at the end of the day, might (in terms of numbers) makes right in moral relativism. While I don’t subscribe to that theory, there are times when our beloved Star Trek characters do under the guise of defending the Prime Directive. On the surface, it sounds very peaceful and anti-colonialist. After centuries of watching many empires from the Romans to the British set fire to cultural diversity – and given arguments that many Western nations continue to do this today, just without being quite as invadey – this sounds like a nice change of pace. Live and let live. But this also creates a mind-boggling acceptance of suffering, genocide, exploitation, and oppression within Starfleet.
One of the first chronological examples of the faults of moral relativism is found in the Star Trek: Enterprise episode, “Cogenitor.” Archer and his crew meet an affable, three-gendered species called the Vissians, but we quickly learn that only two of the society’s genders have any real rights. The third gender is referred to as a “cogenitor,” and Trip Tucker ends up on Captain Archer’s shit list for teaching it how to read and putting ideas in its head. When the cogenitor later begs for asylum, Archer refuses. It gets worse – the cogenitor is sent back to the people who basically treat it as chattel and commits suicide, and Archer points out that Tucker’s interference led to its death and will mean the Vissian couple will probably never get to have a child. No winners in this ethical dilemma of an episode, only losers. Until you remember none of this would have happened in the first place if the Vissians had just treated the cogenitors like people.
In the Star Trek: The Next Generation episode, “Angel One,” we encounter the cringeworthy society of Angel I, a planet of misandric women who oppress men. We all got a few giggles at the ladies of Enterprise-D being suddenly held in higher regard than their male counterparts, but things get very dark when Beata, the Elected One of Angel I, decides some dudes need to die for spreading heretical teachings that imply men are equal to women. We get a sort of cop-out solution in which Beata has a change of heart and decides to banish rather than execute these “heretics” after Riker makes an impassioned speech about basic rights, but Riker was more than willing to let things go bad if need be, because, “The Prime Directive” and “Just because I don’t like it doesn’t mean it’s wrong.”

In another Star Trek: The Next Generation episode, “Symbiosis,” we’re introduced to the Ornarans and Brekkians and we find out that after an ancient plague, the Brekkians started peddling an expensive and addictive drug to the Ornarans and calling it a “treatment.” There’s no plague anymore – the Brekkians just control the Ornarans through their drug addiction. Dr. Crusher finds a way to synthesize this drug and offers to help wean the Ornarans off their addiction, but what does Captain Picard do? He tells her to mind her own damn business because it’s not the Federation’s place to tell the Brekkians that it’s wrong to deceive and enslave the Ornarans through an addictive drug.

And this is the most uncomfortable part of moral relativism – who gets to draw the line and where do we draw it? On one end of the spectrum, we have moral relativism, which claims anything goes – societies should be able to torture animals, employ the slave labor of children, and oppress women as they see fit – just as long as enough people agree it isn’t wrong to do so. At the other end of the spectrum sits moral absolutism, a theoretical construct that would result in a perfectly unified, homogenous culture, but one that would also strip away many facets of culture that lead to human diversity.
If Star Trek is supposed to serve as a guide for how we might become a more progressive society, it does a terrible job a lot of the time. Now, there are many instances of our protagonists saying “to hell with the Prime Directive!” and taking what most of us would agree is the more morally praiseworthy route. But there’s no rhyme or reason to it. Just look at how they treat the Borg. Why is it okay to let some societies oppress men or drug another species into submission but it’s not okay to let the Borg assimilate the galaxy in their ultimate quest for perfection?
I’m going to guess the answer is that until the Borg decided to stick nanoprobes in a Federation citizen, the cheerful little robots simply weren’t the Federation’s problem. We might argue that the Prime Directive certainly has provisions for self-defense — how ridiculous would it be to consent to being annihilated or assimilated just because the Federation is afraid of offending another culture and refuses to draw a line in the sand where right stops and wrong starts? The slope gets slippery here though.
We could say this mirrors the concept of large Western nations trying to police the rest of the world and impose their customs on other societies – but how many of us watched documentaries about the Holocaust in school and wondered why the hell previous generations allowed shit to get that bad? How many of us continue to stand by while people in Iraq and Syria live under the threat of the Islamic State? I doubt most people even realize what’s going on in the Philippines or Venezuela right now because hey, “Not my country, not my problem.”
There is a huge gray area between forcing certain customs on unwilling societies and genuinely trying to help people, but if we can’t agree that Nazi extermination camps and religiously motivated beheadings are bad and need to stop (even when they aren’t happening to us personally), I’ll be surprised if we ever make it to the 24th century. It makes me wonder how exactly Earth “solved its problems” and created a utopian society in the first place with this attitude of moral relativism.
Let’s face it – we have no shortage of modern travesties that sound ridiculous in the context of this philosophical approach. The Chechen Republic has been reportedly rounding up gay men and torturing them in recent months, and moral relativism would have us shrug and say, “But their culture says homosexuality is a sin.”
Bacha bazi, a practice where adolescent boys are groomed for sexual relationships with older men, remains pervasive in many Pashtun societies. Moral relativism would tell us that we shouldn’t condemn predatory pedophilia because to do so would mean unfairly imposing our Western beliefs on their culture.
I could go on, but this post is already long enough. The bottom line is, all too often, Star Trek lazily glosses over a lot of moral and ethical dilemmas by using the argument, “Who are we to judge?” June is Pride Month, and in honor of LGBT individuals all over the globe who all too often have fewer rights than their cisgender heterosexual counterparts, maybe we should avoid looking to the “progressive” future of Star Trek and instead ask the question, “Who are we to not judge?”
While I can’t resolve one of the greatest philosophical questions ever devised, someone once gave me a great piece of advice that I think applies to this idea of moral relativism: no person’s belief is inherently worthy of respect, but every person is.
778 notes · View notes
edwardsg491-blog · 5 years
Text
Ethical Issues with Autonomous Cars (02/03)
Note: Date formatted as MM/DD
This is the second of two professor-led, in-class discussions for this semester. Beginning the following week, we will have student-led discussions where members of the senior seminar class form groups and guide their classmates through a discussion in whatever way they see fit. This week, we talked about ethical decision making regarding autonomous cars.
Most of our discussion was not about whether using the cars is ethical in the first place. At one point, when discussing responsibility and blame, we briefly noted that who was in control of the vehicle during an accident does seem to affect who may be deemed morally responsible, but we only mentioned comparing autonomous cars’ accuracy to that of human drivers. This is certainly a topic that deserves focus in circles discussing autonomous vehicles or the automation of transportation in general. Since we are really only on the cusp of this technology, there is a lot of attention and scrutiny surrounding accidents involving automated vehicles and mechanical failures. There are statistics from the manufacturers of these vehicles about incident rates, but even from just these few isolated incidents there is a lot to talk about. For this discussion, we continued under the assumption that these vehicles are significantly more accurate than human drivers.
In addition, we decidedly did not consider the ethical implications of the automation of vehicles leading to potentially new forms of terrorism, like car hacking. I’m glad we didn’t consider this, given how tough it was for us to try to answer these other questions already. But this is certainly a topic worth discussing, and I would hope that this discussion is being conducted in legislative and judicial circles. With every new technology comes the possibility of malevolent use. Was it morally right to introduce gunpowder to armed forces? What about the Manhattan Project? I can sense that this will be a recurring (perhaps unspoken) undercurrent throughout the discussions that follow this semester. We will be continually discussing new technologies and how they fit into society - which will always include considering the possibility of new kinds of crime.
Thus, the majority of our discussion centered around life-or-death incidents in which these programmed vehicles have to make ethical decisions that mirror questions that have circulated for centuries. For instance, in a situation where a car must swerve and the only options are to hit a small girl or an old woman, what should the car do? In my opinion, this discussion did not prove very fruitful. I do think that it is important to examine these philosophical questions through a modern lens in a classroom setting where the chance for multicultural perspective sharing is high. But I will not pretend that this discussion was anything more than a class about the Trolley Problem in a 21st-century disguise. Much of the reasoning provided was circular and repetitive because of uncertainty (and possibly even discomfort) with making decisions that amount to universal policies for handling difficult situations.
Another assumption the discussion operated under unknowingly, until the professor pointed it out, was that the AVs making these assessments and judgments actually have the technological capability to do so. Many of the situations we described involved split-second decisions, or decisions that need to be made far too fast for a rational human driver. To assess these situations and compute an ethical result in time to enact an action that would actually bring about that result would require some incredibly fast and robust technology. AVs have come a long way since their first introduction, but it is not at all clear that these vehicles have this capability just yet.
All in all, I think the majority of our discussion of these questions amounted to reiterated variations of “we should treat all beings equally” and “who is responsible for making these choices?”. It went unsaid, but I think we all agreed that the utilitarian approach is flawed here. I usually went a step further to present conflicting ethical considerations during our discussions of the questions. At one point we talked about using moral rules for the AV programming, and I countered that the universalization of most rules would likely lead to a logical contradiction. Take, for instance, the example from the reading that an autonomous vehicle would choose to hit a motorcyclist with their helmet on rather than one without a helmet. Should this rule always be applied, it would likely lead to motorcyclists no longer wearing helmets, which is more unsafe for everyone regardless of the situation. The same probably goes for pedestrians. One example we considered in class was a group of pedestrians violating the law by jaywalking: if the autonomous vehicle was programmed to minimize the loss of human life, the car might veer off to avoid them but hit someone who was not violating the law and was, perhaps, just waiting for a bus on the sidewalk. Rules like these (minimizing the loss of human lives) will likely lead to socio-behavioral effects for pedestrians as well, like traveling in groups, since they would theoretically share the road with these autonomous vehicles.
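To make that perverse-incentive point concrete, here is a minimal sketch (in Python) of the kind of naive “minimize expected harm” rule we were debating. None of it comes from the readings; the class names and survival probabilities are made up purely for illustration:

```python
# A deliberately naive utilitarian collision-choice rule, for illustration
# only. All names and probabilities here are hypothetical, not drawn from
# any real AV system or from the course readings.
from dataclasses import dataclass

@dataclass
class Target:
    description: str
    survival_prob: float  # estimated chance this person survives the impact

def expected_harm(target: Target) -> float:
    # Expected loss of life if the vehicle hits this target.
    return 1.0 - target.survival_prob

def choose_target(options: list[Target]) -> Target:
    # "Minimize expected harm" picks whoever is most likely to survive.
    return min(options, key=expected_harm)

helmeted = Target("motorcyclist with helmet", survival_prob=0.8)
bare = Target("motorcyclist without helmet", survival_prob=0.4)

print(choose_target([helmeted, bare]).description)
# -> "motorcyclist with helmet": the rule systematically targets the
# safety-conscious rider, which is exactly the universalization problem.
```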
One last thing that I would like to note (since this post is getting pretty long) is that I find it unrealistic to believe that any single body of people should be responsible for these technologies. No matter whom one considers, there are conflicting interests, varying moral implications, and a nonconsensual assignment of too much responsibility. For example, a business’s interests might not align with those of legislators: a business might consider the damage certain outcomes would do to its stock price too detrimental, but legislation might require an autonomous vehicle to preserve human life at all costs. Also, imagine a future where the programming of a vehicle’s “preferences” was done by the car’s owner. Now imagine the trauma introduced if that owner is involved in a crash. Does it seem like too much responsibility for a rational agent now?
The readings and preparation material from this round of discussion were:
Autonomous Driving: Technical, Legal and Social Aspects by Markus Maurer, Chris Gerdes, Barbara Lenz, and Hermann Winner, Chapter 4, “Why Ethics Matters for Autonomous Cars” (Patrick Lin)
A Moral Map for AI Cars by Amy Maxmen, nature.com
The social dilemma of autonomous vehicles by Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan
The Pacing Problem, the Collingridge Dilemma & Technological Determinism by Adam Thierer
The questions covered during the class discussion were:
Should autonomous vehicles (AVs) use utilitarian models that will sometimes require sacrificing the vehicle's passengers? Would you want to own such a vehicle? Would you ride in one operated by a car service (e.g. Uber)?
In situations where an accident is inevitable, what factors should an AV use, or not use, when targeting the accident victims?
Assuming AVs will contain utilitarian ethical models, should they be determined by the manufacturer? Legislation? Owner tunable?
Thierer says "We shape our tools and then our tools shape us." What does he mean? Can you think of examples that fit this statement?
0 notes
Text
The future of photography is code
What’s in a camera? A lens, a shutter, a light-sensitive surface and, increasingly, a set of highly sophisticated algorithms. While the physical components are still improving bit by bit, Google, Samsung and Apple are increasingly investing in (and showcasing) improvements wrought entirely from code. Computational photography is the only real battleground now.
The reason for this shift is pretty simple: Cameras can’t get too much better than they are right now, or at least not without some rather extreme shifts in how they work. Here’s how smartphone makers hit the wall on photography, and how they were forced to jump over it.
Not enough buckets
An image sensor one might find in a digital camera
The sensors in our smartphone cameras are truly amazing things. The work that’s been done by the likes of Sony, OmniVision, Samsung and others to design and fabricate tiny yet sensitive and versatile chips is really pretty mind-blowing. For a photographer who’s watched the evolution of digital photography from the early days, the level of quality these microscopic sensors deliver is nothing short of astonishing.
But there’s no Moore’s Law for those sensors. Or rather, just as Moore’s Law is now running into quantum limits at sub-10-nanometer levels, camera sensors hit physical limits much earlier. Think about light hitting the sensor as rain falling on a bunch of buckets; you can place bigger buckets, but there are fewer of them; you can put smaller ones, but they can’t catch as much each; you can make them square or stagger them or do all kinds of other tricks, but ultimately there are only so many raindrops and no amount of bucket-rearranging can change that.
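The bucket metaphor is really about photon shot noise, and a few lines of simulation show the wall directly. This is a quick sketch with made-up numbers (assuming NumPy); the only real physics in it is the standard square-root law, where signal-to-noise grows only as the square root of the photons collected:

```python
# Model photon capture as Poisson "raindrops in buckets" to show why a
# smaller pixel is inherently noisier. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
photons_per_um2 = 50  # hypothetical photon flux during one exposure

for name, area_um2 in [("large DSLR-ish pixel", 20.0),
                       ("small phone pixel", 1.0)]:
    counts = rng.poisson(photons_per_um2 * area_um2, size=100_000)
    snr = counts.mean() / counts.std()  # tends toward sqrt(mean count)
    print(f"{name}: ~{counts.mean():.0f} photons, SNR ~ {snr:.1f}")

# 20x the collecting area yields only ~4.5x the signal-to-noise, and no
# amount of bucket-rearranging changes the total number of raindrops.
```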
Sensors are getting better, yes, but not only is this pace too slow to keep consumers buying new phones year after year (imagine trying to sell a camera that’s 3 percent better), but phone manufacturers often use the same or similar camera stacks, so the improvements (like the recent switch to backside illumination) are shared amongst them. So no one is getting ahead on sensors alone.
Perhaps they could improve the lens? Not really. Lenses have arrived at a level of sophistication and perfection that is hard to improve on, especially at small scale. To say space is limited inside a smartphone’s camera stack is a major understatement — there’s hardly a square micron to spare. You might be able to improve them slightly as far as how much light passes through and how little distortion there is, but these are old problems that have been mostly optimized.
The only way to gather more light would be to increase the size of the lens, either by having it A: project outwards from the body; B: displace critical components within the body; or C: increase the thickness of the phone. Which of those options does Apple seem likely to find acceptable?
In retrospect it was inevitable that Apple (and Samsung, and Huawei, and others) would have to choose D: none of the above. If you can’t get more light, you just have to do more with the light you’ve got.
Isn’t all photography computational?
The broadest definition of computational photography includes just about any digital imaging at all. Unlike film, even the most basic digital camera requires computation to turn the light hitting the sensor into a usable image. And camera makers differ widely in the way they do this, producing different JPEG processing methods, RAW formats and color science.
For a long time there wasn’t much of interest on top of this basic layer, partly from a lack of processing power. Sure, there have been filters, and quick in-camera tweaks to improve contrast and color. But ultimately these just amount to automated dial-twiddling.
The first real computational photography features were arguably object identification and tracking for the purposes of autofocus. Face and eye tracking made it easier to capture people in complex lighting or poses, and object tracking made sports and action photography easier as the system adjusted its AF point to a target moving across the frame.
These were early examples of deriving metadata from the image and using it proactively, to improve that image or feed it forward to the next.
In DSLRs, autofocus accuracy and flexibility are marquee features, so this early use case made sense; but outside a few gimmicks, these “serious” cameras generally deployed computation in a fairly vanilla way. Faster image sensors meant faster sensor offloading and burst speeds, some extra cycles dedicated to color and detail preservation and so on. DSLRs weren’t being used for live video or augmented reality. And until fairly recently, the same was true of smartphone cameras, which were more like point and shoots than the all-purpose media tools we know them as today.
The limits of traditional imaging
Despite experimentation here and there and the occasional outlier, smartphone cameras are pretty much the same. They have to fit within a few millimeters of depth, which limits their optics to a few configurations. The size of the sensor is likewise limited — a DSLR might use an APS-C sensor 23 by 15 millimeters across, making an area of 345 mm2; the sensor in the iPhone XS, probably the largest and most advanced on the market right now, is 7 by 5.8 mm or so, for a total of 40.6 mm2.
Roughly speaking, it’s collecting an order of magnitude less light than a “normal” camera, but is expected to reconstruct a scene with roughly the same fidelity, colors and such — around the same number of megapixels, too. On its face this is sort of an impossible problem.
Improvements in the traditional sense help out — optical and electronic stabilization, for instance, make it possible to expose for longer without blurring, collecting more light. But these devices are still being asked to spin straw into gold.
Luckily, as I mentioned, everyone is pretty much in the same boat. Because of the fundamental limitations in play, there’s no way Apple or Samsung can reinvent the camera or come up with some crazy lens structure that puts them leagues ahead of the competition. They’ve all been given the same basic foundation.
All competition therefore comprises what these companies build on top of that foundation.
Image as stream
The key insight in computational photography is that an image coming from a digital camera’s sensor isn’t a snapshot, the way it is generally thought of. In traditional cameras the shutter opens and closes, exposing the light-sensitive medium for a fraction of a second. That’s not what digital cameras do, or at least not what they can do.
A camera’s sensor is constantly bombarded with light; rain is constantly falling on the field of buckets, to return to our metaphor, but when you’re not taking a picture, these buckets are bottomless and no one is checking their contents. But the rain is falling nevertheless.
To capture an image the camera system picks a point at which to start counting the raindrops, measuring the light that hits the sensor. Then it picks a point to stop. For the purposes of traditional photography, this enables nearly arbitrarily short shutter speeds, which isn’t much use to tiny sensors.
Why not just always be recording? Theoretically you could, but it would drain the battery and produce a lot of heat. Fortunately, in the last few years image processing chips have gotten efficient enough that they can, when the camera app is open, keep a certain duration of that stream — limited resolution captures of the last 60 frames, for instance. Sure, it costs a little battery, but it’s worth it.
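That “keep the last 60 frames” behavior is, in data-structure terms, just a ring buffer. Here is a minimal sketch of the idea; the FrameStream class and the eight-frame capture are hypothetical stand-ins, not any vendor’s actual pipeline:

```python
# Toy ring buffer of recent frames, like a camera app might keep while open.
from collections import deque

class FrameStream:
    def __init__(self, depth: int = 60):
        # A deque with maxlen silently evicts the oldest frame on overflow.
        self.frames = deque(maxlen=depth)

    def on_new_frame(self, frame) -> None:
        # Called continuously while the camera app is open.
        self.frames.append(frame)

    def capture_burst(self, n: int = 8):
        # A "shutter press" reaches back for the n most recent frames,
        # which a merge step (HDR, noise reduction) can then combine.
        return list(self.frames)[-n:]
```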
Access to the stream allows the camera to do all kinds of things. It adds context.
Context can mean a lot of things. It can be photographic elements like the lighting and distance to subject. But it can also be motion, objects, intention.
A simple example of context is what is commonly referred to as HDR, or high dynamic range imagery. This technique uses multiple images taken in a row with different exposures to more accurately capture areas of the image that might have been underexposed or overexposed in a single exposure. The context in this case is understanding which areas those are and how to intelligently combine the images together.
This can be accomplished with exposure bracketing, a very old photographic technique, but it can be accomplished instantly and without warning if the image stream is being manipulated to produce multiple exposure ranges all the time. That’s exactly what Google and Apple now do.
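A toy version of that merge, to make the idea concrete: scale each bracketed frame by its exposure time to estimate a common radiance, weight each pixel by how well-exposed it is, and average. Real pipelines also align the frames and use far more careful weighting, so this sketch (assuming NumPy, with luminance-only frames) is only the skeleton of the technique:

```python
# Naive HDR merge of exposure-bracketed frames. Illustrative only: real
# pipelines also align frames and handle full color, not just luminance.
import numpy as np

def merge_hdr(frames, exposure_times):
    """frames: list of float arrays scaled to [0, 1]; times: seconds."""
    acc = np.zeros_like(frames[0])
    weights = np.zeros_like(frames[0])
    for img, t in zip(frames, exposure_times):
        w = 1.0 - 2.0 * np.abs(img - 0.5)  # trust mid-tones, not clipped pixels
        acc += w * (img / t)               # scale to a common radiance estimate
        weights += w
    return acc / np.maximum(weights, 1e-6)
```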
Something more complex is of course the “portrait mode” and artificial background blur or bokeh that is becoming more and more common. Context here is not simply the distance of a face, but an understanding of what parts of the image constitute a particular physical object, and the exact contours of that object. This can be derived from motion in the stream, from stereo separation in multiple cameras, and from machine learning models that have been trained to identify and delineate human shapes.
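The shortcut version of portrait mode is easy to write down once that context exists: take the subject mask (from stereo separation or an ML segmenter, as above), blur the whole frame, and composite the sharp subject back on top. A minimal sketch, assuming the mask is already computed and using SciPy’s Gaussian filter as a stand-in for a real lens model; this is precisely the kind of shortcut contrasted further down with Apple’s photon-level simulation:

```python
# Cheap "portrait mode": blur everything, then paste the sharp subject
# back using a segmentation mask. The mask is assumed to be given (e.g.
# from a depth map or an ML model); the Gaussian blur is the shortcut,
# not a physically accurate bokeh.
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_bokeh(image: np.ndarray, subject_mask: np.ndarray,
               blur_sigma: float = 12.0) -> np.ndarray:
    """image: HxWx3 floats in [0, 1]; subject_mask: HxW floats in [0, 1]."""
    blurred = np.stack(
        [gaussian_filter(image[..., c], blur_sigma) for c in range(3)],
        axis=-1)
    m = subject_mask[..., None]  # broadcast the mask across color channels
    return m * image + (1.0 - m) * blurred
```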
These techniques are only possible, first, because the requisite imagery has been captured from the stream in the first place (an advance in image sensor and RAM speed), and second, because companies developed highly efficient algorithms to perform these calculations, trained on enormous data sets and immense amounts of computation time.
What’s important about these techniques, however, is not simply that they can be done, but that one company may do them better than the other. And this quality is entirely a function of the software engineering work and artistic oversight that goes into them.
DxOMark did a comparison of some early artificial bokeh systems; the results, however, were somewhat unsatisfying. It was less a question of which looked better, and more of whether they failed or succeeded in applying the effect. Computational photography is in such early days that it is enough for the feature to simply work to impress people. Like a dog walking on its hind legs, we are amazed that it occurs at all.
But Apple has pulled ahead with what some would say is an almost absurdly over-engineered solution to the bokeh problem. It didn’t just learn how to replicate the effect — it used the computing power it has at its disposal to create virtual physical models of the optical phenomenon that produces it. It’s like the difference between animating a bouncing ball and simulating realistic gravity and elastic material physics.
Why go to such lengths? Because Apple knows what is becoming clear to others: that it is absurd to worry about the limits of computational capability at all. There are limits to how well an optical phenomenon can be replicated if you are taking shortcuts like Gaussian blurring. There are no limits to how well it can be replicated if you simulate it at the level of the photon.
Similarly the idea of combining five, 10, or 100 images into a single HDR image seems absurd, but the truth is that in photography, more information is almost always better. If the cost of these computational acrobatics is negligible and the results measurable, why shouldn’t our devices be performing these calculations? In a few years they too will seem ordinary.
If the result is a better product, the computational power and engineering ability have been deployed with success; just as Leica or Canon might spend millions to eke fractional performance improvements out of a stable optical system like a $2,000 zoom lens, Apple and others are spending money where they can create value: not in glass, but in silicon.
Double vision
One trend that may appear to conflict with the computational photography narrative I’ve described is the advent of systems comprising multiple cameras.
This technique doesn’t add more light to the sensor — that would be prohibitively complex and expensive optically, and probably wouldn’t work anyway. But if you can free up a little space lengthwise (rather than depthwise, which we found impractical) you can put a whole separate camera right by the first that captures photos extremely similar to those taken by the first.
A mock-up of what a line of color iPhones could look like
Now, if all you want to do is re-enact Wayne’s World at an imperceptible scale (camera one, camera two… camera one, camera two…) that’s all you need. But no one actually wants to take two images simultaneously, a fraction of an inch apart.
These two cameras operate either independently (as wide-angle and zoom) or one is used to augment the other, forming a single system with multiple inputs.
The thing is that taking the data from one camera and using it to enhance the data from another is — you guessed it — extremely computationally intensive. It’s like the HDR problem of multiple exposures, except far more complex as the images aren’t taken with the same lens and sensor. It can be optimized, but that doesn’t make it easy.
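For a sense of what that second camera buys you, and why it costs so much computation: under an idealized pinhole model, the disparity between where the same point lands on the two sensors gives its depth. A minimal sketch with hypothetical numbers; a real pipeline first has to rectify both images and match features across mismatched lenses and sensors, which is where the heavy lifting lives:

```python
# Idealized pinhole stereo: depth from pixel disparity between two
# side-by-side cameras. Numbers are hypothetical, for illustration.
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    # Similar triangles: depth = f * B / d.
    return focal_px * baseline_m / disparity_px

# e.g. focal length 2800 px, 1 cm baseline, 40 px disparity -> 0.7 m
print(depth_from_disparity(2800, 0.01, 40))  # 0.7
```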
So although adding a second camera is indeed a way to improve the imaging system by physical means, the possibility only exists because of the state of computational photography. And it is the quality of that computational imagery that results in a better photograph — or doesn’t. The Light camera with its 16 sensors and lenses is an example of an ambitious effort that simply didn’t produce better images, though it was using established computational photography techniques to harvest and winnow an even larger collection of images.
Light and code
The future of photography is computational, not optical. This is a massive shift in paradigm and one that every company that makes or uses cameras is currently grappling with. There will be repercussions in traditional cameras like SLRs (rapidly giving way to mirrorless systems), in phones, in embedded devices and everywhere that light is captured and turned into images.
Sometimes this means that the cameras we hear about will be much the same as last year’s, as far as megapixel counts, ISO ranges, f-numbers and so on. That’s okay. With some exceptions these have gotten as good as we can reasonably expect them to be: Glass isn’t getting any clearer, and our vision isn’t getting any more acute. The way light moves through our devices and eyeballs isn’t likely to change much.
What those devices do with that light, however, is changing at an incredible rate. This will produce features that sound ridiculous, or pseudoscience babble on stage, or drained batteries. That’s okay, too. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved onto a new, non-physical “part” which nonetheless has a very important effect on the quality and even possibility of the images we take.
Read more: https://techcrunch.com
0 notes
Link
Most newspaper restaurant critics are best known to people in the region they write about. But when Jonathan Gold, the Pulitzer-winning food critic for the Los Angeles Times, died at the age of 57 on Saturday night, the outpouring of tributes that stretched well beyond LA’s borders made it clear that he was no ordinary restaurant critic.
Of course, Gold — who had been diagnosed with pancreatic cancer only weeks earlier — was well known to Angelenos. In addition to his writing in the LA Times, he’d written for LA Weekly and Gourmet, and was a regular on KCRW’s program Good Food. His anticipated annual map of 101 great LA restaurants was a fixture for the LA food scene, a guide for locals and newbies alike.
But Gold was beloved far beyond Los Angeles. That’s not to say he “transcended” LA; it’s more that he embodied LA, embedded himself in its culture and, as many people attested following the news of his death, epitomized what Angelenos love about their city:
A very sad day for LA as Jonathan Gold left us. He was the soul of this city and all of its amazing flavors. He was a personal friend and inspiration–there will never be another like him. My heart goes out to the Gold family with the millions of Angelenos who loved him. EG
— Eric Garcetti (@ericgarcetti) July 22, 2018
I have never been sadder. Jonathan Gold is gone.
— ruthreichl (@ruthreichl) July 22, 2018
“I write about taco stands and fancy French restaurants to try to get people less afraid of their neighbors and to live in their entire city instead of sticking to their one part of town.” — Jonathan Gold. RIP to a hero and a giant. I love this city because he did.
— Andy Greenwald (@andygreenwald) July 22, 2018
yesterday I had lunch alone at Zankou Chicken, where a Gold quote about the garlic sauce is posted on the wall, and was thinking about how Zankou and Jonathan Gold are exactly what LA means to me
— Molly Lambert (@mollylambert) July 22, 2018
Yet the praise went far beyond those who lived in Gold’s city, spilling over to other artists, writers, and critics who work in various media all over the country.
Gold’s evocative prose sparkled in ways that were enjoyable to read even if you were far from LA. NPR’s tribute, for instance, cited a passage from his writing in which he “described mole negro, a Mexican dish, as so dark that it seems to suck the light out of the airspace around it, spicy as a novella and bitter as tears.” Gold continued writing until just a few weeks ago, which means you can read, for instance, his June 15 review of the new Israeli restaurant Bavel, which contains passages like this:
You will be drinking salty island wines from Sardinia and the Canary Islands. Your date will barely hear you above the din. You will wonder whether there is a point to an old-fashioned made with lamb-fat-washed bourbon or a pisco sour with pink peppercorns, and you will decide that there might be. You will probably be having a very good time.
But another factor that extended Gold’s fame past LA’s borders was the release of the acclaimed 2015 documentary City of Gold. The New York Times critic A.O. Scott wrote that the movie “transcends its modest methods, largely because it connects Mr. Gold’s appealing personality with a passionate argument about the civic culture of Los Angeles and the place of food within it.”
City of Gold, directed by Laura Gabbert and currently streaming on Hulu, follows Gold as he drives his green pickup truck through his beloved Los Angeles, eating at a handful of hole-in-the-wall, strip mall restaurants that most people just blithely sail past, talking about his career and his approach to his work. It’s an illuminating portrait not just of a writer but of a city, and as Scott put it in his review, it is “worth attending to even if you think you have no interest in food, California or criticism.”
For a critic in any medium — even, say, a New York-based film critic like myself — City of Gold is also a kind of masterclass in the things that good critics do. As many noted over the weekend, a hallmark of Gold’s writing is that he wrote not just about eating, but also about culture and about being a person, and that’s what the film underlines well.
That’s why, watching City of Gold, I actually fist-pumped a few times, as the film pointed to a lot of what made Gold such an important critic. Two in particular stuck with me, qualities that good critics aspire to, no matter what they’re writing about.
City of Gold often shows visually, on a map, the LA neighborhood in which the restaurant Gold is about to visit is geographically placed. And — as a number of people note in the film — it shows how Gold’s work often helped Angelenos connect the seemingly disparate parts of their sprawling city.
“I’m trying to get people to be less afraid of their neighbors,” Gold said in a 2015 interview.
A prime example of this is Gold’s 1998 essay “The Year I Ate Pico Boulevard,” about his experiences eating his way down a main drag that cuts across LA. It’s an essay about food, but really it’s about the culture that gave rise to that food, and the ways the connecting flavors and experiences work as a cipher for the broader city and its history.
Jonathan Gold in City of Gold. Sundance Institute
Of course, restaurant criticism is one of the few areas of critique that is expressly tied to physical locations, and thus the restaurant critic’s “mapping” job is literal.
But it’s part of other critical pursuits as well. As a film critic, for instance, I partly think of my job as “mapping” the movie terrain for the reader. Different critics do this in different ways: some are better at drawing the map on the ground of film history, others through the politics of the industry, and others through the technical and theoretical aspects of it. Some, like me, like to figure out where the paths the film carves into the cultural landscape intersect with other regions, like literature and religion and philosophy.
All critics, though, do some mapping work, and you should walk away from a good piece of criticism understanding not just that a work of art exists, but where and how it exists. Critics are cartographers.
A distinction City of Gold makes is between the idea of writing about food and the idea of writing about eating. Gold wrote about eating.
That’s a small distinction that might not seem too important, but to the critic it’s everything. Critics can’t view things “objectively.” We’re humans. What we can do is pay very close attention to our experience with a film through the lens we bring to the table (or the screening room or the gallery or the concert), then articulate it as carefully as possible. When we’re successful, the reader feels freed to have their own experience with the film.
Gold was the living manifestation of this way of thinking about criticism. His writing, particularly in the latter part of his career, was often positive and devoted to a democratized range of restaurants. A good meal could be had anywhere, no matter the trappings. In the film, he remarks that he often went to a restaurant five times or more before writing about it, and that he doesn’t take notes because he wants to be able to absorb the experience.
Sundance Institute
Then, when he writes about it, it reads like poetry — full of descriptions that draw on a cross-pollinated blend of mediums and references. (Before he wrote about food, Gold wrote about music, and especially hip-hop.) It’s hard to evoke tastes and smells in words, but Gold pulled it off by appealing to all the senses.
Of course, what you got in his writing was Gold’s palate, not your own. But by putting words to his own experience, you got a taste of what he had experienced, and the urge to go try it for yourself. And that carried over for the experience of the chefs, too.
In the documentary, Roy Choi (of the BBQ taco truck Kogi and fast-food restaurant Locol) tries to explain: “The weird thing about my first interaction with Jonathan is he helped me figure out what I was trying to do. When he writes about me, he understands and is able to articulate the little kind of secret tangled webs I have inside that I’m trying to put out into the plate — he understands it. And I’ve never explained it to him.”
That’s what critics are after. It’s a glorious feeling to not just articulate one’s own experience, but help an artist put words to what they experienced as they made the work, too — whether it’s a movie or a painting or a song. A critic can’t read minds, but often artists aren’t able to fully explain what they’ve made, either. In an ideal situation, working in concert, criticism helps expand the art, and the art expands the critic, too.
Doing this requires a mastery of the critic’s own form, which is writing, and Gold was a master at this as well; in one of my favorite passages in the film, one commentator (performing an act of criticism, one might say) explains how Gold harnessed and used the second-person point of view — that is, addressing the reader as “you” — to make his writing even better. Reading Gold, you can hear a well-read mind in love with language turning over phrases till they sparkle.
So Gold’s work does exemplify the best of what criticism has to offer. (It’s no mistake that he is, to date, the first and only restaurant critic to receive the Pulitzer Prize for criticism.)
City of Gold helps show how and why that’s true — and why, in the wake of Gold’s passing, the world could use a lot more critics, eaters, and neighbors like him.
City of Gold is available to stream on Hulu and to digitally rent on Amazon, Vudu, YouTube, iTunes, and Google Play.
Original Source -> Jonathan Gold wrote about food, but his approach to criticism was universal
via The Conservative Brief
0 notes
annabelaplit · 8 years
Text
Reading a Childhood Book Series Through a Feminist Lens
I was going to write about Dickens, I swear. I woke up bright and early at 8am this morning with a plan to get up, write a blog post about the theme of Death and Rebirth in A Tale of Two Cities, do an assortment of other homework and then write scholarship essays. But before that I wanted to spend a little time reading a set of favorite childhood books called A Series of Unfortunate Events. A television show based on the series came out a few weeks ago, and I had a party with my friends where we watched the entire thing in a day. Afterwards I decided it might be fun to revisit the series again. So in my constantly-shrinking periods of free time I have slowly been working my way through the 13-book series.
Today I was reading Book 8 and my eyes looked over a passage that made me think of the entire series in a different light. So naturally I went back and skimmed the first 8 books, read books 9-13 in their entirety, watched applicable parts of the television show and read some literary criticism. Then I realized it was 10 pm and I had accomplished exactly 0 of my academic goals. I had, however, come up with a pretty viable theory for how all these works of Gothic children’s literature are actually feminist texts. So enjoy I guess? 
A bit of background, because you probably have no idea about the plot of any of these books. A Series of Unfortunate Events is written by a man named Daniel Handler, but its narrator is a persona called Lemony Snicket, who seems to be a secretive man with a tragic past and a dead lover named Beatrice, whom he constantly mourns. He is chronicling the lives of three orphans as they struggle to protect their late parents’ fortune and their own lives from a man named Count Olaf. Basically, they go to stay with a series of eccentric guardians or end up working and living in a series of eccentric places, and Count Olaf follows them, often in disguise with a bunch of henchmen, and concocts various schemes to get their fortune. Some main themes deal with critiquing adult authority and basic societal institutions, reckoning with the cynicism that comes with growing up, and exploring the ideas of morality and moral relativism. It’s peppered with all sorts of literary references, and it’s much darker than a typical children’s series. Overall it makes for a good read, even for an adult.
Anyway, the three orphans are named Violet, Klaus, and Sunny Baudelaire, and they are 14, 13, and an infant respectively. They all have their own special talents, which they repeatedly use to get out of all kinds of tricky scenarios. The youngest, Sunny, has unusually sharp teeth, which she uses to chop things, bite people, climb walls, and sword-fight. As the series progresses she also reveals a latent talent for cooking. Klaus, the middle child, is a voracious reader and researcher with an encyclopedia of knowledge in his head. And Violet, the eldest child, is a skilled inventor with a MacGyver-like knack for using household objects to escape dangerous situations.
I think that Handler’s choice to make Violet an inventor is really interesting in a feminist context. Female inventors, or women in any kind of engineering, scientific, or mechanical role, are less common, both in literature and in real life; the lack of women in STEM fields is a real and documented problem. But it is Violet, a girl, who has this talent, while the bookish skills more stereotypically associated with women go to her brother. Tison Pugh, the author of a literary article about the role of gender in A Series of Unfortunate Events, writes,
“Gender roles in the series are additionally undermined through the reversals of gendered norms that have already been reversed. Violet may be coded as somewhat masculine due to her inventing skills, and Klaus may be coded as somewhat feminine due to his inveterate reading, but their respective tendencies in regard to gendered activities do not limit their potential to act in new ways....Gendered categories are rendered meaningless for the Baudelaire children, who express the freedom and agency to strip themselves of the prescriptive cast of gender's historical enactments”
So Violet Baudelaire’s identity is primarily based on the fact that she is an inventor, not that she is a girl. However, most of the people around her, especially the adults, view her more in the context of her gender than in the context of her skills and talents. Characters that can be considered “friendly” towards the orphans exhibit a lot of subtle and sometimes blatantly sexist behavior towards her.
For instance, in Book #3, The Wide Window, Josephine Anwhistle, the new guardian of the Baudelaires, gives gifts to the children:
"For Violet," she said, "there is a lovely new doll with plenty of outfits for it to wear." Aunt Josephine reached inside and pulled out a plastic doll with a tiny mouth and wide, staring eyes. "Isn't she adorable? Her name is Pretty Penny."
 "Oh, thank you," said Violet, who at fourteen was too old for dolls and had never particularly liked dolls anyway. Forcing a smile on her face, she took Pretty Penny from Aunt Josephine and patted it on its little plastic head
Now, to a certain level, this incident can be seen as just disinterested parenting rather than a specific attack on gender. However, Josephine’s entire rationale for giving Violet this gift is that Violet is a girl and girls like dolls. She sees the 14-year-old in the context of her gender instead of seeing her for her talents and interests outside the scope of her gender.
In Book #7, The Vile Village, Count Olaf is trying to frame the orphans for murder, and he uses a hair ribbon as evidence:
“He reached into the pocket of his blazer and brought out a long pink ribbon decorated with plastic daisies. "I found this right outside Count Olaf's jail cell," he said. "It's a ribbon — the exact kind of ribbon that Violet Baudelaire uses to tie up her hair.”
The townspeople gasped, and Violet turned to see that the citizens of V.F.D. were looking at her with suspicion and fear, which are not pleasant ways to be looked at.
"That's not my ribbon!" Violet cried, taking her own hair ribbon of her pocket. "My hair ribbon is right here!"
"How can we tell?" an Elder asked with a frown. "All hair ribbons look alike."
"They don't look alike!" Klaus said. "The one found at the murder scene is fancy and pink. My sister prefers plain ribbons, and she hates the color pink!"
Here the townspeople of the Village of Fowl Devotees, the ostensible guardians of the children, decide that Violet is guilty based on the assumption that she would wear a fancy pink hairband because she is a girl. 
In Book #11, The Grim Grotto, Klaus helps a sea captain find the location of an important object. The excited captain explains, 
 “Aye! You're sensational! Aye! If you find me the sugar bowl, I'll allow you to marry Fiona!” 
“Stepfather!” Fiona cried, blushing behind her triangular glasses.
 “Don't worry,” the captain replied, “we'll find a husband for Violet, too! Aye! Perhaps we'll find your long-lost brother, Fiona! He's much older, of course, and he's been missing for years, but if Klaus can locate the sugar bowl he could probably find him! Aye! He's a charming man, so you'd probably fall in love with him, Violet, and then we could have a double wedding! Aye! Right here in the Main Hall of the Queequeg ! Aye! I would be happy to officiate! Aye! I have a bow tie I've been saving for a special occasion!” 
“Captain Widdershins,” Violet said, “let's try to stick to the subject of the sugar bowl.” She did not add that she was not interested in getting married for quite some time”
Captain Widdershins decides that marriage is a suitable reward for the Baudelaires’ help. First he thinks of marrying his stepdaughter to Klaus, and then of marrying his long-lost stepson to Violet. At this point the Captain has considerable knowledge of Violet’s personality and her passion for inventing, but he still thinks that she is more interested in love and marriage than anything else.
In Book #12, The Penultimate Peril, a mentor of the Baudelaires named Kit Snicket attempts to show the children how much they can accomplish:
"When your parents died," Kit said, "you were just a young girl, Violet. But you've matured. Those aren't the eyes of a young girl. They're the eyes of someone who has faced endless hardship. And look at you, Klaus. You have the look of an experienced researcher-not just the young reader who lost his parents in a fire. And Sunny, you're standing on your own two feet, and so many of your teeth are growing in that they don't appear to be of such unusual size, as they were when you were a baby. You're not children anymore, Baudelaires. You're volunteers, ready to face the challenges of a desperate and perplexing world”
I think here the sexism comes across pretty subtly. Kit is trying to talk about how all three of the children have changed as a result of what they have faced after the deaths of their parents. Sunny goes from a baby with unusually large teeth to an older child with more normal teeth. Klaus goes from a “young reader” to an “experienced researcher.” But Violet goes from a “young girl” to someone who has faced endless hardship. Both Sunny and Klaus experience change based on their talents: biting and reading. But Violet’s change is related to her femininity rather than her skills or talents. Arguably Kit is the best ally the orphans have in any of the novels, but she still falls prey to subtle gender stereotyping.
Probably the most glaring instance of sexism in the novels is in Book #2, The Reptile Room. The banker Mr. Poe, who is in charge of placing the orphans in the care of various guardians, discovers that Violet has picked a lock in order to discover vital information about how Count Olaf murdered the herpetologist Dr. Montgomery Montgomery:
"It was an emergency," Violet said calmly, "so I picked the lock."
"How did you do that?" Mr. Poe asked. "Nice girls shouldn't know how to do such things."
"My sister is a nice girl," Klaus said, "and she knows how to do all sorts of things.
"Roofik!" Sunny agreed.
I think this one is pretty self-explanatory.
This behavior isn’t entirely unique to adults. There are only two male characters in all 13 books who are approximately Violet’s age, and the connections she has with both of them are to some degree romantic. The only people who seem to view her completely outside of the context of her gender are her siblings. However, Violet’s romantic interests, most of the other child characters, and some of the adults recognize Violet’s innate inventing skills. They see her as being more than just a girl, and it is important that this way of looking at her is connected with the “good” people. Readers, especially impressionable children (including me at that age), see how it is right to value Violet as an inventor instead of as a girl.
If those who are friendly towards Violet can be said to sometimes value her gender over her talent and skills in other areas, those who are the enemies of the Baudelaires can be said to treat her far worse. They see her only in the context of her physical attractiveness rather than in the context of any of her other attributes.
Violet is pretty, and this is an attribute that Count Olaf and his various henchmen unerringly pay attention to throughout the series. Some examples:
In Book #1, The Bad Beginning, the children are put in the care of Count Olaf and his various evil henchmen:
“Nobody paid a bit of attention to the children, except for the bald man, who stopped and stared Violet in the eye.
 "You're a pretty one," he said, taking her face in his rough hands. "If I were you I would try not to anger Count Olaf, or he might wreck that pretty little face of yours." Violet shuddered, and the bald man gave a high-pitched giggle and left the room.”
Here an evil henchman pays attention to Violet over the other children only because she is attractive, and then he uses her attractiveness as the basis of a threat.
At one point in Book #1 Count Olaf asks the children to participate in a play he is producing,
"And what will I do?" Violet asked. "I am very handy with tools, so perhaps I could help you build the set."
 "Build the set? Heavens, no," Count Olaf said. "A pretty girl like you shouldn't be working backstage." 
 "But I'd like to," Violet said.
Here Count Olaf literally devalues Violet’s mechanical and technical skills in favor of her physical looks. 
At one point in The Bad Beginning, Count Olaf imprisons Sunny in a cage, and Violet gets captured by a hook-handed henchman while trying to rescue her:
" How pleasant that you could join us," the hook-handed man said in a sickly sweet voice. Violet immediately tried to scurry back down the rope, but Count Olaf's assistant was too quick for her. In one movement he hoisted her into the tower room and, with a flick of his hook, sent her rescue device clanging to the ground. Now Violet was as trapped as her sister. "I'm so glad you're here," the hook-handed man said. "I was just thinking how much I wanted to see your pretty face. Have a seat."
Having just seen Violet climb a 30-foot building using a homemade invention, this henchman’s first thought is to mention how he wanted to see Violet because she is attractive.
Things cool down with all the references to Violet’s looks until midway through the series. In Book #9, The Carnivorous Carnival, Count Olaf and his associates talk about which children they would most like to have survived the fire they set at a hospital:
"I hope it's Sunny," the hook-handed man said. "It was fun putting her in a cage, and I look forward to doing it again." 
"I myself hope it's Violet," Olaf said. "She's the prettiest."
Rather than any of her various other merits, the sole reason Count Olaf mentions wanting Violet to be alive is that she is “pretty.”
In Book #11, The Grim Grotto, the Baudelaires need to convince Count Olaf to help Sunny, who has been poisoned by a rare fungus:
Sunny coughed inside her helmet, and Violet thought quickly. “If you let us help our sister,” she said, “we'll tell you where the sugar bowl is.” 
Count Olaf's eyes narrowed, and he gave the children a wide, toothy grin the two Baudelaires remembered from so many of their troubled times. His eyes shone brightly, as if he were telling a joke as nasty as his unbrushed teeth.
 “You can't try that trick again,” he sneered. “I'm not going to bargain with an orphan, no matter how pretty she may be. Once you get to the brig, you'll reveal where the sugar bowl is – once my henchman gets his hands on you. Or should I say hooks? Hee hee torture!”
Here Count Olaf feels the need to slip in the fact that Violet’s looks play a role in this bargaining process. 
But for me the passage which illustrates this phenomenon best comes in Book #8, The Hostile Hospital. This was the passage that got me started on this whole 18-hour researching and blog post project. The Hostile Hospital was the first of A Series of Unfortunate Events that I read, way back in the 3rd grade. Something about it stuck out to me then, and today I realized what that was. Its background is that Count Olaf has captured Violet and is going to basically saw off her head but make it look like a surgical procedure. Klaus and Sunny have disguised themselves as evil henchmen nurses and are attempting to find a way to break their sister out of the hospital.
The bald man took a key out of the pocket in his medical coat, and unlocked the door with a triumphant grin. "Here she is," he said. "Our little sleeping beauty." 
 The door opened with a long, whiny creak, and the children stepped inside the room, which was square and small and had heavy shades over the windows, making it quite dark inside. But even in the dim light the children could see their sister, and they almost gasped at how dreadful she looked.
 When the bald associate had mentioned a sleeping beauty, he was referring to a fairy tale that you have probably heard one thousand times. Like all fairy tales, the story of Sleeping Beauty begins with "Once upon a time," and continues with a foolish young princess who makes a witch very angry, and then takes a nap until her boyfriend wakes her up with a kiss and insists on getting married, at which point the story ends with the phrase "happily ever after." The story is usually illustrated with fancy drawings of the napping princess, who always looks very glamorous and elegant, with her hair neatly combed and a long silk gown keeping her comfortable as she snores away for years and years. But when Klaus and Sunny saw Violet in Room 922, it looked nothing like a fairy tale. 
 The eldest Baudelaire was lying on a gurney, which is a metal bed with wheels, used in hospitals to move patients around. This particular gurney was as rusty as the knife Klaus was holding, and its sheets were ripped and soiled. Olaf's associates had put her into a white gown as filthy as the sheets, and had twisted her legs together like vines. Her hair had been messily thrown over her eyes so that no one would recognize her face from The Daily Punctilio, and her arms hung loosely from her body, one of them almost touching the floor of the room with one limp finger. Her face was pale, as pale and empty as the surface of the moon, and her mouth was open slightly in a vacant frown, as if she were dreaming of being pricked with a pin. Violet looked like she had dropped onto the gurney from a great height, and if it were not for the slow and steady rise of her chest as she breathed, it would have looked like she had not survived the fall. Klaus and Sunny looked at her in horrified silence, trying not to cry as they gazed at their helpless sister.
"She's a pretty one," the hook-handed man said, "even when she's unconscious." 
 "She's clever, too," the bald man said, "although her clever little brain won't do her any good when her head has been sawed off." ....
 Although her siblings preferred to think about her inventing abilities and conversational skills rather than her physical appearance, it was true, as the hook-handed man had said, that Violet was a pretty one, and if her hair had been neatly combed, instead of all tangled up, and she had been dressed in something elegant and glamorous, instead of a stained gown, she might indeed have looked like an illustration from "Sleeping Beauty." 
There is a lot to take in here. Handler does a really great job subtly deconstructing the fairy tale of Sleeping Beauty and making it look stupid. He does this first by diminishing the action to things like taking a nap, and by suggesting that getting married doesn’t necessarily equate to a happy ending. The idea of a sleeping beauty is then contrasted with Violet’s decrepit treatment and quite sad appearance as she lies unconscious on the gurney. The two henchmen talk about her attributes, focusing primarily on her natural beauty as opposed to her “cleverness.” But the most important part of the scene might be that last paragraph, where Snicket talks about Violet’s beauty but not before mentioning that “her siblings preferred to think about her inventing abilities and conversational skills rather than her appearance.” It is an unobstructed fact that Violet is beautiful, but only the “bad” characters focus on her beauty, while the “good” characters think of her outside of this context. It is a really subtle morality lesson, saying that it is correct to think of girls outside the scope of traditional fairy tales, and it is a lesson that personally resonated with me after reading the book for the first time.
There is an elephant in the room regarding the way in which Violet is viewed by her enemies. An analysis can’t be complete without referring to the fact that Count Olaf tried to marry Violet, who, I should remind you, was 14 throughout The Bad Beginning and most of the series. This almost-marriage was solely a convoluted way for Count Olaf to get his hands on the Baudelaire fortune, and it seems a lot less strange when you read the actual books; the whole plot is foiled with some hand-signing shenanigans. But the whole thing is still quite weird, and it has some strange implications, as well as spawning some more commentary on Violet’s character in relation to her gender and appearance.
The reason that Count Olaf imprisoned Sunny in a cage above his house was to force Violet to participate in a legitimate marriage ceremony embedded in a play he was putting on. 
"Come now," Count Olaf said, his voice faking—a word which here means “feigning"— kindness. He reached out a hand and stroked Violet's hair. "Would it be so terrible to be my bride, to live in my house for the rest of your life? You're such a lovely girl, after the marriage I wouldn't dispose of you like your brother and sister. 
Violet imagined... wandering around the house, trying to avoid [Count Olaf] all day, and cooking for his terrible friends at night, perhaps every night, for the rest of her life. But then she looked up at her helpless sister and knew what her answer must be. "If you let Sunny go," she said finally, "I will marry you."
Okay, this is really, really creepy, especially for a children’s novel. But it is also interesting to note how Violet’s physical appearance is the sole reason Count Olaf seems inclined to treat her with a modicum of kindness after their theoretical marriage. Violet’s view of what marriage with the villain would be like also seems to involve being a sort of housewife who cooks dinner for the Count’s henchpeople every night. Her life with him would have nothing to do with her actual talents and everything to do with her attractiveness and traditional gender roles.
There is also a really important interaction between Violet, Count Olaf, and the hook-handed man right after she gets caught trying to break Sunny out of her cage.
“ The hook-handed man reached into a pocket of his greasy overcoat and pulled out a walkie-talkie. With some difficulty, he pressed a button and waited a moment. "Boss, it's me," he said. "Your blushing bride just climbed up here to try and rescue the biting brat." 
He paused as Count Olaf said something. "I don't know. With some sort of rope." 
 "It was a grappling hook," Violet said, and tore off a sleeve of her nightgown to make a bandage for her shoulder. "I made it myself." "
She says it was a grappling hook," the hook-handed man said into the walkie-talkie. "I don't know, boss. Yes, boss. Yes, boss, of course I understand she's yours. Yes, boss." He pressed a button to disconnect the line, and then turned to face Violet. "Count Olaf is very displeased with his bride. " 
 "I'm not his bride," Violet said bitterly. 
 "Very soon you will be," the hook-handed man said
Once again, this whole thing is pretty creepy and weird, especially without the context of the whole novel. But throughout this passage Violet is referred to by the bad guys in the context of her role as a future wife, even as they discuss what she has done in her role as an inventor. There is also the whole matter of Violet “belonging” to Count Olaf, which is sexist in a pretty blatant way.
So overall, the fact that Violet’s appearance and conformity to traditional gender roles are granted such importance by the bad characters and are neglected by the good characters signals to readers that that sort of behavior is bad in general. It teaches them that girls shouldn’t be thought of as just “pretty” and future wives; they can be inventors, or researchers, or poets, or spies.
Okay, I’m not quite done yet. I want to talk about Violet’s “foil” in this story, a girl named Carmelita Spats. We are first introduced to Carmelita in Book #5, and she comes back in Book #10, Book #11, and Book #12. The first sentence of Book #5, The Austere Academy, is:
“If you were going to give a gold medal to the least delightful person on Earth, you would have to give that medal to a person named Carmelita Spats, and if you didn't give it to her, Carmelita Spats was the sort of person who would snatch it from your hands anyway. Carmelita Spats was rude, she was violent, and she was filthy, and it is really a shame that I must describe her to you, because there are enough ghastly and distressing things in this story without even mentioning such an unpleasant person”
So basically Carmelita Spats is THE WORST PERSON EVER. But besides being rude and greedy and doing a lot of mean stuff to people, Carmelita is quite obsessed with her looks and with fitting into traditional gender roles. In The Austere Academy she informs the Baudelaires:
"I have a message for you from Coach Genghis. I get to be his Special Messenger because I'm the cutest, prettiest, nicest girl in the whole school”
Notice how two of the three adjectives she uses to describe herself deal with her physical attractiveness. This self-obsession is only heightened later on. In Book #10, The Slippery Slope, Carmelita relates to the orphans a truly dull and awful story:
"Once upon a time, I woke up and looked in the mirror, and there I saw the prettiest, smartest, most darling girl in the whole wide world. I put on a lovely pink dress to make myself look even prettier, and I skipped off to school where my teacher told me I looked more adorable than anyone she had ever seen in her entire life, and she gave me a lollipop as a special present"
Look at how Carmelita thinks of herself: she mentions once that she is smart, but she mostly talks about how attractive she is, how she is the “prettiest,” the “most darling,” “adorable.” She gets rewarded by a teacher for being beautiful instead of being smart. She also talks about wearing a pink dress to become more attractive, something that fits in with traditional gender stereotypes and can be seen as the opposite of Violet with her plain hair ribbon.
Later on in The Slippery Slope, Count Olaf’s girlfriend Esmé Squalor asks Carmelita to join their band of villains. Her sales pitch?
"I think you're adorable, beautiful, cute, dainty, eye-pleasing, flawless, gorgeous, harmonious, impeccable, jaw-droppingly adorable, keen, luscious, magnificent, nifty, obviously adorable, photogenic, quite adorable, ravishing, splendid, thin, undeformed, very adorable, well-proportioned, xylophone, yummy, and zestfully adorable," Esmé pledged, "every morning, every afternoon, every night, and all day long!"
Almost every single one of those compliments has to do with Carmelita’s looks. There is barely any mention of any other skills or talents Carmelita might have. With references to being “thin” and “well-proportioned,” the idea that beautiful women ought to be skinny is also reinforced. Carmelita is entirely defined by her femininity, while Violet’s status as a girl is tangential to her personality.
When we meet up with Carmelita in Book #11, The Grim Grotto, her reliance on traditional gender roles is even more pronounced:
“Carmelita had always been the sort of unpleasant person who believed that she was prettier and smarter than everybody else, and Violet and Klaus saw instantly that she had become even more spoiled under the care of Olaf and Esmé. She was dressed in an outfit perhaps even more absurd than Esmé Squalor's, in different shades of pink so blinding that Violet and Klaus had to squint in order to look at her. Around her waist was a wide, frilly tutu, which is a skirt used during ballet performances, and on her head was an enormous pink crown decorated with light pink ribbons and dark pink flowers. She had two pink wings taped to her back, two pink hearts drawn on her cheeks, and two different pink shoes on each foot that made unpleasant slapping sounds as she walked. Around her neck was a stethoscope, such as doctors use, with pink puffballs pasted all over it, and in one hand she had a long pink wand with a bright pink star at the end of it. 
“Stop looking at my outfit!” she commanded the Baudelaires scornfully. “You're just jealous of me because I'm a tap-dancing ballerina fairy princess veterinarian!”
Carmelita has grown more spoiled, but she has also grown even more attached to feminine stereotypes. Look at her blindingly pink outfit, covered with hearts, ribbons, and flowers. Her career choices are ballerina, fairy, princess, and veterinarian, all of which are traditionally associated with females, except maybe veterinarian. Even then, her stethoscope is covered with “pink puffballs.” Her constant reminders of what a “typical girl” should look like directly contrast with how Violet acts. And since Carmelita is “the least delightful person on earth” and we know Violet is wonderful from spending 11 books with her, readers feel that Violet is the example they should follow.
The only other interesting thing to note about Carmelita Spats is what happens to her in Book #12, The Penultimate Peril. Violet sees her playing in a pool on the rooftop of a hotel.
The last time Violet had seen the unpleasant captain of this boat, she was dressed all in pink, and was announcing herself as a tap-dancing ballerina fairy princess veterinarian, but the eldest Baudelaire could hardly say whether being a ballplaying cowboy superhero soldier pirate was better or worse. 
 "Of course you are, darling," purred Esmé, and turned to Geraldine Julienne with a smile one mother might give another at a playground. "Carmelita has been a tomboy lately," she said, using an insulting term inflicted on girls whose behavior some people find unusual. 
 "I'm sure your daughter will grow out of it," Geraldine replied, who as usual was speaking into a microphone”
This time the lesson on gender stereotypes doesn’t come from Carmelita’s actions but from Snicket himself. Carmelita has changed from an incredibly feminine person to one with masculine interests. Rather than legitimately accepting her ward’s more non-traditional interests, Esmé insists that she is just going through a phase, and a local reporter insists that these interests will soon change. But Snicket calls out these adults and others who use the term tomboy. He calls the term “insulting” and describes it as being “inflicted” on girls. The message is clear: it’s perfectly acceptable if one wants to act outside the bounds of traditional gender stereotypes, and children shouldn’t be shamed for it.
A few other minor notes. Handler likes to comment on the gendered quality of words and phrases in various places in the series. At one point a female villain says that she prefers to use the term henchperson as opposed to henchman. And the stepdaughter of Captain Widdershins insists on inserting the phrase “or she” into the Captain’s personal motto of “He who hesitates is lost.” He also deconstructs more fairy tales, such as Cinderella and Little Red Riding Hood.
There are also other instances of the behaviors I have described here in the Netflix show based on the first four books of A Series of Unfortunate Events. The teleplays for that series were also written by Daniel Handler. In the episode “The Bad Beginning: Part Two,” one of Olaf’s henchpeople states:
“I just think, even in changing context, that marriage is an inherently patriarchal construction that is likely to further the hegemonic juggernaut that's problematizing a lot of genders”
This is intended as comic relief, but you can see some of Handler’s underlying messages about gender roles in the statement. There is also a truly creepy scene in that episode where Klaus insists, “You will never touch our fortune,” and Count Olaf replies, “Klaus, I’ll touch whatever I want,” and then squeezes Violet’s shoulder. It is just another instance of Violet’s physical features being valued over her mental ones.
Also, in the episode “The Reptile Room: Part Two,” the critique that Violet shouldn’t pick locks because she is a nice girl is stated by both Mr. Poe and Count Olaf on separate occasions. Count Olaf also insists that he is willing to settle for taking just Violet to Peru, where there are lax childcare laws, instead of all three siblings.
Alright, that is literally everything I can possibly think of to say about this book series from a feminist perspective. Through the views of different characters on Violet Baudelaire’s attributes, readers can understand how treating girls in certain ways is inappropriate. Girls are more than just their looks, they are more than traditional gender roles, and their identities based on their talents and skills are just as important as their gender identity. When you read A Series of Unfortunate Events you may think you are reading a children’s story about secret organizations and eccentric guardians, but you’re actually reading subtle feminist propaganda. As a huge fan of the books when I was young, it is invigorating to look at them through this lens, and I am glad I got to use my AP Lit skills for something I am so passionate about.
Also it’s 2:30 in the morning and I have literally spent 18 hours researching and writing this post. It’s time for bed yo!
0 notes
robininthelabyrinth · 7 years
Text
Fic: Interconnect (ao3 link) - Chapter 6
Fandom: Flash, DC Legends of Tomorrow
Pairing: Mick Rory/Leonard Snart
Summary: Fate has decided that Leonard Snart and Mick Rory are soulmates.
Yeah, okay, they’re good with that.
(for @coldwaveweek2017)
A/N: Instead of doing different fics for coldwave week, I decided to do one with multiple chapters, each based on the various days.
Chapter 6: Jealousy/Protectiveness
—————————————————————————————–
"How do you plead?" the court asks.
Mick glances at his lawyer, who nods.
"Not guilty, your Honor," Mick says. "By reason of curse."
He tries to sit down - his job's done at this point, unless the judge has any specific questions for him today - but the prosecutor, who'd been standing there looking smug, is squawking and the judge looks interested.
"Explain," the judge says.
"Your Honor," Mick's lawyer says, "the prosecution is correct that my client has a history of violence - specifically arson - and that the facts clearly show that he committed the actual act of murder here, but in the present instance, we're arguing that he couldn't help himself by reason of curse."
"There is no legal basis -" the prosecutor starts hotly, but the judge holds up a hand.
"What curse?"
"Soulmates, your Honor," Mick's lawyer says. "The individual in question was abusing my client's soulmate, causing him to react with excess violence."
"Soulmates," the judge echoes, frowning.
"There is some precedent, your Honor, albeit quite old," Mick's lawyer says. That's understating it - the cases they're submitting are over a hundred years old at least. "We'll be submitting them with our papers."
The judge is frowning, but he's also looking thoughtful. "Soulmates," he says again. "And his condition is certified?"
"Yes, your Honor. The certification was stamped and notarized by the hospital witch consultant that originally recognized the disorder upon his admission at age eleven -"
"How long ago was that?" the prosecutor snipes.
"- and again by the local hospital witch," Mick's lawyer continues, ignoring him, though he does add pointedly, "just last week."
Mick's moderately pleased that the restrictions on witchcraft in medical care have at least been lifted again, at least enough for the certification. Though getting and giving fortunes (and spells and curses) is still quasi-illegal...
"I'll accept it for now," the judge decides. "My sympathies to Mr. Rory. Is there anything else?"
"No, your Honor," both the prosecutor and Mick's lawyer say in unison, both rising to their feet for a moment to do so.
"Dismissed, then. I'll see you again in -" He checks his calendar. "Two weeks. Does 10:30 work for you?"
Again, a chorus of consent.
Mick walks out the side door, back to prison, but it's not long until his lawyer's scurrying out to see him. "Spoke with the prosecutor," he reports. "I think they'll give us a very favorable plea bargain, just to avoid the risk of creating new precedent that could be used by other cursed."
Mick nods. That'd been the plan all along. "And I won't have to testify?"
"For some reason," his lawyer, a very earnest Indian man named Rakesh Narayanan with a surprising capacity for subtle sarcasm, says, "I wasn't planning on letting you. Unless your position has changed from 'the bastard deserved it'?"
"Nope."
"Then no. Unless you insist - and it is ultimately up to you, I'm just your lawyer - no testifying."
"Probably for the best."
His lawyer rolls his eyes. "Tell Lenny to tell Lisa I said hi," he says. He'd been a friend of hers in school; he was pretty new at this whole defense lawyer business. "And - would it be wrong to say 'congratulations on your bereavement'?"
Mick smiles. "I'll pass it along," he promises.
He does, sitting in the van taking him back to Iron Heights.
"You're a dick," his handcuffs tell him, but Len doesn't sound displeased. "You didn't have to take the fall, you know."
"I've got a good defense," Mick points out. "And people get twitchy around people who kill members of their own family, even if it is their horribly abusive dads."
"Still..."
Mick feels a fond smile come on involuntarily. "You're mine, Lenny," he reminds him. "If I don't take care of you, who will?"
Len grumbles but agrees.
"Oh, and Len?"
"Yeah?"
"Congrats on your bereavement."
Len starts laughing. A little hysterically, but it's fine; Barry and the rest of the STAR Labs team is keeping a close eye on him while Mick gets prosecuted in his place. He's getting lots of therapy, which is good - after all, he's the one who killed Lewis, in the end, in order to protect Lisa.
Mick's just the one who burned the body.
It’s not the first or last thing he’d do for Len, taking this on his shoulders, and every time he does –
He’s proud.
-------------------------------------------------------------------------------
“You know I don’t like to talk feelings often,” Len says. “But lately, I find myself compelled to discuss ‘em.”
Mick, who’d been getting out of the shower and is still only clad in a towel, freezes up and stares at Len, bug-eyed.
“Feelings,” Len says meaningfully. “Recent ones.”
“Uh,” Mick says.
“Specifically,” Len continues, “I’ve noticed that I’ve started feeling – jealous.”
“Jealous?”
Len nods.
“Of what?”
“We agreed a long time ago that jealousy was probably an undeniable part of our –” Yeah, no, Len can’t manage to say ‘relationship’. “– of what we’ve got going on. After all, we never got a chance to choose each other. We just – are. So, sometimes jealousy’s gonna be a factor.”
Mick nods, very cautiously.
“And it ain’t like it hasn’t happened before. You remember – there was that whole thing with what’s his name, Trevor?”
“Oh, right,” Mick says. “The asshole who kept creeping on you behind your back and I thought he was stalking you so I got in his face and started following him to make him stop, except then you thought I was into the guy and flipped your shit?”
“I did not,” Len says with great dignity, “flip my shit.”
“You kneecapped him.”
“He deserved it. He deliberately sabotaged the job.”
“Yeah, but you didn’t know it at the time. You just took credit later because it made you seem like a genius.”
Len shrugs. Mick’s not wrong. “We’re getting away from the point,” he says.
“And the point is – jealousy,” Mick says. “Uh. Are we kneecapping someone today, boss?”
He’s clearly running through every single person he’s interacted with in the last month and coming up empty.
“No,” Len says. “We’re older and wiser than we were during the Trevor incident –”
“That was only three years ago, boss. It hasn’t been that long.”
“Regardless,” Len stresses. “I thought it’d be better to talk about it. Like the reasonable adults we are.”
Mick looks horrified. “Are you sure we can’t go with the kneecapping?” he asks hopefully.
“Not in this case.”
“If it’s because it’s me you’re mad at, we could fight it out,” Mick offers. “I’d let you beat me up.”
“I’ll have you know that if I wanted to, I could beat you up without you letting me.”
“You just keep telling yourself that,” Mick says comfortingly. “But – really – does it have to be talking? About feelings?”
“I’m afraid so, Mick. This can’t be solved by anything less than that.”
Mick gulps but squares his shoulders grimly. “Okay,” he says. “Hit me.”
“Recently, I’ve been feeling that you’ve been focused on – other things. Other than me. Now, I’m not saying I’m high maintenance –”
“You are the most high maintenance,” Mick mumbles.
“Shut up, I’m talking here. I don’t need you to pay attention to me all the time. Hell, I’d probably punch you in the face if you did.”
Mick’s face is disbelieving, but Len glares at him and he nods in consent. Not agreement. Len knows the difference.
“That being said, I sometimes get jealous if I feel like you’re spending more time away from me than with me,” Len says. “If I see you putting all your focus somewhere else.”
“Do I get a name at any point here?” Mick asks.
“I’m getting there. I just want you to understand how I feel about your recent obsession, that’s all.”
“Wait,” Mick says. “Is this about the cooking class?”
“You spend all your time thinking up new things for it!” Len protests. “You’re always on the phone with your students, or with your co-workers, or trying new recipes – you’re even trying out for that stupid reality TV cooking show –”
“For the love of – that was a joke! The Great British Bake Off only takes Brits!”
“Either way, I barely see you, and –”
“You massive, massive hypocrite,” Mick says, gaping starting to turn into a grin. “You, who spends literally days on job planning? Who I have to literally pick up to take you away from your blueprints? Who I’ve had to sit on to get to go to sleep so you wouldn’t die?”
“You can go more than three days without sleep before you die,” Len grumbles. He’d never believed that study about it causing hallucinations, anyway. “I know you can. Besides, that’s our livelihood. Not some hobby.”
“My point remains: hypocrite.”
“I am not. That’s normal for me. This isn’t.”
“Awwwww, it’s okay,” Mick simpers at him. “I still love you more than my cooking class.”
“That’s all I wanted to hear,” Len says primly.
Naturally, that’s when Mick’s eyes narrow. “And you wouldn’t be doing this if you didn’t have an ulterior motive.”
Len widens his eyes innocently.
“Okay, now I’m worried. What’s your play here?”
“I can’t just want some assurances of your feelings?”
“No. Spill.”
Len resists for a few moments, but a glaring, grinning, mostly naked Mick is hard to resist.
Also, Len loves bragging about his ridiculous ideas.
“So, you know that joke you made about the reality TV show?” Len asks.
“…yeah?”
“Let’s say theoretically they were filming one in Central –”
“No.”
“You haven’t even heard the pitch.”
“No!”
“Superheroes and Supervillains,” Len says with glee.
Mick hesitates. “Do any of them even know how to bake?” he asks suspiciously.
“Harley,” Len replies promptly. “But Ivy’s nagging on her about salad. But seriously, think about it – the best of the worst. All the assholes we have to deal with. Baking. Scarlet even promised to make an appearance to help eat it all.”
Mick scowls at him.
“I’m getting Mardon to compete and made him promise he’d try to fry an egg with a lightning strike.”
“Okay, fine,” Mick groans. “I give in. I’ll listen to the pitch. But I’m warning you, I am not agreeing!”
“Of course not,” Len says soothingly. “Now, as I was saying…”
He knew that softening Mick up first would work.
25 notes · View notes
Text
The future of photography is code
Source: https://photographyguideto.com/must-see/the-future-of-photography-is-code/
What’s in a camera? A lens, a shutter, a light-sensitive surface and, increasingly, a set of highly sophisticated algorithms. While the physical components are still improving bit by bit, Google, Samsung and Apple are increasingly investing in (and showcasing) improvements wrought entirely from code. Computational photography is the only real battleground now.
The reason for this shift is pretty simple: Cameras can’t get too much better than they are right now, or at least not without some rather extreme shifts in how they work. Here’s how smartphone makers hit the wall on photography, and how they were forced to jump over it.
Not enough buckets
An image sensor one might find in a digital camera
The sensors in our smartphone cameras are truly amazing things. The work that’s been done by the likes of Sony, OmniVision, Samsung and others to design and fabricate tiny yet sensitive and versatile chips is really pretty mind-blowing. For a photographer who’s watched the evolution of digital photography from the early days, the level of quality these microscopic sensors deliver is nothing short of astonishing.
But there’s no Moore’s Law for those sensors. Or rather, just as Moore’s Law is now running into quantum limits at sub-10-nanometer levels, camera sensors hit physical limits much earlier. Think about light hitting the sensor as rain falling on a bunch of buckets; you can place bigger buckets, but there are fewer of them; you can put smaller ones, but they can’t catch as much each; you can make them square or stagger them or do all kinds of other tricks, but ultimately there are only so many raindrops and no amount of bucket-rearranging can change that.
Sensors are getting better, yes, but not only is this pace too slow to keep consumers buying new phones year after year (imagine trying to sell a camera that’s 3 percent better), but phone manufacturers often use the same or similar camera stacks, so the improvements (like the recent switch to backside illumination) are shared amongst them. So no one is getting ahead on sensors alone.
Perhaps they could improve the lens? Not really. Lenses have arrived at a level of sophistication and perfection that is hard to improve on, especially at small scale. To say space is limited inside a smartphone’s camera stack is a major understatement — there’s hardly a square micron to spare. You might be able to improve them slightly as far as how much light passes through and how little distortion there is, but these are old problems that have been mostly optimized.
The only way to gather more light would be to increase the size of the lens, either by having it A: project outwards from the body; B: displace critical components within the body; or C: increase the thickness of the phone. Which of those options does Apple seem likely to find acceptable?
In retrospect it was inevitable that Apple (and Samsung, and Huawei, and others) would have to choose D: none of the above. If you can’t get more light, you just have to do more with the light you’ve got.
Isn’t all photography computational?
The broadest definition of computational photography includes just about any digital imaging at all. Unlike film, even the most basic digital camera requires computation to turn the light hitting the sensor into a usable image. And camera makers differ widely in the way they do this, producing different JPEG processing methods, RAW formats and color science.
For a long time there wasn’t much of interest on top of this basic layer, partly from a lack of processing power. Sure, there have been filters, and quick in-camera tweaks to improve contrast and color. But ultimately these just amount to automated dial-twiddling.
The first real computational photography features were arguably object identification and tracking for the purposes of autofocus. Face and eye tracking made it easier to capture people in complex lighting or poses, and object tracking made sports and action photography easier as the system adjusted its AF point to a target moving across the frame.
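To make that concrete, here is a minimal sketch in Python of detection-driven autofocus: find the largest face in a preview frame and treat its center as the AF point. It leans on OpenCV's bundled Haar cascade; the detectors phones actually ship are far more sophisticated, and any hook into real camera hardware would be vendor-specific, so none is shown here.

```python
import cv2

# OpenCV ships a stock frontal-face Haar cascade alongside the library.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def pick_af_point(frame_bgr):
    """Return the (x, y) center of the largest detected face, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found: fall back to center-weighted AF
    # The largest box is usually the nearest subject.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return (x + w // 2, y + h // 2)
```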
These were early examples of deriving metadata from the image and using it proactively, to improve that image or feeding forward to the next.
In DSLRs, autofocus accuracy and flexibility are marquee features, so this early use case made sense; but outside a few gimmicks, these “serious” cameras generally deployed computation in a fairly vanilla way. Faster image sensors meant faster sensor offloading and burst speeds, some extra cycles dedicated to color and detail preservation and so on. DSLRs weren’t being used for live video or augmented reality. And until fairly recently, the same was true of smartphone cameras, which were more like point and shoots than the all-purpose media tools we know them as today.
The limits of traditional imaging
Despite experimentation here and there and the occasional outlier, smartphone cameras are pretty much the same. They have to fit within a few millimeters of depth, which limits their optics to a few configurations. The size of the sensor is likewise limited — a DSLR might use an APS-C sensor 23 by 15 millimeters across, making an area of 345 mm2; the sensor in the iPhone XS, probably the largest and most advanced on the market right now, is 7 by 5.8 mm or so, for a total of 40.6 mm2.
Roughly speaking, it’s collecting an order of magnitude less light than a “normal” camera, but is expected to reconstruct a scene with roughly the same fidelity, colors and such — around the same number of megapixels, too. On its face this is sort of an impossible problem.
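The "order of magnitude" claim is easy to sanity-check from the numbers above; a quick Python scratchpad, ignoring lens speed and per-pixel design entirely:

```python
# Light-gathering areas quoted above, in square millimeters.
aps_c_area = 23 * 15       # 345 mm^2 (a typical DSLR APS-C sensor)
iphone_xs_area = 7 * 5.8   # ~40.6 mm^2

print(aps_c_area / iphone_xs_area)  # ~8.5: close to an order of magnitude
```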
Improvements in the traditional sense help out — optical and electronic stabilization, for instance, make it possible to expose for longer without blurring, collecting more light. But these devices are still being asked to spin straw into gold.
Luckily, as I mentioned, everyone is pretty much in the same boat. Because of the fundamental limitations in play, there’s no way Apple or Samsung can reinvent the camera or come up with some crazy lens structure that puts them leagues ahead of the competition. They’ve all been given the same basic foundation.
All competition therefore comprises what these companies build on top of that foundation.
Image as stream
The key insight in computational photography is that an image coming from a digital camera’s sensor isn’t a snapshot, the way it is generally thought of. In traditional cameras the shutter opens and closes, exposing the light-sensitive medium for a fraction of a second. That’s not what digital cameras do, or at least not what they can do.
A camera’s sensor is constantly bombarded with light; rain is constantly falling on the field of buckets, to return to our metaphor, but when you’re not taking a picture, these buckets are bottomless and no one is checking their contents. But the rain is falling nevertheless.
To capture an image the camera system picks a point at which to start counting the raindrops, measuring the light that hits the sensor. Then it picks a point to stop. For the purposes of traditional photography, this enables nearly arbitrarily short shutter speeds, which isn’t much use to tiny sensors.
Why not just always be recording? Theoretically you could, but it would drain the battery and produce a lot of heat. Fortunately, in the last few years image processing chips have gotten efficient enough that they can, when the camera app is open, keep a certain duration of that stream — limited resolution captures of the last 60 frames, for instance. Sure, it costs a little battery, but it’s worth it.
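Mechanically, that rolling history is nothing exotic. Here is a toy sketch in Python; the class and method names are made up, and a real implementation lives in the image signal processor at reduced resolution, not in app code:

```python
# Sketch: a rolling buffer over the sensor's continuous output. A "shutter
# press" becomes a snapshot of the recent past rather than a single moment.
from collections import deque

class FrameStream:
    """Rolling window over the sensor's continuous output."""

    def __init__(self, depth=60):
        self.frames = deque(maxlen=depth)  # oldest frames fall off automatically

    def on_sensor_readout(self, frame):
        self.frames.append(frame)          # runs for every frame, capture or not

    def capture(self):
        # The capture can reach "back in time" to frames before the press.
        return list(self.frames)
```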
Access to the stream allows the camera to do all kinds of things. It adds context.
Context can mean a lot of things. It can be photographic elements like the lighting and distance to subject. But it can also be motion, objects, intention.
A simple example of context is what is commonly referred to as HDR, or high dynamic range imagery. This technique uses multiple images taken in a row with different exposures to more accurately capture areas of the image that might have been underexposed or overexposed in a single exposure. The context in this case is understanding which areas those are and how to intelligently combine the images together.
This can be accomplished with exposure bracketing, a very old photographic technique, but it can be accomplished instantly and without warning if the image stream is being manipulated to produce multiple exposure ranges all the time. That’s exactly what Google and Apple now do.
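A naive version of that merge is only a few lines. This sketch assumes a linear sensor response and frames that are already aligned, both of which a real pipeline has to work hard to guarantee, and every name in it is illustrative:

```python
# Sketch: naive HDR merge of bracketed frames. Well-exposed pixels get high
# weight; clipped highlights and noisy shadows get low weight.
import numpy as np

def merge_hdr(frames, exposure_times):
    """frames: float arrays scaled to [0, 1]; exposure_times: seconds."""
    acc = np.zeros_like(frames[0])
    total_w = np.zeros_like(frames[0])
    for img, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)    # trust mid-tones, not clipped pixels
        acc += w * (img / t)                 # rescale to (relative) scene radiance
        total_w += w
    return acc / np.maximum(total_w, 1e-6)   # radiance map; tone-map for display
```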
Something more complex is of course the “portrait mode” and artificial background blur or bokeh that is becoming more and more common. Context here is not simply the distance of a face, but an understanding of what parts of the image constitute a particular physical object, and the exact contours of that object. This can be derived from motion in the stream, from stereo separation in multiple cameras, and from machine learning models that have been trained to identify and delineate human shapes.
These techniques are only possible, first, because the requisite imagery has been captured from the stream in the first place (an advance in image sensor and RAM speed), and second, because companies developed highly efficient algorithms to perform these calculations, trained on enormous data sets using immense amounts of computation time.
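Reduced to its skeleton, the pipeline is "segment, blur, composite." Here is a toy sketch that assumes some segmentation model has already produced the person mask, which is the genuinely hard part; the names and blur strengths are illustrative:

```python
# Sketch: the simplest possible "portrait mode" given a subject mask.
# The feathering step hints at why edge quality dominates perceived quality.
import numpy as np
import cv2

def fake_portrait(image, person_mask):
    """image: HxWx3 uint8; person_mask: HxW float in [0, 1], 1 = subject."""
    background = cv2.GaussianBlur(image, (0, 0), sigmaX=12)   # crude stand-in bokeh
    matte = cv2.GaussianBlur(person_mask, (0, 0), sigmaX=3)   # feather the edges
    matte = matte[..., None]                                  # broadcast over RGB
    return (matte * image + (1.0 - matte) * background).astype(np.uint8)
```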
What’s important about these techniques, however, is not simply that they can be done, but that one company may do them better than the other. And this quality is entirely a function of the software engineering work and artistic oversight that goes into them.
[Image: A system to tell good fake bokeh from bad]
DxOMark did a comparison of some early artificial bokeh systems; the results, however, were somewhat unsatisfying. It was less a question of which looked better and more a question of whether the systems failed or succeeded in applying the effect at all. Computational photography is in such early days that it is enough for the feature to simply work to impress people. As with a dog walking on its hind legs, we are amazed not that it is done well but that it is done at all.
But Apple has pulled ahead with what some would say is an almost absurdly over-engineered solution to the bokeh problem. It didn’t just learn how to replicate the effect — it used the computing power it has at its disposal to create virtual physical models of the optical phenomenon that produces it. It’s like the difference between animating a bouncing ball and simulating realistic gravity and elastic material physics.
Why go to such lengths? Because Apple knows what is becoming clear to others: that it is absurd to worry about the limits of computational capability at all. There are limits to how well an optical phenomenon can be replicated if you are taking shortcuts like Gaussian blurring. There are no limits to how well it can be replicated if you simulate it at the level of the photon.
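You can see the distinction in code. A defocused point of light spreads into a disc (the lens's circle of confusion), not a Gaussian smudge, and the size of that disc depends on depth. The sketch below fakes this with discrete depth layers, which is still a crude approximation of a true optical simulation; all names and constants are illustrative:

```python
# Sketch: depth-dependent disc blur composited far-to-near. The disc kernel,
# not the Gaussian, is what makes highlights bloom the way a real lens does.
import numpy as np
import cv2

def disc_kernel(radius):
    """A filled circle, normalized: the shape a defocused point light takes."""
    k = np.zeros((2 * radius + 1, 2 * radius + 1), np.float32)
    cv2.circle(k, (radius, radius), radius, 1.0, -1)
    return k / k.sum()

def layered_bokeh(image, depth, focus_depth, layers=8, max_radius=15):
    """image: HxWx3 float32; depth: HxW float32 in [0, 1], 0 = nearest."""
    out = image.copy()
    bins = np.clip((depth * layers).astype(int), 0, layers - 1)
    for b in range(layers - 1, -1, -1):                 # composite far-to-near
        mask = (bins == b).astype(np.float32)
        mid = (b + 0.5) / layers
        r = int(round(max_radius * abs(mid - focus_depth)))
        if r == 0:
            blurred, matte = image, mask                # in-focus layer stays sharp
        else:
            k = disc_kernel(r)
            blurred = cv2.filter2D(image, -1, k)        # disc blur, not Gaussian
            matte = cv2.filter2D(mask, -1, k)           # defocus softens the matte too
        out = out * (1.0 - matte[..., None]) + blurred * matte[..., None]
    return out
```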
Similarly, the idea of combining five, ten, or a hundred images into a single HDR image seems absurd, but the truth is that in photography, more information is almost always better. If the cost of these computational acrobatics is negligible and the results measurable, why shouldn’t our devices be performing these calculations? In a few years they too will seem ordinary.
If the result is a better product, the computational power and engineering ability have been deployed with success; just as Leica or Canon might spend millions to eke fractional performance improvements out of a stable optical system like a $2,000 zoom lens, Apple and others are spending money where they can create value: not in glass, but in silicon.
Double vision
One trend that may appear to conflict with the computational photography narrative I’ve described is the advent of systems comprising multiple cameras.
This technique doesn’t add more light to the sensor — that would be prohibitively complex and expensive optically, and probably wouldn’t work anyway. But if you can free up a little space lengthwise (rather than depthwise, which we found impractical) you can put a whole separate camera right by the first that captures photos extremely similar to those taken by the first.
[Image: A mock-up of what a line of color iPhones could look like]
Now, if all you want to do is re-enact Wayne’s World at an imperceptible scale (camera one, camera two… camera one, camera two…) that’s all you need. But no one actually wants to take two images simultaneously, a fraction of an inch apart.
These two cameras operate either independently (as wide-angle and zoom) or one is used to augment the other, forming a single system with multiple inputs.
The thing is that taking the data from one camera and using it to enhance the data from another is — you guessed it — extremely computationally intensive. It’s like the HDR problem of multiple exposures, except far more complex as the images aren’t taken with the same lens and sensor. It can be optimized, but that doesn’t make it easy.
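Even the first step, registering one camera's pixels onto the other's grid, is nontrivial. Here is a textbook sketch using feature matching and a homography; note that a homography is only exact for planar scenes or pure rotation, so real fusion pipelines must also handle per-pixel parallax. The names are illustrative:

```python
# Sketch: warp the second camera's frame onto the first camera's pixel grid
# so the two exposures can be compared and combined pixel-for-pixel.
import numpy as np
import cv2

def align_secondary_to_primary(primary_gray, secondary_gray):
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(primary_gray, None)
    kp2, des2 = orb.detectAndCompute(secondary_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust to mismatches
    h, w = primary_gray.shape
    return cv2.warpPerspective(secondary_gray, H, (w, h))  # now pixel-comparable
```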
So although adding a second camera is indeed a way to improve the imaging system by physical means, the possibility only exists because of the state of computational photography. And it is the quality of that computational imagery that results in a better photograph — or doesn’t. The Light camera with its 16 sensors and lenses is an example of an ambitious effort that simply didn’t produce better images, though it was using established computational photography techniques to harvest and winnow an even larger collection of images.
Light and code
The future of photography is computational, not optical. This is a massive shift in paradigm and one that every company that makes or uses cameras is currently grappling with. There will be repercussions in traditional cameras like SLRs (rapidly giving way to mirrorless systems), in phones, in embedded devices and everywhere that light is captured and turned into images.
Sometimes this means that the cameras we hear about will be much the same as last year’s, as far as megapixel counts, ISO ranges, f-numbers and so on. That’s okay. With some exceptions these have gotten as good as we can reasonably expect them to be: Glass isn’t getting any clearer, and our vision isn’t getting any more acute. The way light moves through our devices and eyeballs isn’t likely to change much.
What those devices do with that light, however, is changing at an incredible rate. This will produce features that sound ridiculous, or pseudoscience babble on stage, or drained batteries. That’s okay, too. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved onto a new, non-physical “part” which nonetheless has a very important effect on the quality and even possibility of the images we take.
Read more: https://techcrunch.com