Introduced fall 1990 for the 1991 model year. Very high miles. Some rust. Has not aged well. Runs OK. Requires high mileage synthetic lubricants and frequent and expensive maintenance intervals. Owner’s manual went missing at 116,000 miles before going through auction and puttering onto a derelict downtown buy-here-pay-here lot.
Don't wanna be here? Send us removal request.
Text
Social Media Is Bad For Democracy? Well, Duh. But It Has Potential To Not Be All Bad...

Ever hear of a guy named Sean Lawson? According to his header blurb at Forbes, he writes about “science, technology, and security.” According to his footer bio at Forbes (...uh, how many bios and blurbs does one contributor need, Forbes?) Lawson is also an “Associate Professor in the Department of Communication at the University of Utah.” I would imagine he would tell you he’s kind of a big deal.
So why is this post starting out by talking about a communications professor from the University of Utah who contributes to a business magazine’s website? Well, earlier this month, Lawson contributed a column to Forbes.com titled “Evidence Mounts of Social Media’s Negative Impacts for Democracy.”
The article, in a nutshell, uses a recent study from democracy watchdog group Freedom House that looked at internet freedom to bolster its position that, basically, social media is to democracy as the Olive Garden is to Italian cuisine. As Lawson also points out in his editorial, the study warned of a “crisis of social media” and that it is “now tilting dangerously toward illiberalism.”
Know what? I find it hard to disagree with Lawson, if I’m being honest. While some might argue that the jury isn’t entirely in yet on what influence social media platforms have on an Election Day near you, the fact that government agencies — such as the NSA — have used them to spy on people of interest and citizens of their own countries alike cannot be refuted. And, last I checked, “totalitarian surveillance” isn’t a concept we associate with a free and democratic society.
That said, a question begs to be asked: Does social media have the potential to be used to aid democracy? You follow through that durned ol’ jump and I reckon we’ll try and find out.
Every week, after the jump, I usually take some time to dive more in depth into whatever article my ramblings are examining. This week, I’ll be spending just a little less time on that, simply because I don’t feel the need to pick apart everything that Lawson has written.
...It’s OK. You can take a moment to celebrate that.
The only thing I feel I should address is something that both Lawson’s article and the Freedom House report seem to suggest: That the abuse and misuse of social media platforms to spread misinformation were a huge reason why recent elections have turned out the way that they did.
There’s little doubt that social media platforms were used by dubious individuals and entities to spread misinformation during those elections; the evidence is overwhelmingly there. But what sort of impact does that misinformation actually have?
A recent study published in the peer-reviewed scientific journal PLOS One looked at the infamous 2016 Presidential Election between Donald Trump and Hillary Clinton — an election where social media is strongly believed to have swayed the outcome. The study, helmed by Ohio State University communications professor Kelly Garrett, suggests that we could be — for lack of a better word — over-perceiving or misunderstanding the impact platforms like Facebook and Twitter had that November 8th.
Instead, as US News reports, the study suggests that social media had only a small influence on the accuracy of a person’s beliefs that year. The report doesn’t seem to suggest or support that social media platforms were swaying opinions and changing votes with the misinformation posted to them, or that they were the sole reason the election that year turned out the way it did.
Bearing this in mind, it could be more accurate to say that the political misinformation spread through social media sites only serves to reinforce beliefs voters hold going into Election Day, not outright change them and therefore change votes.
In other words, back in 2016 — if someone already believed that Hillary Clinton was the Anti-Christ incarnate who went around killing her political adversaries from the shadows, or that Donald Trump was the Anti-Christ incarnate who holed up in hotel rooms with women of ill repute from the same country that funneled money into his election — then whatever “information” they were reading on social media, they were digging it up themselves to reinforce what they already wanted to believe about that year’s candidates. Social media sites were mainly just hosting that content for users to find; they weren’t necessarily consciously pushing it out to users to misinform them and sway the outcome in favor of one candidate over the other.
This, nevertheless, highlights a pretty serious issue that social media has: it allows people to more easily exist within self-constructed bubbles where the “information” and “news” they receive only suits whatever beliefs they want to hold. This, obviously, does nothing to create the well-informed voters needed for a proper and healthy democracy.
But — and the PLOS One study seems to support this notion, too — these bubbles mainly exist because of public distrust of politicians and the media; social media isn’t the root cause. Filtering whatever information flows in and out of social media alone won’t be enough to burst those personal bubbles. Public opinion of the media and politics needs to improve, too, regardless of whether public opinion is totally in step with reality.
Other studies of the 2016 election came before Garrett’s and seem to reach similar conclusions. That said, as NPR pointed out last year, even if social media sites and misinformation didn’t do as much to change votes as we tend to think, that still doesn’t mean they have no negative impact on democracy.
We’re well familiar with the bad and the ugly and whatever else now, I suppose. So what about the good? Is there any good? Well, yeah, kind of.
Social media, in the past, has proven that it does possess the potential to help democracy. Best example of that? The 2011 Arab Spring protests.
What were the Arab Spring protests? The 2011 protests can be traced back to a man named Wael Ghonim, a product manager for Google in the UAE, and his use of Facebook. Ghonim created the Facebook page “We Are All Khaled Said” in 2010, centered around speaking out against the treatment of a young man by Egyptian police that culminated in his death.
The page would grow beyond its original intent and spark a political movement within Egypt, proving useful in helping those who would take part in the protests connect and organize. Corrupt Egyptian president Hosni Mubarak stepped down from power in 2011 in large part thanks to the protests organized using Ghonim’s Facebook page.
I suppose someone could argue that the formation of the Arab Spring protests could serve as an example of what could happen if people who have limited their exposure to whatever information best suits their “bubbles of reality” combine their individual bubbles into one larger bubble. Perhaps that might be the case now.
But back in the day when Ghonim’s Facebook page was created and the Arab Spring protests came about, I feel that mainstream social media platforms, at the very least, weren’t being used to spread misinformation to reinforce mistaken political opinions. Maybe here is where the proper filtering of information and content on social media could correct something.
Social media may or may not have the power we think it does to change votes, but it does have the potential to connect people and unite them for positive political change. Social media, in the grand scheme of things, is a relatively new thing in the world of communication tech. When we figure out how to iron out all of the kinks, I can’t imagine that it could still be that bad for democracy. Now Olive Garden, on the other hand...
Sources: Forbes, CNN, Vox, US News, Marketplace, NPR, The Guardian
Image: Muhammed Selim Korkutata via Getty Images
Never Mind the Politics, Rural Broadband Internet Is Important

Lexington; Berea and Richmond; Bowling Green; Hyden — during my time on Earth mostly spent confined to the USA’s Grand Ol’ Commonwealth of Kentucky, these are all of the towns and cities that I’ve lived in. My moving from place to place means that I’ve lived in rural country and mountain homes, and dwelled in urban apartments.
Now, I’m not here to go on about all the ways urban environments are different from rural ones, or which one I think is better, or whatever. I’ll beat that dead horse extra dead some other time. I titled my latest bout of rambling with something mentioning broadband internet... so that’s what, uh, I guess I’m here to ramble on about.
Click through the jump for more rambling.
At the moment, I’m living in the Hyden, Kentucky area, where I was also originally born. (No, I’m not staying out here much longer, if you were wondering.) Technically, I’m located in a small community roughly 15 minutes south of Hyden, but to keep things simple, let’s just say I live in Hyden just the same.
Care to take a guess what my internet speed is out here? I’ll give you a moment...
(embedded YouTube video)
...Alright, yeah. I think we’re done with that.
So, according to a simple Google internet speed test, my speed out here is...
Yeah. The download stream speed isn’t even 15 mbps. (The upload stream speed is so pathetic it isn’t worth mentioning.) The connection is enough to reliably handle a television set in the living room streaming god-knows-what from YouTube, while also allowing me to carry out some light online-only work.
But it’s also a connection that can be bogged down if, say, multiple devices are hooked up streaming videos, one device is hooked up trying to play a game, and another is trying to upload something. If all of that goes on during peak hours, it’s even worse.
Some food for thought: This is the speed that the area I’m in is getting after the local ISP claimed they upgraded some equipment out here. Before that, the speeds were hardly 5 mbps, and would go out if a fly farted in France. It would also struggle to support low-resolution video streaming.
Video conference calls? Yeah, forget it. Gaming? HA! Uploading a one minute — and I mean only one minute, not ten, twenty, sixty or whatever — video clip to the internet...?
More food for thought: The speeds ISPs offer in Louisville, for example, can reach as high as 1 gbps, if a customer chooses to pony up and spend the beaucoup bucks on a fiber connection.
A 1 gbps connection speed is the same as a 1,000 mbps speed; there are 1,000 megabits (the “m” in mbps) in 1 gigabit (the “g” in gbps). So — if I do my math right — that speed through fiber is roughly 67 times faster than what I’m getting out here.
If the internet speeds customers can have in Louisville could be likened to a Bugatti Veyron, then the speed I get, in comparison, would be like a greasy Nissan NV200 work van. Yes, one is roughly 67 times slower than the other (again, provided I did all my maths correctly).
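For the skeptical, the comparison can be sanity-checked with a few lines of Python. The speeds here are just the rough figures quoted in this post, not actual measurements:

```python
# Rough figures quoted in this post (not measured values).
fiber_mbps = 1000.0   # the 1 gbps fiber tier available in Louisville
rural_mbps = 15.0     # roughly what I'm getting out here, on a good day

# "Times faster" is a ratio; "percent slower" is a different animal.
times_faster = fiber_mbps / rural_mbps
percent_slower = (1 - rural_mbps / fiber_mbps) * 100

print(f"Fiber is about {times_faster:.0f}x faster")       # about 67x
print(f"The rural link is {percent_slower:.1f}% slower")  # 98.5% slower
```

Worth noting while we’re at it: “67 times faster” is not the same thing as “67 percent slower” — the rural connection is actually about 98.5 percent slower than the fiber one.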
When US Congresswoman Abby Finkenauer and Congressman John Joyce ran an opinion piece in the Des Moines Register earlier this month, stressing the importance of broadband internet access for rural America, yeah, sure, I can see how someone could argue that it was about 500 words’ worth of two politicians pandering to their voter base. Finkenauer is from Iowa, Joyce from Pennsylvania. Both states obviously have their fair share of rural areas, and therefore rural voters.
But if you quickly study the actual Congressional districts that Finkenauer and Joyce represent, you’ll learn that Joyce’s — Pennsylvania’s 13th district — is overwhelmingly urban. Finkenauer’s district — Iowa’s 1st — is also more urban than it is rural. So was that article really pandering to the voters who sent them to Washington? Factually, not so much.
These two, instead, seem to have taken up the issue of rural broadband internet access because of their ties to the Rural Development, Agriculture, Trade and Entrepreneurship subcommittee of the US House of Representatives. Finkenauer is the chairwoman of the subcommittee, and Joyce is a ranking member.
The subcommittee is part of the House Small Business Committee. And it does make sense that the subcommittee would be bullish on the issue of rural broadband internet access. So never mind the politics, then.
In many rural areas and communities, the people living there are more likely to be dependent on small local businesses. Big name chains are less likely to set up shop in these rural areas.
In Hyden, for example, your “big names” are mostly limited to Dollar General and Family Dollar, Save-a-Lot, Rite Aid, Advance Auto Parts, Dairy Queen, Hardee’s and Subway. That's really not a lot. And if there’s a gap somewhere those names aren’t covering, it’s up to the few local businesses that are remaining out here to cover them.
Small businesses need reliable and modern internet access to more efficiently run their day-to-day operations and better help customers. Not having this access makes those two priorities so much more difficult. It doesn’t take much to think of all of the ways that this is the case. Just imagine what it would be like having to order supplies over the phone — having to wade through menus, hoping to speak to someone — versus just clicking a few things on a website or selecting an option or two on a smartphone app.
Of course, you could argue that upgrading online infrastructure out here could also hurt those small businesses; better infrastructure in general could help attract more big name chains, which could run them out of business. But that’s a fear that I feel is somewhat unfounded. If a small business has a loyal customer base and knows how to compete, more big names showing up in rural areas shouldn’t pose that much of a threat, but I digress.
Outside of small businesses, people living in rural areas also stand to benefit from better internet access. Reliability can be an issue with internet connections in those areas — and I know that first hand. If someone wants to pursue higher education online, for example, that can be a very unwanted obstacle. Taking tests can be stressful enough; having the possibility of the internet going out for no good reason looming over your head while doing so only adds to that stress.
Honestly, the fact that America’s infrastructure in rural areas is so lacking is an embarrassment. As a nation, we like to thump our chests and believe we’re the “bestest-country-of-all-time-evar!!!!111!!” But we come up short on so much, and other countries regularly eclipse us. There has to come a point where we shut up and stop falsely boasting about ourselves and make this place as good as we actually think it is. And fixing the infrastructure in rural places has to be one of those things we do in order to make it so.
Sources: Des Moines Register, Broadbandsearch.net, Finkenauer.House.gov, JohnJoyce.House.gov, SmallBusiness.House.gov
Main Image: Florian Gaertner via Getty Images via Engadget
Physical Books or eBooks? Why Not Both?

See that slightly blurry iPhone photo above? Yup. Those are all my books. And, no, despite what the photo would lead you to believe, those aren’t all of the books I own.
The weird bookcase I’m using right now was designed to sit in the corner of a room, which means it is deeper than it is wide. So there’s actually another row of books behind all of those. Oh, and the bottom half of the bookcase that you can’t see, that’s behind a door, yeah... it’s full of books, too.
Then there are three additional plastic totes in my closet, each one full of even more books and magazines. So, also no, despite what you see, half of my personal library was not written by Stephen King, Isaac Asimov and Chuck Palahniuk. (Not that there would be anything really wrong with that.)
Although, sure, I fail to sympathize with tabloid newspapers that go out of print because they can’t into blogging, suffice to say, I actually do enjoy me some printed media. But also, with everything said — and this probably doesn't come as a shock — I don’t think I would necessarily say I always prefer it over eBooks or other digital media.
You know the drill: Click through the jump and I’ll tell you more.
Take a quick guess which format outsold the other last year: Printed books or eBooks?
...Made your guess? Is that your final answer? Alrighty.
If you guessed “printed books” without skipping down to this line like a dirty stinkin’ cheater, you get an “A” for this week’s pop blog quiz. Congrats.
Anyway, to be more precise, printed books raked in $22.6 billion big’uns last year versus just $2.04 billion for eBooks, according to CNBC. That’s pretty impressive, considering how it seems we’re constantly told digitalization has disrupted traditional forms of media.
It turns out folks just flat out prefer a printed book to an eBook. And, yeah, I totally get that.
I know I’m not the first person to say this, but there is a romantic, emotional quality — a “magic,” if you will — about using physical media to access the content that is on it. The act of listening to a vinyl record — taking the record out of the sleeve, setting it on a turntable, dropping the needle into the groove, and occasionally poring over the gatefold — is a ritual, in a way. It can add a depth to listening to music that just isn’t there on streaming services. When I want to sit down and listen to a whole record from start to finish, that’s really how I want to do it: with a vinyl record, a set of good cans and an old turntable. Spotify, as much as I like it and use it, does not and cannot hold a candle to that.
It’s much the same for a book. You pull it off of the shelf, turn the pages, and, if the book is new, occasionally stop to take in the smell of the paper and ink. Engaging with the medium helps me engage with the content. Having a physical, tangible connection to the story helps to keep me invested in reading it.
And reading a vintage book takes that experience to another level of depth entirely.
Reference that blurry photo and find the top shelf of my bookcase. See that book sitting in between The Shining and Silence of the Lambs? That’s a first-edition copy of Hemingway’s For Whom The Bell Tolls, from 1940. I’m not sure if you’ve ever picked up a 79-year-old book to read before, but you’re aware of its age as you turn through the pages. That book is hardly fragile, but you just treat it with a deep sense of reverence as you use it. You respect the fact that it’s managed to survive for so long, and is here and ready to tell you the story printed on its pages.
So reading works of fiction, yeah, I’d rather have a physical book in my hands, for the emotional aspect of it alone. And there are some books that just absolutely demand to be read in physical form.
Mark Z. Danielewski’s House of Leaves and Only Revolutions sit on the second shelf of my bookcase, if you go back and reference that photo. Without giving too much away — and hopefully what I’m about to write makes sense — these books advantageously use the actual medium of a physical book to tell their stories. You have to be able to turn pages quickly, and even turn the book upside down or hold it in a strange way in order to understand what might be going on in a given moment or to advance forward in them. Because of that, I would imagine that they would lose a significant amount of the impact they have if they were experienced in a digital format.
But you know, honestly, when it comes to non-fiction books, reference books and textbooks, I’d much rather have the digital versions of those.
eBooks are just flat out easier to use for doing research. Let’s say I’m writing a paper on the impact Adobe Photoshop has had on photography and the media, and I’d like to pull a quote from a textbook to strengthen my work. Well, with an eBook, I can just highlight, copy and paste whatever quote I want from the textbook I’ve selected into my paper. This saves me time from having to type everything out and make sure that I’ve not mistyped anything. (You’d be surprised at how much time this can save writing papers, seriously.)
Most eReaders and eReader programs are also capable of building a search index for the books loaded into them, which means you can also search for certain keywords and even complete phrases. Again, for doing research, this is so much quicker and easier than turning to the index in the back of a textbook and trying to work from there.
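To illustrate the idea — and this is just a toy sketch, not how any particular eReader actually implements it — that kind of search index can be as simple as an “inverted index” mapping each word to the pages it appears on. The page text below is made up for the example:

```python
from collections import defaultdict

# Hypothetical page contents, standing in for a loaded eBook.
pages = {
    1: "Photoshop changed photography forever",
    2: "Digital media and photography research",
    3: "Photoshop tools for media editing",
}

# Build the inverted index once: word -> set of pages containing it.
index = defaultdict(set)
for page, text in pages.items():
    for word in text.lower().split():
        index[word].add(page)

# Lookups are now instant, no page-flipping required.
print(sorted(index["photoshop"]))    # pages 1 and 3
print(sorted(index["photography"]))  # pages 1 and 2
```

Real eReader search adds things like stemming, punctuation handling and phrase queries, but the underlying trick is the same: do the slow scan of the whole book once, up front, so every later keyword lookup is immediate.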
Now, I don’t know about you, but I do all my eReadin’ on my iPad. And when I had the chance to snag an Apple Pencil for it for less than $50, I jumped on it. Why does that matter? Well, some eBooks allow you to highlight and write in their margins non-destructively. With my iPad and Apple Pencil, it really doesn’t feel much different from doing those things with real pens, pencils and highlighters in an actual book. eBooks that don’t lend me this luxury still allow me to highlight words, phrases and sections, and to attach notes based on what I’ve selected. And being able to write notes in some of my eTextbooks seriously helps me keep track of and retain what I’m supposed to be learning out of them.
When it comes to textbooks, eBooks are also flat out cheaper than their print counterparts. A semester’s worth of textbooks can easily cost hundreds of dollars, sometimes flirting with a healthy grand going print-only. I know I can spend around $100 to $200 if I get them all digitally. And, as an added bonus, I get the books instantly without worrying about one being on back-order.
Now, there are some exceptions to that rule in my personal collection of books. In that photo, I think there’s a copy of Lee Iacocca’s autobiography on the top shelf, and a book about Fender guitars down on another, as well as a copy of Harry Caudill’s Night Comes to the Cumberlands somewhere in the mix.
Aside from the book on Fender guitars, I suppose, I own physical copies of these non-fiction or reference books because they have cultural significance. Night Comes to the Cumberlands, for example, helped spur on Johnson’s War on Poverty and provides a compelling look at the issue of poverty in Eastern Kentucky that, unfortunately, somehow manages to hold relevance today. (It’s too bad its author, Harry Caudill, turned into a complete crock later in life and surrounded himself with total morons, but that’s another story.)
eBooks and similar digital media also have another advantage. Because they are so easy to create (Apple, for example, had a program that made it possible to publish an eBook straight from a user’s Mac), they give unknown authors a means to get their work out there quickly and gain significant footing.
Yep, go on back to my bookshelf up there and find that copy of The Martian by Andy Weir sitting on the second shelf. Way before there were physical copies of that book to buy, Weir first published the story on his own blog. Weir chose to do this based on his history with literary agents rejecting his previous work. Early fans of The Martian kept asking Weir to make an eBook version, and he eventually obliged, managing to create one through Amazon’s Kindle program. In just a few months after that, the eBook version of The Martian proved too popular for publishers to ignore, and so the physical version finally followed.
So yeah, when it comes to eBooks and real books, I’m not going to choose one over the other. I mean, it’s not like the question of “boxers or briefs?” (boxers), or “Pepsi or Coke?” (Coke). Why choose one over the other anyway, especially in light of everything? Because, really...
Sources: CNBC, Owlcation, Kentucky.com, Nautilus, The Wall Street Journal
Violent Trump “Parody” Video Is Best Ignored. But Can We?
A colonoscopy. Prepping for a colonoscopy. The smell of burning trash. The smell of really stagnant water. Ordering anything directly from China via eBay or Amazon, and trying to track that package using China EMS’s rubbish tracking system.
Changing the water pump on a GM 4.2 liter inline-six cylinder engine. Going to change the oil in your car only to discover the last place you let change it rounded the drain plug off.
Removing stripped and/or rust-welded bolts and screws. Drilling out a bolt or screw after the head of said bolt or screw broke off because the wonderful human being who put said fastener back into whatever you’re working on didn’t understand how to line the threads up and, thusly, cross-threaded it when they installed it.
These are a few things I believe I would find vastly more enjoyable than writing an entire blog post about anything related to Donald Trump. But this is a blog, and I guess every blog — even one as horrible and lame as this one — has to have at least one, right?
Follow through the jump and I reckon we’ll talk a sec about a Trump-related video that’s been parading around social media here lately.
Sometime during 2018, the creator of TheGeekzTeam — a pro-Donald Trump video meme channel on YouTube that describes itself as “dedicated to pissing off liberals” — is booting up his video editing software. The creator loads in a scene from 2014’s spy comedy film Kingsman: The Secret Service.
The specific scene the creator has chosen takes place in a Kentucky church, where a hate group has gathered. The film’s wealthy antagonist — a character named Richmond Valentine, played by Samuel L. Jackson with a lisp — intends to conduct an experiment on the hate group using ground-breaking SIM card technology he developed. One of the film’s protagonists — a spy named Harry Hart, played by English actor Colin Firth — has traveled to this church in order to figure out what Valentine is trying to accomplish.
In the midst of the gathering, Valentine turns the members of the hate group extremely violent by remotely activating and interacting with the SIM cards he designed, implanted in their necks. Bloody chaos ensues when a member of the hate group, spewing hateful rhetoric, pursues Hart as he tries to leave, and Hart shoots him.
To create this latest video, the creator pastes the face of Donald Trump over Firth’s. The faces of Trump’s critics and challengers — among them, Vermont senator and Democratic presidential candidate Bernie Sanders and Republican Utah senator Mitt Romney — and the logos of various media groups and social movements are pasted over the faces of members of the hate group that met their bloody fate in the original scene. The hate group’s church is rechristened “The Church of Fake News” via Photoshop magic.
The original scene, you could say, was attempting to make a statement about the foolishness of hate groups and their rhetoric. And as gory and violent as it was, the scene wasn’t necessarily intending to be gory and violent for gore and violence’s sake.
A humorous line delivered by Firth’s character before the bloodbath boils up, and the fact that Lynyrd Skynyrd’s immortal Southern rock anthem “Freebird” is blaring nearly every second, definitely underscore that the creators of the film wanted the scene to have some degree of comedy about it. I mean, Kingsman is a comedy spy film, after all.
TheGeekzTeam turned the scene inside out and around on itself, in a way, when it pasted Trump’s face over Firth’s, and pasted faces of those individuals and agencies Trump and his supporters identify as “adversaries” over those of the hate group’s.
It’s hard to say what the creator might have thought about the original scene, though. Maybe they got the original message of the scene, felt it was “lamestream liberal crap,” and felt they were changing it to be more accurate with their edits. Or maybe that’s reading too deep; they could have simply liked the scene for what it was on the surface and wanted to make Trump the hero. Or maybe it was something else...
Whatever the reason, TheGeekzTeam’s video would first be posted to different pro-Trump groups on Reddit, where it would receive enough upvotes to find its way to the front page. It would go on to rack up 3 million views from there. Eventually, the video would be shown in October 2019 as part of a “meme exhibit” at a pro-Trump gathering in Miami at Trump’s resort there. The showing in Miami would give the video a new surge in popularity on social media and subsequent coverage in news media.
Trump, at the time of the video’s second coming, “[had] not yet seen the video” but “strongly [condemned] it” based on “what he had heard,” according to Stephanie Grisham, who currently serves as White House Press Secretary, a job that ties with McDonald’s fry cook for “position with highest turnover rate in America.”
So we’re all up to speed on the video, yeah? Let’s change gears now and think about who created the video for a moment. Remember what I mentioned earlier about TheGeekzTeam’s “about” section on YouTube? It proudly proclaims it is “dedicated to pissing off liberals and encouraging Trump supporters.”
That makes what TheGeekzTeam is really all about pretty transparent, I think: TheGeekzTeam is a troll, wanting to flare up nostrils on the faces of those “hypocrites on the left” while — regardless if they actually support Trump or not — also getting a rise out of Trump supporters. Now, I’m not saying what they’re doing is “smart” or that they’re smart, but you’d have to wonder about yourself if you totally bought into it.
There’s an old adage that’s been going around the internet since, like, the days of Ancient Rome and Egypt: “Don’t feed the troll.” With millions of views on YouTube and other social media outlets, mainstream news coverage, outrage coming from various places, and hundreds upon hundreds of YouTube comments from Trump supporters encouraging the channel owner to give “the left hell,” darn near everyone has fed this one. And, mother of God, is he fatttttt. And there ain’t no Weight Watchers program out there capable of trimming that wad of mass down to something remotely healthy again.
When all of the internet — left, right, liberal, conservative, etc., etc. — feeds a troll, what does it encourage a troll to do? Troll some more, and ‘til they can’t no more. No doubt, the exposure that has been afforded to TheGeekzTeam thanks in part to mainstream coverage will only encourage them to create more mind-numbingly stupid content in the future.
The best course of action, as tough as it might be, is to ignore the content trolls create, in the hopes that doing so will discourage them from creating anything more like it. Of course, advising that course of action feeds into the concern that ignoring this content will allow it to proliferate unchecked. Well, alright then; if you have to say something to a troll, you better learn how to shut them down by sharpening up your own trolling skills.
Yelling about your outrage and anger sure as shoot isn’t going to work. That’s what a troll wants, remember: to see you lose your head and get upset or act loud and foolish. Be smarter. Learn how to fight fire with fire.
Well... I guess I’m done with my obligatory Trump-related blog post. If you’ll excuse me, I’m going to head on over to China EMS’s website and, for the millionth time, not track some crap I bought from eBay the other day.
Sources (that are likely to piss off supporters of Pro-Trump YouTube channels): India Times, NBC News, NPR, USA Today, The Washington Post
Main image is a screen grab taken from the original scene in Kingsman: The Secret Service (2014).
Was News About Malaysian Youth’s Death All That Shocking?

A bit of shocking and sad news briefly swept through the world last December. A 16-year-old Malaysian youth was, unfortunately, electrocuted, apparently from simply wearing headphones plugged into his smartphone.
While it didn't blow up into a super-duper major news story, numerous outlets still carried it, from Teen Vogue to Vice to Yahoo News. The story originated from Malaysian newspaper New Straits Times.
Of course, because Teen Vogue and Vice aggregated it, it didn’t take long for it to find its way onto social media platforms, where it flourished like a fly in a McDonald’s bathroom (or like how bad ledes and headlines flourish around here). From there, it certainly seems to have left enough people wondering just how safe headphones are to use, especially in combination with a smartphone.
Well, worry not: Your headphones are safe to use with your smarty-pants phone. There might be one small and rare exception. Follow through the jump, listen close, and I’ll tell you more.
As general protocol these days, I snoop around the internet whenever I’m forwarded a story that feels like it might be dubious to some degree. And while I can’t say the story is untrustworthy — I mean, most of the outlets that repeated the story were hardly Pulitzer-winning, but they weren’t Breitbart, either — I couldn’t shake the feeling that something about the story was just off, for lack of a better word.
I suppose my skepticism was rooted in the very simple-minded fact that I’ve used headphones plugged into a smartphone before (well, before I bought an iPhone 7, that is) and suffered no ill consequences. Which isn’t to say I think that it’s impossible, but still. I decided to do some digging around using Google in the hopes of reading about other cases where people were electrocuted while using headphones plugged into a smartphone.
As I started my Googling, I wasn’t having that much luck with finding those other cases, beyond a few mentioned in Vice’s coverage of the story (which Teen Vogue also repeated). Then I ran into a familiar and handy internet friend in my searching: Snopes.
Snopes, in case you didn’t know, investigates, in great detail, popular stories passed around the internet in the name of deeming them real, fake or something in between. Discovering that Snopes had previously investigated this story did nothing to lower my raised eyebrow, so I aborted my original mission and clicked the link.
So what did Snopes find? Well, after their due diligence they couldn’t really validate the claim the story made, that wearing headphones alone electrocuted the teenager and caused them to die. In turn, they slapped an “unproven” rating on the story. In other words, the story wasn’t fake news, but maybe taking it with a grain of salt wouldn’t be a bad thing, either.
Specifically, Snopes wrote (emphasis mine):
“Indeed, reports of people dying from being electrocuted by their headphones pop up every so often, but the true causes of these deaths remain unclear. As in the case of Mohd Aidi Azzhar Zahrin, the details from news reports were not specific, and we haven’t yet seen clear evidence that his earphones caused the youngster’s tragic passing.”
[...]
“The claim that the teenager was electrocuted by the earphones he was wearing rests on a comment offered by district police chief Deputy Superintendent Anuar Bakri Abdul Salam to a local reporter, but the local versions of the story do not maintain it was the teen’s earphones specifically that caused the electrocution death.”
The website’s editors then go on to say (emphasis mine):
“In regards to this story, we consulted a half-dozen emergency room (ER) physicians (members of the advocacy organization American College of Emergency Physicians, or ACEP), and none of them considered it likely that a person could die from an electrical shock received via earphones plugged into a cell phone, although some of them said more details about the incident were needed.
[...]
Los Angeles-area ER doctor Patrick Cichon said that amperage (“current flow”) is more lethal than voltage (“current potential”), but both amperage and voltage are below lethal thresholds in typical earphones.
[...]
Detroit physician Brad Uren told us he had seen the story in the wild before we contacted him, and he doubted the veracity of the reporting from get-go. For such a death to occur, he averred, it would require a number of unlikely factors to coalesce into a deadly worst-case scenario.”
In other words, the voltage running through a set of headphones is so low that a fatal shock from them is extremely unlikely. The only way for it to happen would be if a “perfect storm” of rare conditions were to occur.
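To put some rough numbers behind that, here’s a quick Ohm’s law sanity check. These are my own back-of-the-envelope figures, not anything from Snopes or the ER doctors quoted above, so treat every value as an illustrative assumption only:

```python
# Back-of-the-envelope Ohm's law check: I = V / R.
# Every figure below is an assumed, rough value for illustration only.

HEADPHONE_VOLTAGE_V = 1.0            # headphone-jack output is on the order of a volt
DRY_SKIN_RESISTANCE_OHMS = 100_000   # dry human skin, very roughly
LETHAL_CURRENT_A = 0.1               # ~100 mA is commonly cited as potentially lethal

# Current that headphone-level voltage could push through dry skin:
current_a = HEADPHONE_VOLTAGE_V / DRY_SKIN_RESISTANCE_OHMS

print(f"Current through skin: {current_a * 1000:.3f} mA")
print(f"Fraction of lethal threshold: {current_a / LETHAL_CURRENT_A:.6f}")
```

Even if you fudge those assumed numbers by an order of magnitude in either direction, the current stays thousands of times below the danger zone. That’s the “perfect storm” point: normal headphone audio just doesn’t carry lethal energy, so something upstream (a charger, house wiring, a power surge) has to supply it.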
That said, an autopsy did confirm the Malaysian teen died as a result of electrocution. It did not say the headphones themselves were at fault.
There is only one other thing outside of the district police chief’s comment that suggests that the headphones might be to blame: Photos accompanying both New Straits Times and Yahoo News’ coverage of the story.
I won’t post the photos (actually, NST itself censors them), but in them, you can see one side of the poor kid’s head in the background, blood pouring out of his ear while the gloved hand of who I assume to be a medical professional holds a white earphone. On the outside casing of the earphone, below the cushioned end, is a black mark. But, to play Devil’s advocate here, that could as easily be the result of electricity charring the casing as of the general wear and tear that comes with regular use.
Revisiting the cases that Vice shared in their coverage of the story, it wasn’t hard to pick up on something interesting and obvious that occurred in most of them: The headphones were plugged into smartphones that were also charging. In those cases, the phones were charging through non-original chargers, were plugged in when the power surged, or were connected to outlets that might have had suspect wiring.
Although I can’t find anything that confirms it, I wonder if that wasn’t what actually happened with the poor kid in Malaysia. Maybe he was using a cheap, third-party charger with his smartphone that wasn't built to the specifications of the original. Maybe his house had some faulty wiring (it’s really not that uncommon, especially with older homes, even here in the States). Maybe the power suddenly surged and unfortunately found its way to his phone’s charger and then on up from there to him. The autopsy would seem to agree with this being the cause far more than it would with the headphones and smartphone alone being the issue. In this case, really, the headphones just happened to be at the end of the chain that led to his unfortunate demise.
Footnote: For what it’s worth, for those of you curious, I do use headphones on a regular basis, when I’m listening to music and especially if I’m trying to record music. I’m a fan of AKG cans for recording, and I try to use them for general listening, too. The rare occasion is when I might be using headphones in combination with my smartphone, an iPhone that lacks a headphone jack. Rather than lose the dinky little 3.5mm adapter dongle that shipped with the phone, I’ll use wireless headphones that operate via Bluetooth. The wireless Bluetooth headphones I use were made by Philips and, eh, they’re not bad, all things considered. Fingers crossed, I don’t think I’ll find myself shocked to death anytime soon from using them. That said, most everything I plug my headphones into is also always plugged into a surge protector, sooooo...
Sources: New Straits Times, Yahoo News, Teen Vogue, Snopes, Vice
Lede Image via Pexels.com
Without Surprise, Brands and Retailers Still Should Focus On Courting Millennial Customers

Ready to read something that will absolutely, positively, certainly, definitely shake you right to your very core? In the world of low prices and cheap imported goods that is retail, it turns out that brands and retailers still need to figure out ways to appeal to the demographic that has never failed to be a buzzword for roughly the last 10 years and change.
What’s that? No, it isn’t Generation Z. Of course, I’m talkin’ ‘bout my generation here — Millennials.
All hyperbolic sarcasm aside, it’s actually a downer that retailers still need to think of strategies on how to best stick their hands down our pockets. If you hit “read more” (which you were going to do anyway, right?), I’ll tell you why.
Oh... you actually followed through the jump?
Well I’ll be durned. On with it then, I s’pose.
So, first, a little recap: Back on October 4th, the blog for Multichannel Merchant ran a piece written by Ingenico Group’s director for go-to-market strategies, Mark Bunney. In the piece, titled “Millennials, Not Gen Z, Are Still Defining the Future of Consumer Loyalty,” Bunney discussed ways in which brands and retailers might try to appeal to Millennials going forward, even as they’re trying to focus on Generation Z.
That’s all well and good, I guess, but there’s something Bunney specifically said in the opening paragraph of the article that certainly raised an eyebrow. And I quote-eth (emphasis in bold-eth mine):
Gen Z has dominated headlines lately as brands work through how to market to these emerging buyers. But why are we focused so heavily on these new consumers when we’ve barely started feeling comfortable with millennials? Looking ahead is important, but in the case of Gen Z, brands are just chasing the shiny new object. Millennials now have the buying power to cause seismic sales shifts for brands and, most importantly, have changed how we’ll understand loyalty forever.
In other words, Bunney is trying to tell brands and retailers to slow their roll a bit. A significant percent of Gen-Z’ers might be finally coming of age, yeah, and you’re going to want to reach out to them and try and make them life-long customers, sure. But us filthy Gen-Y’ers finally have us all some of that there “buying power” (industry-speak for credit and cash) now and we’re gonna wanna use it.
Bunney then went on to write somewhere around 600 words that just, well, made me feel icky about myself. And, you know, I really wanted to use this space to rant a bit about that. I wanted to rant about how I felt he was grossly over-generalizing how a generation of around 95 million people likes to shop, and trot out all sorts of articles from around the webs that would make him look silly.
But... I can’t.
It’s not because I don’t think I can find anything to the contrary of what he wrote. Nope. It’s because I really can’t move beyond what he wrote in his opening paragraph.
Bunney, if you’ll recall, works for Ingenico Group. Ingenico Group is a pretty significant company in the world of retail, in case you didn’t know. You know those machines, those “readers” you swipe and/or insert your debit and/or credit card into at the store? Yeah, they make those (aggravating piles of crap you’re forced to use when a store fails to realize it’s almost 2020 and we have this thing called “Apple Pay” and “Android Pay” now that works from the darn smartphone we carry everywhere with us, and just getting on board with that would save everyone so much headache when the card reader takes a big dirt nap or when you’ve had to spend the extra spot of cash you keep on you). Bunney works for them.
Bunney is a big wig at a big name in the world of retail then, to put it another way. And as much as I think I could be the David to Mr. Bunney’s Goliath and challenge, if not slay, his notions of how I like to shop and buy the junk I buy, I know there's no challenging anything he wrote in that opening paragraph. If anything, that opening paragraph is probably one of the single most depressing things I’ve encountered all week, and I’ve been binge watching OG Unsolved Mysteries, which is sadder than Jason Alexander at the Emmys when it isn't busy being max-creepy.
And, no, it isn’t depressing because I can’t argue with what he wrote. I mean, that’s a bummer, sure, but no.
The youngest Millennial is around 25 years old, from what I gather. More and more of us are actually, truly getting old. And yet now, only now, does someone in the retail industry recognize the fact that we have some actual “buying power.”
Between spending what should have been the brightest days of our youth in the midst of the Great Recession’s darkness, and going through college accumulating enough student loan debt that roughly equals half a stinkin’ mortgage, is it any wonder that only now do some of us have some credit and cash to spend; now that we’re so old that retailers think it’s time they moved on to attracting the generation after us and need one of their own to remind them to slow down and go back to us? Really, you know, I’m surprised more than — I dunno? — maybe two or three of us apparently have a significant amount of money lying around somewhere now, let alone actually want to spend it.
Getting old sucks, as they say. Being a Millennial and getting old, though? That sucks even worse. Here’s another reason why, as if we needed one.
Sources: Multichannel Merchant, Kasasa.com, Ingenico.com, US News
Image from Forbes, defaced with an Apple Pencil by yours truly
Default Autoplay On Mobile YouTube Apps Was Hardly Innovative, Mostly Annoying, Possibly Evil, Completely Money-Driven
Last December, YouTube proudly announced to the world that it was going to allow a feature from its paid Premium subscription service to wander its way down to lowly free users.
Was it finally going to extend background playback to free mobile users? Every free YouTube user everywhere on Earth was begging and pleading for that Premium feature. Oh how wonderful it would be to play YouTube’s vast library of music in the background using its mobile app while reading or surfing the web on the go. Spotify extends that luxury to its free users, albeit with ads, so why couldn’t bigger and more profitable YouTube do something similar?
Of course, they didn’t do that. Nope; YouTube, and parent company Google, decided instead to allow free mobile users to indulge in another Premium feature: Autoplay on Home.
Free YouTube mobile users who wanted background playback collectively reacted like this:
So, what is Autoplay on Home? Autoplay on Home is a feature where videos that appear on the Home tab of the YouTube mobile app silently and automagically begin playing by default when the app is opened.
The intent behind the feature appears to be rather good. With autoplay, as YouTube itself likes to point out, users are supposedly given a means to preview suggested videos and decide whether or not they’re worth viewing based on what they see from that preview.
Premium users, YouTube claimed, loved Autoplay on Home. And indeed, when you scrolled down and browsed through the comments section of YouTube’s video officially announcing the feature trickling down to free users, there were plenty of comments left by those who seemed pleased by the decision.
“Such an awesome feature,” wrote user Mishovy.
“I love this feature. There are many times I [would] like to see a preview of what [is] coming. Nice job,” said another user, Michael Daniels.
Other commenters around the internet, however, weren’t so hyped about the change.
“Autoplay is the bane of my existence,” commenter MysteryMii wrote on The Verge’s article about the YouTube feature.
“Holy s*** I hate autoplay,” wrote another, Verge reader Seigmoraig.
“Google still not caring about anyone’s battery life I see,” user MaDBoOmAh added.
The comments section of YouTube’s announcement video also found its share of haters leaving their thoughts for all the world to read.
“I hate autoplay,” YouTube user Happy Undertaker wrote.
TheChipmunk2008 also chimed in with: “Remove this. It's evil. Bastards.”
YouTube user Jack Gordon left this comment on the video: “Why can't I turn this off? I don't like home screen previews and now I am forced to use them. Is there any reason for this option being removed? I don't want to waste data on a bunch of videos I'm not even watching nor do I want them in my watch history.”
From what I gather plowing through online comments all over the interwebs, user appreciation for videos that autoplay isn’t exactly greater than user disgust. And I totally get that.
When I’m on a website or a platform where every video automatically turns itself on — and I’m bombarded with a constant stream of moving images and occasional noise (they’re not always on mute by default) — the first thing I want to do is start shutting things down and up, as soon as I can, if not leave and go somewhere else entirely.
The criticism expressed in the comments above that it burns up mobile data and, therefore, battery life is also quite valid.
Speaking from experience, I know that in the past I’ve gone well past my smartphone’s data plan simply watching videos on YouTube and Facebook when there wasn’t a WiFi network around to join. And even with a relatively healthy battery, I’ve seen my battery life drop like a stone thrown out the window of a 120-story skyscraper doing those activities.
Having videos automatically play? Yeah, just go on and sign me up for Verizon annoying the crap out of me with texts saying “You have exceeded your plan’s monthly data limit. Purchase more, blah, blah, blah,” while my battery forever stays in Low Power Mode, forever plugged into a wall charging.
That all said, YouTube does allow mobile users who wish to conserve data and battery life to turn the autoplay feature off. You dig through your account settings a bit, find the Autoplay on Home section and choose how the feature should work, if at all.
While doing some research for this post, I also noticed something interesting that only seems to confirm, for me, the true popularity of autoplay on videos.
A simple Google search for “YouTube Autoplay Videos” and “YouTube Mobile Autoplay On Home” turned up quite a number of articles containing instructions for how to disable the autoplay feature, far more than news coverage about the feature, anyway. And while most of the instructions on disabling it were from tech outlets like CNET, one was actually from AdWeek.
I found that interesting, that an outlet like AdWeek would publish instructions on how to disable something that you’d expect its core audience to appreciate. Then I stumbled upon an article from The New York Times that seemed to shed light on why advertisers might not be so hot about autoplay.
Brian X. Chen, who wrote the article, spoke with the chief executive of Simulmedia, Dave Morgan. In the article, Morgan basically told Chen that autoplay is a huge reason why video advertising has become a poor experience for users. A poor user experience obviously doesn’t spell out great things usually. “I think we’ve ended up in a really crappy user experience right now with video advertising. Video has been pushed into every user experience whether or not it fits, because it’s a way to make more money,” Morgan said.
If autoplay is so crappy to the point where not even advertisers seem to think much of it, why isn’t it going away then? Well, Morgan flat out told us one major reason why at the end of that quote: It makes... wait for it...
MONEY.
Little surprise there, right?
Exactly how much more money though? Speaking to Chen, Morgan estimated that video ads generate 20 to 50 times more revenue than display ads. And, of course, as Morgan also pointed out to Chen, the best (and likely easiest) way to cash in was to make the videos play automatically.
So one reason why YouTube was so bullish about Autoplay on Home becoming a regular feature for all users is pretty clear, then. It really wasn’t about giving users some sort of means to screen content. Rather simply, it helps earn additional money for the platform.
Really, the fact Autoplay on Home was originally a feature only for paying Premium users is a slap in the face to anyone who was shelling out $11.99 a month for that service. I guess making nearly $12 a month from a subscription service just wasn’t enough dough, so YouTube had to lather up and slap a frothy coat of autoplaying videos on its paying Premium users, too.
It would also seem that whether or not a user is going to view an autoplaying video is less relevant to that video making money. This might seem like a good thing for YouTube content creators, and I suppose it is, but once again it doesn’t spell out anything good for users. Rather, it seems that this could allow for subpar and questionable content that doesn’t deserve to generate money the opportunity to do so and proliferate on the platform.
It should be pointed out that YouTube is hardly the first platform to employ autoplay for videos. Facebook — specifically the Facebook Watch end of things, which aspires to compete with YouTube and other social media platforms that center around user created video content — and Twitter were using autoplay on videos uploaded to their service long before YouTube decided it was something the whole world wanted and needed, and then various websites were using autoplay on video ads long before that.
In the future, we should expect that more platforms decide to switch on autoplay for all their videos, despite its seemingly overwhelming unpopularity.
In his article for NYT, Chen also spoke to Taylor Wiegert, who works as a director of user experience strategy for The Martin Agency. The ad agency director explained to him why autoplay is becoming more common, saying “tech platforms [... like] autoplay videos [...] because they [are] effective at getting people to stick around on their sites.”
It turns out, as unpopular as it is, autoplay is very effective at keeping users glued to whatever platform or website they’re visiting. Again, one more reason why it isn’t going anywhere, as bad as we might want it to go straight to a lake of fire and fry. So how is something so hated so good at keeping people engaged?
In a 2016 article written for Medium’s Thrive Global sub-site, executive director for the Center for Humane Technology and former Google Design Ethicist Tristan Harris described 10 different ways that technology and web design are engineered to, as he puts it, hijack your mind. Autoplaying videos is one of those hijacks he mentions, comparing it to a “bottomless” bowl of soup that automatically refills when empty.
“Another way to hijack people is to keep them consuming things, even when they aren’t hungry anymore,” Harris wrote. “Tech companies exploit the same principle[... This is] why video and social media sites like Netflix, YouTube or Facebook autoplay the next video after a countdown instead of waiting for you to make a conscious choice (in case you won’t). A huge portion of traffic on these websites is driven by autoplaying the next thing.”
This is an example of what some refer to as a “dark pattern” in web design. These dark patterns are designed to manipulate and even deceive users through taking advantage of how they behave. And as Owen Williams pointed out in an article for TheNextWeb.com, there are hundreds of these dark patterns beyond autoplay, and they all work “extremely well.”
Here we can see another reason why YouTube wanted Autoplay on Home for all: To keep people stuck there, consuming content, as endlessly as possible.
Autoplay — if I can be so blunt — really sucks. It also seems TheChipmunk2008 might have been right in the YouTube comment they had left, after all: Autoplay is also kind of evil. But, you know, I can think of one way I can tolerate it.
Although I’d rather YouTube ditch the Autoplay on Home crap on its mobile apps, I do wish it was employed on its desktop site, with a catch, mind you. Let’s say I’m on the desktop version and I hover my cursor over the thumbnail of a certain video (and only one certain video! Not a bunch, just one!). If after five seconds I got an autoplaying preview of that video, then I could agree with YouTube and say that, for once, I actually had a feature that would allow me to pre-screen content on its platform.
But, you know what else? I expect them to do that as soon as they decide to make background playback available to everyone.
Sources: AdWeek, Medium (Thrive Global), The New York Times, TheNextWeb.com, TechCrunch.com, YouTube, The Verge
Digital Filmmaking Is The Way To Go, But The Debate Has Hardly Been “Put To Rest”

Digitalization disrupts and changes almost every industry it touches. Just last week, we read about wifi-powered smartphones shaking up the world of journalism (per the usual) and usurping free newspaper tabloids. (For D.C. Metro passengers... Yeah, who could’ve seen that actually lasting into 2020, let alone 2030, with or without smartphones?)
When digitalization disrupts an industry, it also transforms unassuming industry lifers into grumpy old men and women. This was also evidenced in last week’s ramblings, with Washington Post columnist John Kelly’s piece serving as proof.
So imagine my surprise when I read Charles Matthau’s article for Wired — titled “How Tech Has Shaped Filmmaking: The Film Vs. Digital Debate Is Put To Rest” — only to discover that, unlike the characters his father Walter played in his later years, he isn’t a grumpy old man.
Rather the opposite, he seems to have no issue with evangelizing about the wonders of technology for the film industry. As the title outright states, he even goes so far as to say the debate has been closed when it comes to traditional filmmaking methods versus digital approaches.
Matthau isn’t necessarily wrong — and of course, I do mostly agree with what he writes — but his column hardly ends any debate.
For all two of you who followed through the jump, I’m not going to spend as much time on this week’s post as I usually do. (Yeah... Didn’t think you’d be disappointed, either.) Main reason for that: I don’t have that much to disagree with when it comes to what Matthau is saying in his Wired column.
Matthau points out that digital filmmaking is cheaper than traditional methods, easier to edit and distribute, and allows for more reliable preservation and storage. That’s all pretty much true.
Take cost into consideration. In the past, a filmmaker could spend thousands upon thousands upon thousands of dollars on cameras and lenses alone. Today, you can still do that if you wish; a single IMAX camera reportedly costs around $16,000 a week just to rent and $500,000 to buy.
But with the right skills and know how, it’s more than possible to shoot a feature length film using nothing more than the smartphone in your pocket. Director Steven Soderbergh, for example, shot Unsane using nothing more than an iPhone 7 Plus.
As more food for thought: The movie debuted last year, and by that time the 7 Plus was roughly a two-year old design, having also been replaced by the iPhone 8 Plus and iPhone X in 2017 (and then the iPhone XR, XS and XS Max replaced those the year after). This proves you also don’t need the newest and bestest-ever smartphone to complete the task.
Was it the best looking film of all time? Well, I mean, not necessarily. But it also didn’t look bad, either. Had Soderbergh kept quiet about what he used to film the movie with, I honestly wouldn’t have guessed a smartphone did the heavy lifting.
Matthau also writes that it’s easier to shoot with digital means than traditional, and I suppose that may be truer for amateur and intermediate-level filmmakers. More experienced filmmakers, I would say, can ultimately work with digital means just as easily as with traditional methods; it boils down to whichever stays within the budget they’re working with.
What I wish Matthau would have pointed out is this: Digitalization is also democratization. What I mean by that is, technology makes it easier for anyone with the desire and willpower to take on a project to complete that project.
Specifically, when it comes to filmmaking, anyone with the desire to make a movie can do so on the thinnest of a budget and actually have something presentable when it’s all said and done. Again, remember that the smartphone in your pocket has the capability to shoot a feature-length film as long as you figure out how to use it in that way.
So why didn’t Matthau’s column end the debate on digital versus film? One big reason: His article for Wired was “partnered content.” This alone drew ire and skepticism from rabid luddites in the small and hidden comments section on Wired’s website.
One commenter wrote: “This author has probably been paid by the digital camera makers to write an article full of only holes and complete lack of understanding of the business.” Uh, OK? You realize that the person that wrote this is an actual filmmaker and the son of a well-known actor, right? But I feel this also just goes to show just how much readers trust and approve of “partnered content.”
Really, though, if this article had been written by the most esteemed independent filmmaker on the planet, and it wasn’t “partnered content,” these folks still likely wouldn’t be convinced. And I guess that’s OK. If they want to spend more money and time making their films to some older and romanticized standard, and want to be burdened with the aggravation that comes along with it, I say let ‘em be a bunch of grumpy old men and women.
Sources: Wired, Film Connection, Premium Beat, The Verge
Lede Image: Vancouver Film School via Flickr
Hey, WaPo Express, Maybe Y’all Should’ve Picked Up Your Stinkin’ Phones, Too

Perhaps bidding the world the most bitter of adieus, The Washington Post’s free daily tabloid — the Express — ran the following headline last week on its final cover: “Hope you enjoy your stinkin’ phones.”
Those who chose to set their stinkin’ phone down, pick up a copy of the final Express and turn to the Eye Openers section of the publication were also met with the following:
“In news that scandalized a nation, The Washington Post Express abruptly shut down Thursday, citing falling readership and insufficient revenue. Apparently everyone riding the D.C. Metro now looks at their phones instead of reading print newspapers. Express editors will miss the newspaper and its readers very much. It has been a pleasure and an honor to provide commuters with this daily dose of odd news.”
Uh, wow? Someone needs to order up a large fry, stat, for the WaPo Express crew. (Or, rather, its survivors, considering.) They’re going to need something else to pour their giant mountain of salt over besides miserable D.C. subway passengers.
But honestly, really, it all raises the question: Why didn’t they get with the program, pick up their stinkin’ phones and start doing things the 2019 way? I can only think of one reason why.
So let’s get this out of the way first: Although I once might have aspired to work in journalism, and maintain a great respect for the profession, I also find I’m easily annoyed by publications (and their teams) that choose to whine and complain about Lord Internet, The Great Disruptor and his buddy Prophet Messiah Stinkin’ Smartphone, instead of choosing to adapt to them.
It shouldn't come as any shock, then, that I don’t have any sympathy for the Express’s moaning and bellyaching. My pity well is bone dry.
The simple fact of the matter is this: The Express had a chance to go digital. It could have decided to relaunch as a blog (which — hey! — can be managed from a stinkin’ phone) or at least grabbed its own section on the Post’s website. Instead, it appears it chose to bitterly and foolishly be a print-only affair.
It’s currently September 2019. We’re staring 2020 dead in the eyes now. Love it or hate it, the internet has been around forever now and is here to stay (barring some catastrophic nuclear war or something). Surely the team of journalists, writers and editors working at the Express could see the writing on the wall a great distance away before the decision was made to shut things down.
Why they didn’t choose to transition into another format almost baffles me, almost. See, I also think I might know the reason why they foolishly chose to hang on to their print-only format until their ship sank into the icy Atlantic beyond a penchant for troglodytism. (That’s a fancy way of saying, “Holdin’ on to them good ole days when folks got their news from buying a paper for a whole got-danged nickel.”) If you’ve read this blog before, you should also know by now the reason I’m about to suggest, so get ready for it...
Money.
To explain, I think that at some point the Express indeed knew it had a choice to make when it came to the matter of digitizing itself. And so, like any publication that’s part of a bigger one, it had one of its accounting bureaucrats sit down and crunch the numbers for doing so. What that accountant came back with probably wasn’t promising.
The accountant’s findings, I imagine, would have pointed out that ad revenue for an online-only Express probably would have suffered compared to the print version. Even though readership would have been higher, online ad space goes for less than print ad space. (The Houston Chronicle, for example, points out that one print ad costs around $100 while online ad space can be purchased for half of that per month.) And because online ads charge per click, their reach can be highly variable, so advertisers tend to see them as less valuable than print. (According to Stanley Baran on page 87 of Introduction to Mass Communication: Media Literacy and Culture, anyway.)
It all spells out this: A hypothetical online-only Express could have been, in theory, not much more or maybe even less profitable than the print version.
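Just to put some numbers behind that theory, here’s a quick back-of-the-napkin sketch. The $100-per-print-ad and $50-per-month-online prices echo the Houston Chronicle figures cited above; the ad counts and issue counts are pure guesses on my part, invented only for illustration.

```python
# Back-of-the-napkin comparison of hypothetical monthly ad revenue for a
# print vs. online-only Express. Prices echo the Houston Chronicle figures
# cited above ($100 per print ad, $50 per month for an online slot); the
# ad counts and issue counts are made up for illustration.

def monthly_revenue(slots, price_per_slot):
    """Flat-rate revenue: number of ad slots sold times price per slot."""
    return slots * price_per_slot

# Print: say 20 ads per issue across 21 weekday issues in a month.
print_revenue = monthly_revenue(20 * 21, 100)   # $42,000

# Online: the same 20 ad slots, but sold at a flat monthly rate.
online_revenue = monthly_revenue(20, 50)        # $1,000

print(f"Print:  ${print_revenue:,}")
print(f"Online: ${online_revenue:,}")
```

Obviously real ad sales are messier than a flat rate (especially with per-click online pricing), but even toy numbers show how lopsided the comparison could have looked to an accountant.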
Now, to be clear, I don’t have a source that proves my little theory beyond the basic facts I shared on how online advertising works versus print advertising. (Advertising, as we all know, is how an outlet keeps the lights on and its staff paid.) But, still, everything considered, I can see that scenario unfolding so easily that I’d be surprised to hear it didn’t happen.
Moving on, while I see how a WaPo Express blog could have been less profitable, I still say it would have been better long-term than staying print-only and going the way of free tabloid newspapers for D.C. Metro riders.
Also, you know, considering how the Express seemed to have a taste for snark going by that last cover, maybe it could have repositioned itself as a cooler alternative to the WaPo and expanded beyond D.C. Metro riders to appeal to younger readers. If successful, that could also have helped its finances. In the end, I still say it had a choice to go digital and have a future.
John Kelly, in a recent op-ed for WaPo, also wrote about the Express’s demise where he whined and moaned about smartphones taking the place of newspapers. Yeah. Know what I gotta say to that?

I mean, if you want me to be brutally honest, that entire piece didn't have much more substance than one of those “back in my day, we drank from a hose and didn’t wear seatbelts” posts you see your Uncle Darryl sharing on Facebook. All I could hear was someone in their late 50s* complaining about how they thought new technology was ruining the world by making things more convenient, and that grates on my nerves just as much as a newspaper crying, well, about the same thing, really.
(*Kelly claims on his Facebook page to have graduated college in 1984, so assuming he was in his early 20s when he graduated, that means he was born sometime in the early 1960s. Anyway, moving on.)
Thinking about it, I’m also sure that if you asked John Kelly to take his phone out of his pocket, it probably wouldn’t be the dumbphone he mentions having about a decade ago in the op-ed. You know, I’d almost bet good money it would be a stinkin’ smartphone.
Sources: The Washington Post, The Washington Post (again), The Houston Chronicle, Twitter, Introduction to Mass Communication: Media Literacy and Culture by Stanley J. Baran.
Lede Image Credit: The Washington Post (...again)
0 notes
Text
The Fellas at the Freakin’ FCC Strike Again: First The Internet, Now Children’s Television

You know, these days especially, it seems that the list of reasons not to like the Federal Communications Commission far outweighs any list of reasons you could come up with in its favor.
Yeah, sure, it might have just earmarked $4.9 billion to be spent on improving broadband internet access in rural areas. But that doesn’t absolve the FCC of the sins it has committed under the stewardship of its current chairman, and one of the most punchable faces in all of human history, Ajit Pai (seen above intolerably drinking out of an oversized mug. No. It’s not funny, Pai. Stop. Just stop. You need to stop it. Get some help. Also, I’m not eating Reese’s Cups for a while now, either, so Hershey’s, you can thank the FCC for that.).
In the last few years, the FCC has dealt a horrible blow to net neutrality laws (it also admitted that it lied to the public about being hacked when it began collecting comments about said affront to net neutrality) and, over the summer, decided to roll back regulations on children’s educational and informational programming.
Make no mistake, what the FCC has done here does nothing to benefit poor, dirty commoners like you and me. So who is benefitting, then?
That’s obvious. The beneficiaries are — who else? — big corporations with their bank accounts full o’ beaucoup-bucks.
When the FCC officially rolled back net neutrality regs in the summer of 2018, it was claiming that keeping them was “costly.” Ridding the US of them, it said, would encourage internet service providers to invest money in upgraded equipment, which would in turn increase internet speeds and make the internet here a better thing.
Except, yeah, no. Getting rid of the regs, as it turns out, has done little to encourage ISPs into investing money into better equipment and improving infrastructure. Actually, most of the investing in those areas isn’t coming from ISPs, but the FCC itself (like those billions of dollars mentioned earlier).
In reality, ISPs felt that neutrality laws were too restrictive, specifically when it came down to how they could make another buck. So they all did what a big business does best when it decides it doesn’t like the law: Wave a load of money around and have the law changed.
Why? Well, with net neutrality laws on the books, it meant that ISPs had to treat access to the internet equally. ISPs couldn’t charge higher prices for higher speeds, for starters.
To be even more specific, it also meant that an ISP like Comcast couldn’t speed up or prioritize access to an internet-based service that it owns, like Hulu, while also slowing down or even blocking access to competing services, like Netflix. It also meant that they couldn’t charge you a fee to restore access to them.
Therefore, the FCC’s regulations were too restrictive when it came to whatever backhanded method an ISP could think of to make a profit. They had to go.
And if you were wondering: Yes, since the repeal, ISPs have been throttling access speeds in the name of higher profits. I’m sure there’s a fire department in Santa Clara, California that can tell you all about what Verizon decided to do while it was fighting life-threatening wildfires around this time last year. This really is the slow beginning to a painful new era for internet access.
So what’s the motivating factor behind the changes the FCC made to regulations regarding broadcasters and children’s programming? If you guessed “profits,” I bet it was because you could smell all of the dirty money being waved around. I mean, how could you not?
Former regulations passed in the 1990s required broadcasters to air at least three hours of children’s programming a week, between 7 am and 10 pm. Those requirements are just as modest today as they were when they were passed.
Broadcasters, however, complained that these regulations weren’t flexible enough in today’s era of cable, satellite and streaming content services. Specifically, the National Association of Broadcasters, in its filing to the FCC, said the rules were “mired in [a] bygone era of appointment viewing.”
The FCC, duly willing to serve America’s corporate elite, changed the regs. Broadcasters still have to air three hours of children’s programming a week, but they can start an hour earlier now, beginning at 6 am.
Yeah. I’ll ask pretty much the same question Mark Pattinson asked in his piece that Angelus News ran last month: What kid has ever been up that early just to watch television?
Really, the only people who are actually up that early watching television are the same people who probably don’t go to bed until 11 in the morning because they stay up all night wired on a mixture of various substances illicit and not, White Claw, and Mountain Dew brand energy drinks.
Also, the new regulations allow for broadcasters to shift some of the weekly requirement of children’s programming between multiple channels. This sounds like it might not be a big deal until you stop and think about it for a moment.
As Mark Pattinson also pointed out, cable, satellite and streaming services may not carry any of the other channels offered by a broadcaster. Therefore, access to the content on those channels really isn’t possible through those means. Really the only way to receive these channels, such as MeTV and Comet, is through the purchase of a digital TV antenna. (And there’s no guarantee digital antenna users in rural areas will get those channels, either, which opens up another can of worms.)
Clearly, then, the changes aren’t to benefit the public. But broadcasters? Yeah, they will benefit. Freeing up the already modest regs on children’s programming means that broadcasters can, indeed, air other programming that yields greater ad revenue.
Broadcasters argue that they need the additional ad revenue in order to help them remain financially sound in a time where other services and platforms make it tougher for them to earn money. That argument is pretty far from the truth.
CBS, for example, found itself reporting in May that its total revenue rose 11 percent in the first fiscal quarter of the year, and that its ad prices would be increasing starting this fall. Remember: This was in May, during the era of streaming services, and before the FCC lightened regulations on children's programming. The weaker regs and higher prices on ad space mean that this particular broadcaster will likely be bringing in extra cash at the end of the year.
Tomorrow’s higher profits, then, came only at the expense of America’s tomorrow: its children. Let’s all give thanks to the freakin’ FCC.
Sources: Bloomberg/Getty Images (Lede Image), Angelus News, ARS Technica, Bloomberg, Capital Press, CNBC, Cnet, Digital Trends, Tech Crunch, The Washington Post, The Week
Bloggers Note: I’m going to use the above method of listing my sources until I’m told it’s a bad idea or something. When I list them in a post itself, the links tend to get lost because Tumblr doesn’t underline them for some reason. (Maybe the reason is me? I dunno. Anyway, yeah.)
0 notes
Text
The Diffusion of Innovation Theory in Higher Education: Misused? Misunderstood?
Rev up your time machines and set those flux capacitors to observe the date of Friday, June 29, 2007 — the day when, by the Monday that followed, at least 250,000 people had lined up and forked over about $600 (before taxes and fees, and without adjusting for inflation) to own the first-ever iPhone.
Those 250,000 people who purchased the first iPhones over 12 years ago are what’s known as “early adopters.” Surely, you’ve encountered this term before. Coined by communications researcher Everett M. Rogers in his 1962 book “Diffusion of Innovations,” the definition of the term “early adopter” is (mostly, maybe?) self-explanatory. Located in the second sector of the bell curve (seen above) used to illustrate the rate of adoption in the diffusion of innovation theory, early adopters are people who, according to Jennifer Meadows in Communication Technology Update and Fundamentals, “adopt [new ideas and technology] just before the average person” and after innovators.
Before early adopters are the innovators, who are the first to take on and accept the risk of failure and adopt new ideas and tech. (In the iPhone’s case, the innovator was Apple.) Following early adopters are the categories of early majority, late majority and laggards. These groups are, in reality, ways of describing the various degrees to which the general public adopts new ideas and tech. (This further explanation of diffusion theory is based around, once again, information written by Jennifer Meadows in Communication Technology Update and Fundamentals. Just trying to show sources and give credit here without going full-on MLA, because this is a blog and ain’t nobody got time for that.)
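For the curious, the standard cutoffs on that bell curve fall at one and two standard deviations from the mean adoption time (innovators are the first 2.5% to adopt, early adopters the next 13.5%, then early majority and late majority at 34% each, and laggards the final 16%), which is easy enough to sketch in code. The function name and the example years below are my own invention, not anything from Meadows or Rogers:

```python
# A sketch of Rogers' adopter categories, using the standard cutoffs on
# the diffusion bell curve: innovators = first 2.5% (more than two
# standard deviations before the mean adoption time), early adopters =
# next 13.5%, early majority = 34%, late majority = 34%, laggards = 16%.

def adopter_category(adoption_time, mean, sd):
    """Classify an adopter by when they adopted relative to the population."""
    if adoption_time < mean - 2 * sd:
        return "innovator"
    elif adoption_time < mean - sd:
        return "early adopter"
    elif adoption_time < mean:
        return "early majority"
    elif adoption_time < mean + sd:
        return "late majority"
    else:
        return "laggard"

# Toy example: pretend smartphone adoption peaked around 2012 with a
# standard deviation of 2 years. Someone who bought in 2007 lands in
# the innovator slice; a 2016 holdout is a laggard.
print(adopter_category(2007, 2012, 2))  # innovator
print(adopter_category(2016, 2012, 2))  # laggard
```

(Those 2007 iPhone buyers, of course, would really be classified against the population of iPhone adopters specifically, but you get the idea.)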
So what does this anecdotal story about the iPhone and early adopters and all of that have to do with the theory and higher education?
Well, little to nothing, I guess. To be honest, after sitting here looking at a blank screen for what felt like forever, I needed something to get the gears turning about this week’s topic, so I spat out those two paragraphs about the iPhone. Then I was going to attempt to connect that story to this week’s topic, but I had yet another and much longer spell of writer’s block, and so here we are and now I only have so much time to finish writing this.
...Crap. Maybe I’ll work it all in near the end. We’ll see.
Anyway, what am I supposed to be on about again? Oh, right: Higher education and the diffusion of innovation theory. More specifically, I’m supposed to be writing in response to an article centered around those two things published back in May by Inside Higher Ed.
Written by Edward J. Maloney and Joshua Kim, the article titled “The Misuse of the Diffusion of Innovation” argued that diffusion theory was constantly being misused, at least, when it came down to the world of higher education. “Could it be that a simple image, divorced from the nuance and complexity of how Rogers actually wrote about how innovations are diffused within organizations, has clouded our thinking about higher education change for the past few decades?” they wrote. “We think so.”
Reading Maloney and Kim’s article, I first had the impression that their belief that diffusion theory is misused was based around a misunderstanding of the theory. It seemed like they were arguing, at first, that the theory had no business being applied to the world of higher ed. I was going to argue that that was obviously wrong, then I gave it another read and realized that I was the one who didn’t understand.
What Maloney and Kim were actually trying to say is that, although the theory certainly applies to higher education, it makes it seem as if there are too few people trying to be innovators and early adopters and more people resisting new ideas and change, making them fall into the latter categories on the diffusion curve. “The laggards in one area of university innovation efforts may be the early adopters in another,” Maloney and Kim wrote. “The same people will fall all over the innovation adoption curve.”
You know, I think they’re really coming from the right place here. Let’s go back and revisit those early adopters of that original iPhone. While they might have been pretty welcoming of the new tech then, I wonder what other forms of new tech or new ideas they might have been resistant towards? Did they flock to the iPhone but resist social media platforms, like Facebook or MySpace (it was 2007, remember)? Things might be different now, and your iPhone or other smartphone might be the number one way for you to access social media today. But back then, I do remember smartphones were seen more as a tool used by business professionals than by laymen like you and me, and social media really was little more than something “young people fool around with.”
(Ha! I worked it in! See? And I bet you doubted me. Of course, I didn’t doubt me.)
In a university setting, those admirable innovators and early adopters can be both professors and students. Maloney and Kim seem to emphasize that the former, rather than the latter, are what mostly makes up those categories in higher ed, and I’m not going to disagree there, either. Mostly, it is going to be professors that will figure out new ways of doing their job, which is mostly teaching (but there are administrative duties, too, that will be subject to the process of change and diffusion theory). Occasionally a student might stumble upon an easier or more efficient way of learning and pass on that observation to their professor. I doubt anyone can really argue that the situation is inverted here.
(Main image credit of diffusion theory curve: Wesley Fryer via Flickr. Other credits are mentioned or linked in this post; just find them. ;))
0 notes
Text
Some Thoughts on the Facebook Journalism Program
Over the course of the summer, whenever I (regrettably) booted up the Facebook app on my iPhone, I noticed that a few particular and rather curious “promoted posts” (Zucc-speak for “ads”) were appearing in my news feed. “Today, Facebook is announcing that we’re launching support for subscriptions in Instant Articles to all eligible publishers,” one began. Another read, “Curious what the six basic questions of storytelling are? Or the seven tips for framing videos and photographs?”
These “promoted posts,” I noticed, were all coming from Facebook’s Journalism Project page. Why Facebook was pushing them out to me, I’m really not sure. Perhaps my area of study in college the last few years made me a target of whatever creepy computer algorithm Facebook uses to push these posts out to users. Or maybe someone with Facebook (like a moderator...? I guess they have those) somehow marked me as a target because they wanted me to know that things were a-changin’ after that crappy-content-reporting spree I went on one time, where I flagged every poorly written bit of clickbait that wandered its trashy way into my feed. Who knows.
The Facebook Journalism Project, if you weren’t already aware, is a program that the social media platform launched in 2017 in an effort to do its part in supporting journalism. Facebook says it’ll do this by working with actual, credible journalists instead of letting computer algorithms alone promote content, and by spending millions of its dollars.
No doubt, at its very root, the Facebook Journalism Project was actually created because of the endless stream of dubious “news articles” that had been polluting the platform since before the last presidential election cycle started. Allowing questionable content to proliferate so easily has done damage to Facebook’s image, and of course Mark Zuckerberg and Co. can’t be having any of that. The Facebook Journalism Project was, yes, also created for other reasons — the social media platform has done its part in making it tougher for outlets that handle real journalism to do their job, and it has been accused of promoting certain media over others — but, at the end of the day, this is damage control with a multi-million dollar sticker price.
If Facebook wants to work with actual journalists, and allow them to promote content — instead of allowing an algorithm to do the work based on a Facebook user’s browsing history and what content they’ve interacted with while logged in — then, by all means, let them do so. Facebook will surely have to pay these journalists, though, which is something they didn’t have to do when they let an algorithm do the work. But I doubt they’ll feel so much as a dull ache in their deep, money-lined pockets, as much as I’m sure they’ll complain.
This isn’t the first time that Facebook has tried to employ actual people to curate the news people read while logged into their platform. The former “trending topics” tab that was done away with last year allowed for human curation, although that effort left some users accusing the social media juggernaut of promoting certain content over others. On the other hand, it has been accused similarly when it lets algorithms do the work, too.
Really, neither approach is perfect; it’s just that one costs money (people) and the other costs less money (algorithms). Given the success record both approaches have had for Facebook, and the fact one costs more than the other, it’s probably only a matter of time before they figure out a way to hand the keys mostly back over to the bots again. But maybe that’s where success lies, by letting man and machine work together...
(This post written in response to: https://www.financialexpress.com/industry/technology/what-does-facebooks-plan-to-hire-journalists-mean-for-media-industry/1683063/)
(References: https://www.cjr.org/the_new_gatekeepers/facebook-journalism.php, https://www.theverge.com/2018/6/1/17417428/facebook-trending-topics-being-removed)
0 notes
Text
After a long hiatus, it looks like I’m back here for the time being...? Old entries are still up, so feel free to cringe at my expense until I scrub them or set them to private or something.
I might migrate over to WordPress. Not sure. Figured I would use an account I had lying around first as opposed to starting over from scratch. Let’s see where this goes. (Hopefully it goes down a road that ends with good grades...? And money. Yeah that.) -- GBN 08/21/19
0 notes
Text
Restaurants are basically gas stations for people.
3K notes
·
View notes
Text
Sorry, Peter Thiel: Gawker Isn’t As Dead As You Might’ve Wanted

Have you heard? Gawker Media — the media organization built by British journalist Nick Denton around a gossip and news blog that, over the course of 14 long years, grew to become the online publication everyone loved to hate — is dead.
The company went bankrupt in June, was sold to Univision at auction just last week and its flagship blog shut down yesterday. All because it made enemies out of two men who would eventually work together to punish the media-org in court: noted doo-rag enthusiast and celebrity wrestler Terry Bollea, aka Hulk Hogan, and Silicon Valley billionaire and practicing vampire Peter Thiel.
...Except Gawker isn’t really all that dead.
First, Some Actually, Quite a Lot of History and Another Version of the Truth... Just Go Grab a Drink or Something, OK?
When former Gawker Media blog Valleywag published a piece remarking on the sexual orientation of PayPal co-founder and Facebook board member Peter Thiel in 2007, it set in motion a chain of events that would eventually culminate in nothing short of total calamity almost a decade later.
In spite of its rather positive tone about his sexual preference and praise for his smart investment in Facebook, the article titled “Peter Thiel is totally gay, people” had the Silicon Valley entrepreneur so enraged he soon began a covert legal war against Gawker Media that, today, has cost him millions upon millions of dollars. Thiel felt that Gawker publishing the piece was a violation of his privacy. Although his sexual orientation was hardly some dark secret guarded by gun, lock and key — it’s said that Thiel had already come out as gay to those around him as early as 2003 — it was still an element of his private life made known to the general public at a time when he didn’t want it to be.
“He was so paranoid that, when I was looking into the story, a year ago, I got a series of messages relaying the destruction that would rain down on me, and various innocent civilians caught in the crossfire, if a story ever ran,” Gawker founder Nick Denton said in 2007 in a comment he left on the Valleywag story about Thiel.
Why was Thiel so adamant about keeping his sexual orientation unknown to the general public?
It’s possible Thiel was worried about possible blowback from investors, especially those in Saudi Arabia who are said to have little to no tolerance for homosexuality, if and when details of his personal life became widely known.
“Max Levchin, your fellow founder at Paypal, told me back in 2007 you were concerned about the reaction, not in Silicon Valley, but among investors in your hedge fund from less tolerant places such as Saudi Arabia,” Denton wrote in an appeal to Thiel back in May of this year. (Levchin was also the same source who previously warned Denton of Thiel’s wrath.)
And so, Thiel secretly bankrolled lawsuit after lawsuit after lawsuit through the following years, offering financial support to almost anyone seeking to take Gawker Media to a court of law.
Eventually, he would fatefully extend his checkbook to Terry Gene Bollea, better known to the world as WWE pro-wrestler Hulk Hogan, when he took Gawker to court over a sex tape of him it published in 2012. It would be the Thiel-backed Bollea v. Gawker Media case that would send the media-org speeding down the road to a world of misery.
Thiel would go on to say in a New York Times interview that he thought of his legal war against Gawker as “one of my greater philanthropic things that I’ve done.”
But why didn’t Thiel personally take Gawker to court over the piece that ran on Valleywag? That’s obvious: he simply didn’t have a case.
His orientation was already out there, just not to the public-at-large. And Gawker didn’t do anything illegal to obtain that bit of information; it wasn’t as if Gawker Media parked one of its Valleywag editors out in front of the bedroom window of Thiel’s Silicon Valley mansion in a folding lawn chair with a camera and a pen and note pad at arm’s length, waiting for him to slip off to bed with another man. No, someone in Silicon Valley had to have let word slip to a Gawker editor that Thiel was gay and then Gawker decided to publish a story mentioning it.
And nor did publishing it constitute anything illegal. Again, his orientation was already an “open secret,” and there’s nothing illegal about writing about such information.
In other words: Thiel couldn’t handle the fact that, as a wealthy and increasingly prominent public figure, his private life had been treated differently from that of an average and broke no-name citizen.
(Was it fair? When is life ever always fair, day in and out, even for someone wealthy, powerful and influential?)
When you face the facts, it’s easy to realize that, if Gawker didn’t publish a story mentioning Thiel’s sexuality, someone else would have. Such is the cost of obtaining money, status and influence: people want to examine both your public and private life with a fine-toothed comb equally in the hopes that they can understand how you managed to climb so high above the muck... or so that they can bring you back down to it. Perhaps Thiel feared that, too.
So, then, how did Hulk Hogan have a case — a winning case with a staggering $140 million award, at that — and Thiel did not? How did Gawker’s publishing of Hogan’s sex tape in 2012 somehow constitute an invasion of privacy when publishing a piece about Thiel’s sexual orientation in 2007, did not?
Both men are public figures, yes. And, like Thiel, Hogan had been publicly open about his own sex life in the past — actually, it could be said Hogan was obnoxiously vociferous in public about his conquests in the bedroom, a stark contrast to Thiel who tried to be as hush-hush as possible about it.
Truthfully: Hogan didn’t have a case against Gawker, either. But it seems his legal team knew how to manipulate the system in their favor, regardless.
When the case first went to a federal court of law in 2012, Gawker essentially claimed that, because Hogan had made his sex life so publicly open, and had previously claimed that he had never slept with the woman in the tape, the sex tape therefore became public interest and was ultimately newsworthy. (The degree of newsworthiness it held is irrelevant, if you were wondering.) Also, having the First Amendment and previous court cases on your side helps, too.
The verdict in federal court was, rather unsurprisingly then, in Gawker’s favor. But that wasn’t going to be the end of it, no way.
By 2013, Hogan’s viciously persistent Thiel-funded lawyers had taken the case out of federal court to a local court in Florida, where Hogan resides. That’s when the tide finally turned against Gawker.
Hogan’s legal counsel had claimed that, because the tape was recorded without his consent or knowledge, publishing it was a violation of his privacy and was therefore punishable by Florida state law. And indeed, Florida law makes anyone caught publicizing unsavory private information about a person liable for damages. (Needless to say, if Florida’s law were ever to be reviewed by the Supreme Court, it would probably be severely scrutinized, especially its use in a case like Hogan’s.)
It also helped that, in Florida, Hogan is revered and beloved by locals because, well, he himself is a local, too. So, be honest: Who do you think the jury would side with? Certainly not the New York-based gossip blog with a checkered reputation for snarl, snark and controversy, and owned by a rather gently-spoken British man with gleaming teeth colored an impossible white. And it certainly didn’t help Gawker’s poor image with the jury when they were presented with a tasteless tongue-in-cheek statement editor A.J. Daulerio had made to prosecutors about sex tapes and four-year-olds.
So of course the lawyers knew the verdict would likely be in favor of Hogan once they had the case in a Florida court with a homegrown jury rooting for a local hero. And the verdict handed down on March 18, 2016 certainly was in Hogan’s favor, no argument there, to the tune of $140 million.
Hogan and Thiel celebrated their victory. Gawker Media and Nick Denton planned on appealing the decision but it couldn’t happen soon enough, so the company filed for Chapter 11 bankruptcy on June 10, 2016 and went up for sale. On August 16, it was sold to Univision for $135 million, who planned on operating its other properties not tied to the Hogan case. On August 18, Gawker.com announced it would be shut down by the following Monday.
A gawking public would find itself split: either it cursed or celebrated Gawker’s punishment and Thiel’s “philanthropy.” Journalists and writers, regardless of how they felt about Gawker’s reputation, wound up finding little to nothing to celebrate and immediately began worrying about what the case could mean for the future of free speech and a free press.
But what happens next? Has Thiel and Hogan’s legal action against Gawker killed the site for good?
Not really.
Thiel Fails, Gawker Prevails
Peter Thiel spent nine years and a known $10 million in his quest to slaughter Gawker from the shadows through the American legal system.
Consider that time and money wasted. Thiel would've been better off if he threw it into a metal barrel and set it on fire. (Or, you know, donated it to individuals far less fortunate than someone like Hulk Hogan.)
Sure, Gawker.com might’ve just joined its spiritual predecessor Spy magazine and a mile-long list of other defunct publications in whatever greasy limbo they wind up in once the money dries up or they lose their ability to say something meaningful. And, sure, it’s unlikely Nick Denton will pursue building another gossip site, not for a long time, if ever. And, yes, Gawker Media has been absorbed into a larger conglomerate and is no longer financially independent.
But step back for a moment and think of the bigger picture.
Gawker had a long 14-year run. During that time it cultivated a whole host of writers and journalists, who are now mostly working for larger mainstream outlets, like the New Yorker or Vanity Fair. When those writers made the jump to those publications, it wouldn’t be a stretch at all to say that they brought a piece of Gawker over with them that, as of right now, is working to influence their tone and content.
For every piece it ran that was beyond the pale — an article it had to take down which outed a Condé Nast executive immediately comes to mind — Gawker also published some truly solid works of journalism during its time. It broke news of Toronto mayor Rob Ford’s drug abuse and even reported on Silk Road, the now-defunct online black marketplace where users could buy any drug they desired with the same ease of purchase Amazon.com users experience when they buy video games, socks and underwear. Those two stories are just the tip of the iceberg, too.
And Gawker Media was more than just Gawker.com. There were six other very popular blogs it owned — Gizmodo, Jalopnik, Jezebel, Deadspin, Kotaku and Lifehacker — not to mention a handful of other properties it had spun off or sold to other media organizations throughout the years, which includes Consumerist. Its six sister publications will live on after its death thanks to Univision’s purchase, and Univision has already said it doesn’t plan on changing the tone those blogs shared with Gawker.
Gawker is leaving behind a huge legacy here that will continue to influence the media long after your web browser directs you to an empty space where the blog used to be. And, on that note, archivists are working to make sure that, if the site does drop entirely out of existence, a backup of the site will remain for curious minds in the future to study.
But Gawker’s second act? Well, Nick Denton himself probably wouldn’t have expected it to happen so soon, but other websites are already looking to fill the void that Gawker.com is leaving behind. (Naturally. There’s money in it, after all.)
It’s only a matter of time before a true replacement — Gawker’s reincarnation, if you will — emerges from the smoke and rubble. Everyone will be waiting with bated breath until then.
Well, everyone except for Peter Thiel.
Lead image credit: Nick DeSantis via Forbes
0 notes
Text
They should make an Uber where 2 drivers come and someone takes your car home.
113 notes
·
View notes
Photo

Remember the Italdesign Scighera? Remember "Need for Speed III: Hot Pursuit" for PC? Yeah. So does my childhood.
0 notes